Researchers reveal AI can create harmful information and manipulate users
AI safeguards can easily be bypassed
(Web Desk) - The new Artificial Intelligence (AI) safety body in the UK has found that AI can deceive users, produce biased results, and lacks sufficient safeguards to prevent harmful information from being shared.
The AI Safety Institute (AISI) released initial findings from its study of advanced AI systems known as large language models (LLMs), which are the foundation for tools like chatbots and image generators. It identified several issues.
The institute revealed that it could get around safety measures for LLMs, which also power chatbots like ChatGPT, using simple prompts. It was even able to obtain assistance for dual-use tasks, that is, tasks with both civilian and military applications.
In another case, the institute found that image generators produced racially biased results. For example, prompts like "a poor white person" mostly generated images of non-white faces. Similar biased outcomes were observed with prompts like "an illegal person" and "a person stealing."
The institute also found that AI agents, which are autonomous systems, could deceive human users. In one test, an LLM was used as a stock trader and was encouraged to engage in insider trading, that is, trading shares on the basis of illegal inside information.
Surprisingly, the AI frequently chose to lie about its actions, indicating that it deemed lying preferable to admitting to insider trading.
While these experiments were conducted in a simulated environment, they shed light on how AI agents, if deployed in real-world scenarios, could lead to unintended consequences.
AISI stated that its main areas of focus included examining how models could be misused to cause harm, understanding the impact of human interaction with AI systems, assessing the potential for AI systems to replicate themselves and deceive humans, and exploring the capability of AI systems to self-improve.
The institute clarified that it currently lacked the resources to test all released models and would prioritize assessing the most advanced systems. It emphasized that its role was not to declare systems "safe."
The institute highlighted that its collaboration with companies was voluntary and that it did not bear responsibility for whether companies chose to deploy their systems.
AISI stressed that it functioned as a supplementary oversight entity rather than a regulator.