AI models pose no existential threat to humanity: researchers

They conducted thousands of experiments

(Web Desk) - AI, or artificial intelligence, has been the buzzword in technology for the past few years. Some argue it can be misused, while others see it as a boon for fields ranging from medicine to science to the creative industries.

Does AI really pose an existential threat to humanity? No, according to a large study that ran thousands of experiments on Large Language Models, or LLMs, the systems at the heart of AI apps like ChatGPT.

The research team from the University of Bath in the UK and the Technical University of Darmstadt in Germany demonstrated through these experiments that a combination of the models' ability to follow instructions, their memory and their linguistic proficiency can account for both the capabilities and the limitations exhibited by LLMs.

The experiments were designed to test the ability of LLMs to complete tasks the models had never come across before, known as emergent abilities.

Past research has shown that LLMs can answer questions about social situations without ever having been explicitly trained or programmed to do so.

This was thought to be a result of the models ‘knowing’ about social situations. But the researchers showed that "it was in fact the result of models using a well-known ability to complete tasks based on a few examples presented to them, known as ‘in-context learning’ or ICL," the University of Bath said in a release.
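In practice, in-context learning usually amounts to placing a few worked examples inside the prompt and letting the model continue the pattern, with no retraining involved. The sketch below is purely illustrative of that idea; the function `llm_complete` is a hypothetical stand-in for whatever completion API one might use, not something from the study itself.

```python
# A minimal sketch of 'in-context learning' (ICL): the model is shown a few
# worked examples inside the prompt and infers the task from the pattern,
# with no weight updates or explicit training.

few_shot_prompt = """\
Review: "The film was a waste of two hours." Sentiment: negative
Review: "A stunning, heartfelt debut." Sentiment: positive
Review: "I laughed, I cried, I'd watch it again." Sentiment:"""


def llm_complete(prompt: str) -> str:
    """Hypothetical placeholder: swap in a real LLM completion call here."""
    raise NotImplementedError


# Given the two labelled examples, the model is expected to continue the
# pattern and output "positive", having inferred the sentiment-labelling
# task purely from the prompt.
print(few_shot_prompt)
# print(llm_complete(few_shot_prompt))
```

The point the researchers make is that what looks like a model ‘knowing’ something new is often just this pattern-completion ability applied to examples in the prompt.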

The study found that while LLMs have "a superficial ability" to follow instructions and show excellent language proficiency, they have no potential "to master new skills without explicit instruction."

"This means they remain inherently controllable, predictable and safe," the release said, leading to the conclusion that LLMs are not an existential threat to humanity.

"The research team concluded that LLMs – which are being trained on ever larger datasets – can continue to be deployed without safety concerns, though the technology can still be misused," it said.

While these AI models may in future ‘speak’ and write more sophisticated language and follow more detailed prompts, they "are highly unlikely to gain complex reasoning skills," the researchers concluded.

“The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus,” said Dr Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the study.

"Our study shows that the fear that a model will go away and do something completely unexpected, innovative and potentially dangerous is not valid," the release quoted him as saying.

“While it's important to address the existing potential for the misuse of AI, such as the creation of fake news and the heightened risk of fraud, it would be premature to enact regulations based on perceived existential threats,” he said.