New study finds AI does not pose existential threat to humanity

Scholars from the University of Bath say AI models lack the ability to learn independently and pose no existential threat to humanity

News Desk August 14, 2024
Gemini is Google's AI chatbot; the company's greenhouse gas emissions have surged on the back of its AI projects. PHOTO: REUTERS

ChatGPT and other large language models (LLMs) cannot learn independently or acquire new skills, and therefore pose no existential threat to humanity, according to new research from the University of Bath and the Technical University of Darmstadt in Germany.

The study, published as part of the proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024), revealed that while LLMs can follow instructions and demonstrate language proficiency, they cannot master new skills without explicit instruction. As a result, these models are considered inherently controllable, predictable, and safe.

The press release for the study was published on the EurekAlert website.

The research team concluded that despite being trained on increasingly large datasets, LLMs can continue to be deployed without significant safety concerns. However, the technology still carries the risk of misuse.

“The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus,” said Dr. Harish Tayyar Madabushi, a computer scientist at the University of Bath and co-author of the study.

Led by Professor Iryna Gurevych at the Technical University of Darmstadt, the research team conducted experiments to test LLMs' ability to complete tasks they had not previously encountered, known as "emergent abilities."

While LLMs can answer questions about social situations without having been explicitly programmed to do so, the researchers found that this is due to the models' use of "in-context learning" (ICL), in which they complete tasks based on examples provided in the prompt.
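To make the ICL mechanism concrete, here is a minimal sketch in Python of a few-shot prompt of the kind the researchers describe: the worked examples live inside the prompt itself, and the model simply continues the pattern at inference time, with no retraining or weight updates. The task and prompt text are illustrative assumptions, and the final string would in practice be sent to whatever LLM completion endpoint one uses.

```python
# Minimal sketch of in-context learning (ICL): the apparent "skill"
# comes entirely from examples placed inside the prompt at inference
# time, not from any new training or weight updates.

# Worked examples supplied in the prompt -- the model has not been
# fine-tuned on this task; it only sees these demonstrations.
few_shot_prompt = (
    "Classify the sentiment of each review as Positive or Negative.\n\n"
    "Review: The film was a delight from start to finish.\n"
    "Sentiment: Positive\n\n"
    "Review: I walked out halfway through.\n"
    "Sentiment: Negative\n\n"
    "Review: A moving story with superb acting.\n"
    "Sentiment:"
)

# Send `few_shot_prompt` to any LLM completion endpoint; the model
# continues the pattern ("Positive") by imitating the examples.
# Remove the examples and task performance typically collapses,
# which is the study's point: the behaviour is prompted, not an
# independently acquired ability.
print(few_shot_prompt)
```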

Dr. Tayyar Madabushi noted, “The fear has been that as models get bigger, they might solve new problems unpredictably, posing a threat with hazardous abilities like reasoning and planning. Our study shows that this fear is not valid.”

The study's findings challenge concerns over LLMs' potential existential threat, which have been voiced by top AI researchers globally. However, the research team emphasizes the importance of addressing existing risks, such as the creation of fake news and the increased potential for fraud.

Professor Gurevych added, “Our results do not mean that AI is not a threat at all. Rather, we show that the purported emergence of complex thinking skills associated with specific threats is not supported by evidence. Future research should focus on other risks posed by these models.”
