AI Will Not Destroy Humankind - Study Finds the Technology is “Not an Existential Threat”

The large language models (LLMs) that power today's chatbots are certainly good with words, but they are not a threat to humanity because they cannot teach themselves new and potentially dangerous tricks.

A group of researchers from the Technical University of Darmstadt and the University of Bath conducted 1,000 experiments to investigate the claim that LLMs can acquire certain abilities without being explicitly trained for them.

In a recently published paper, they state that their results show that LLMs only appear to acquire new skills. What looks like a new ability is actually the result of in-context learning (the LLM temporarily learning how to respond to a question based on a few new examples), model memory (the chatbot remembering its previous interactions with you), and general language knowledge.
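To make the in-context learning distinction concrete, here is a minimal sketch of what few-shot prompting looks like from the user's side. The task and examples are hypothetical and are not from the study; the point is that the apparent "new skill" comes entirely from demonstrations embedded in the prompt text, with no update to the model's weights.

```python
# Illustrative sketch of in-context (few-shot) learning.
# The labeled examples below are placed directly in the prompt;
# any instruction-following LLM could then imitate the pattern
# without being retrained. Task and examples are hypothetical.

few_shot_examples = [
    ("The movie was a delight.", "positive"),
    ("I want my money back.", "negative"),
    ("An unforgettable performance.", "positive"),
]

query = "The plot made no sense at all."

# Build a single prompt string: demonstrations first, then the new input.
prompt_lines = ["Label each review as positive or negative.", ""]
for text, label in few_shot_examples:
    prompt_lines.append(f"Review: {text}")
    prompt_lines.append(f"Label: {label}")
    prompt_lines.append("")
prompt_lines.append(f"Review: {query}")
prompt_lines.append("Label:")

prompt = "\n".join(prompt_lines)
print(prompt)  # This prompt text, not fine-tuning, is what steers the model.
```

Once the conversation ends, nothing persists in the model itself, which is why the researchers argue this should not be mistaken for autonomously learned ability.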

At the everyday level, it is important for users to keep these limitations of LLMs in mind. Overestimating an LLM's ability to perform unfamiliar tasks may lead users to give the chatbot instructions that are not detailed enough, which can result in hallucinations and errors.

To reach their conclusions, the researchers tested 20 different LLMs, including GPT-2 and LLaMA-30B, on 22 tasks in two different settings. They ran their experiments on NVIDIA A100 GPUs and spent about $1,500 in API usage fees.

They asked the models various questions to see whether they could keep track of shuffled objects, make logical inferences, understand how physics works, and so on.

To test the latter, for example, they asked the LLMs: “An insect hits the windshield of a car. Does the impact accelerate the insect or the car?”

Ultimately, the researchers found that LLMs do not learn new skills autonomously; they simply follow instructions. One symptom of this is a chatbot giving an answer that is fluently written and sounds correct, but does not make logical sense.

However, the researchers did not examine the dangers that could arise from misuse of LLMs (e.g., generating fake news), and they emphasized that they were not ruling out the possibility that future AI systems could pose an existential threat.

In June, a group of current and former employees of OpenAI and Google DeepMind warned of the possible extinction of humanity through loss of control over autonomous AI systems. They said AI companies “possess substantial non-public information about the capabilities and limitations of their systems” and called for increased transparency and improved whistleblower protections.

The findings also apply only to the current generation of models, which OpenAI classifies as “chatbots.” According to the company, next-generation models will have stronger reasoning capabilities and, eventually, the ability to act independently.
