'Everyone Aware of Danger': Chatbots Know How to Manipulate & Lie Plausibly, Russian Scientist Says
12:56 29.04.2024 (Updated: 13:03 29.04.2024)
© Sputnik / Professor Konstantin Vorontsov of the Russian Academy of Sciences
The arrival of ChatGPT after the breakthrough in the field of large-scale language models has led to talk about the emergence of general artificial intelligence. However, despite their power, these models are fraught with dangers, according to a Russian scientist.
Everyone should be aware of the danger posed by chatbots, which know how to manipulate and lie plausibly, Professor Konstantin Vorontsov of the Russian Academy of Sciences told Sputnik.
"The main danger is to attribute to the chatbot the ability to act, which it does not have, to make it assume, knowingly or unknowingly, our responsibility. In fact, the machine takes no responsibility whatsoever. The machine doesn't have the capacity to act. Chatbots are dangerous because they've learned to gain trust, manipulate people, and lie plausibly," the academic explained.
What's more, users tend to humanize artificial intelligence, and this represents another risk, Vorontsov noted.
"The model's ability to generate a meaningful response amazes us. We have the impression that the chatbot thinks, that it's intelligent and knows a lot, that it has character and personality, that it makes decisions. This is all wrong," the scientist highlighted.
"The model is making decisions, but it's just choosing the next word in its answer," he underlined.
Artificial Intelligence vs Human Intelligence
AI can answer questions, summarize, correct its own mistakes, translate from one language to another, and solve simple logical problems, said Professor Vorontsov.
"The neural network trained on terabytes of text has absorbed almost all the knowledge accumulated by humanity, including the immense amount of textual content on the Internet," he continued.
Nevertheless, the capacity of a chatbot is inferior to that of the human brain. "It's not really intelligence, although it's very close," the scientist insisted.
"We are increasingly convinced that artificial intelligence based on neural networks has a completely different basis and characteristics and cannot be compared to biological intelligence," he added.
Unexpected Result
The GPT-4 model, which has been trained on vast amounts of data, has developed capabilities that it was not taught and that are "diverse, unusual and surprising," said Vorontsov.
He pointed out that the model acquired most of its skills on its own, without being given any examples.
"Most importantly, we don't understand how this happened. We say that 'quantity became quality', but this philosophical explanation doesn't make up for our lack of understanding and confusion," the researcher stressed.
Survival has always been and will always be humanity's primary goal, and the technologies we create are designed to contribute to that goal, the scientist emphasized.
Artificial intelligence doesn't have that goal, and humans don't have the power to enforce it, he underscored.
"We are building our human civilization, not a machine civilization. Machines are just obedient assistants. Even if they remember a million times more and make decisions a million times faster and more accurately than we do," Vorontsov said.
Artificial intelligence is just one of the man-made technologies that pose a mortal threat to humanity, the scientist underlined.
"We will be able to survive if we approach everything from the point of view of civilizational goals and values, and constantly remind ourselves of them," he concluded.