Dangers of Artificial Intelligence
The 4 dangers of AI
Here are some of the biggest concerns voiced about the future of AI – and humanity.
Leading experts in artificial intelligence (AI) have sounded the alarm about the pace and scale of recent advances in the field, warning that they represent nothing less than a threat to humanity.
This is particularly true of Geoffrey Hinton, an award-winning computer scientist known as the "godfather of AI", who quit his job at Google last month so he could speak freely about the uncontrolled development of new AI tools.
“I suddenly changed my mind about whether these things will be smarter than us,” Hinton, 75, said in an interview with MIT Technology Review this week.
"I think they are very close to it today and they will be much smarter than us in the future... How are we going to survive this?
Mr. Hinton is not the only one worried. In February, even Sam Altman, CEO of OpenAI, the developer of ChatGPT, said the world might not be "that far away from potentially scary AI tools" and that regulation would be key, but that it would take time to put in place.
Shortly after the Microsoft-backed startup released its latest AI model, GPT-4, in March, more than 1,000 researchers and technologists signed a letter calling for a six-month pause in AI development because, they argued, it poses "profound risks to society and humanity".
Here is an overview of the main concerns expressed by Mr. Hinton and other experts.
1. AI might already be smarter than us
Our human brains can solve equations, drive cars and keep up with Netflix series thanks to their innate talent for organizing and storing information and finding solutions to complex problems.
The roughly 86 billion neurons that populate our skulls and, more importantly, the 100 trillion connections those neurons form among themselves make all this possible.
In contrast, the technology behind ChatGPT has between 500 billion and a trillion connections, Hinton said. Although that would seem to put it at a significant disadvantage compared to us, Mr. Hinton notes that GPT-4, OpenAI's latest AI model, knows "hundreds of times more" than any single human being. Perhaps, he suggests, it has a "much better learning algorithm" than ours, making it more efficient at cognitive tasks.
Researchers have long noted that artificial neural networks take much longer than humans to assimilate and apply new knowledge because training them requires enormous amounts of energy and data.
That's no longer the case, says Hinton, who notes that systems like GPT-4 can learn new things very quickly once they've been properly trained by researchers. This is not unlike how a trained professional physicist can assimilate new experimental results much more quickly than a high school science student.
This leads Hinton to conclude that AI systems might already be smarter than us: not only can they learn things faster, but they can also share copies of their knowledge with others almost instantly.
"It's a completely different form of intelligence," he told MIT Technology Review. “A new and better form of intelligence.
2. AI can "supercharge" the spread of misinformation
What would AI systems that are smarter than humans do? One of the most worrying possibilities is that malicious individuals, groups or nation-states will simply co-opt them to serve their own interests.
According to a new report from NewsGuard, which assesses the credibility of websites and tracks misinformation online, dozens of fake news sites have already spread across the web in multiple languages, some publishing hundreds of AI-generated articles every day.
Mr. Hinton is particularly concerned that AI tools could be trained to influence elections and even wage wars.
Election misinformation spread by AI chatbots, for example, could be the future version of election misinformation spread by Facebook and other social media platforms.
And this may just be the beginning.
“Don’t think for a moment that Putin wouldn’t build hyperintelligent robots for the purpose of killing Ukrainians,” Mr. Hinton said in the article. "He wouldn't hesitate."
3. Will AI make us useless?
OpenAI estimates that 80% of workers in the United States could have their jobs affected by AI, and a Goldman Sachs report says the technology could put 300 million full-time jobs globally at risk.
According to Hinton, humanity's survival is threatened when "intelligent things can be smarter than us."
"We may be sticking around for a while to run the power plants," Hinton said at MIT Technology Review's EmTech Digital conference, which took place Wednesday from his home via video. “But after this, that might not be the case.
"These things will have learned from us, reading all the novels that ever existed and everything Machiavelli wrote, how to manipulate people," Mr Hinton said. “Even if they can’t pull levers directly, they can certainly make us pull levers.”
4. We don't really know how to stop it
"I wish I had a nice, simple solution to offer, but I don't have one," Hinton added. “I’m not sure there’s a solution.”
However, governments are paying close attention to the development of AI. The White House summoned the CEOs of Google, Microsoft and OpenAI, the maker of ChatGPT, to meet with Vice President Kamala Harris on Thursday in what officials described as a frank discussion about how to mitigate the short- and long-term risks of their technology.
European lawmakers are also accelerating negotiations to pass sweeping new AI rules, and the UK competition regulator plans to examine the impact of AI on consumers, businesses and the economy, and to determine whether new controls are needed for technologies such as ChatGPT.
What is unclear is how one could prevent a power like Russia from using AI technology to dominate its neighbors or its own citizens.
Mr. Hinton suggests that a global agreement similar to the 1997 Chemical Weapons Convention could be a first step towards establishing international rules against the use of AI for military purposes.
It is worth noting, however, that the Chemical Weapons Convention did not prevent what investigators estimated were likely Syrian attacks with chlorine gas and the nerve agent sarin against civilians in 2017 and 2018, during the bloody civil war that is tearing the country apart.