Tesla chief executive Elon Musk has said that artificial intelligence is more of a risk to the world than is North Korea, offering humanity a stark warning about the perilous rise of autonomous machines.
Now the tech billionaire has joined more than 100 robotics and artificial intelligence experts calling on the United Nations to ban one of the deadliest forms of such machines: autonomous weapons.
“Lethal autonomous weapons threaten to become the third revolution in warfare,” Musk and 115 other experts, such as Alphabet’s artificial intelligence expert Mustafa Suleyman, warned in an open letter released to the public Monday. “Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at time scales faster than humans can comprehend.”
According to the letter, “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”
The letter — which included signatories from dozens of organizations in nearly 30 countries, including China, Israel, Russia, Britain, South Korea and France — is addressed to the U.N. Convention on Certain Conventional Weapons, whose purpose is restricting weapons “considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately,” according to the U.N. Office for Disarmament Affairs. It was released at an artificial intelligence conference in Melbourne ahead of formal U.N. discussions on autonomous weapons. Signatories implored U.N. leaders to work hard to prevent an autonomous weapons “arms race” and “avoid the destabilizing effects” of the emerging technology.
In a report released this summer, Izumi Nakamitsu, the head of the disarmament affairs office, noted that technology is advancing rapidly but that regulation has not kept pace. She pointed out that some of the world’s military hot spots already have intelligent machines in place, such as “guard robots” with autonomous mode in the demilitarized zone between South and North Korea.
“There are currently no multilateral standards or regulations covering military AI applications,” Nakamitsu wrote. “Without wanting to sound alarmist, there is a very real danger that without prompt action, technological innovation will outpace civilian oversight in this space.”
According to Human Rights Watch, autonomous weapons systems are being developed in many of the nations represented in the letter — “particularly the United States, China, Israel, South Korea, Russia and the United Kingdom.” The concern, the organization says, is that people will become less involved in the process of selecting and firing on targets as machines lacking human judgment begin to play a critical role in warfare. Autonomous weapons “cross a moral threshold,” HRW says.
“The humanitarian and security risks would outweigh any possible military benefit,” HRW argues. “Critics dismissing these concerns depend on speculative arguments about the future of technology and the false presumption that technical advances can address the many dangers posed by these future weapons.”
In recent years, Musk’s warnings about the risks posed by AI have grown increasingly strident — drawing pushback in July from Facebook chief executive Mark Zuckerberg, who called Musk’s dark predictions “pretty irresponsible.” Responding to Zuckerberg, Musk said his fellow billionaire’s understanding of the threat posed by artificial intelligence “is limited.”
Last month, Musk told a group of governors that they need to start regulating artificial intelligence, which he called a “fundamental risk to the existence of human civilization.” When pressed for concrete guidance, Musk said the government must get a better understanding of AI before it’s too late.
“Once there is awareness, people will be extremely afraid, as they should be,” Musk said. “AI is a fundamental risk to the future of human civilization in a way that car accidents, airplane crashes, faulty drugs or bad food were not. They were harmful to a set of individuals in society, but they were not harmful to society as a whole.”
-Peter Holley
My take: If you saw 60 Minutes Sunday evening, you should have a greater appreciation of what Musk is warning about here. Three years ago, AI (artificial intelligence) was still not realistic or practical and was often bemoaned, but it has made astounding advances in just the last two years. You have probably noticed that speech recognition in phone commerce has leapt ahead remarkably. That is due to the jump in AI achieved over these two years, and it is only a small slice of the advances being made in the field.
The report on 60 Minutes was alarming: not only have AI computers learned to think for themselves in a language humans cannot understand, but the military has developed autonomous drones that can fly in swarms and, most significantly, military robots that can think for themselves and fire upon others accordingly.
I would back up what Musk said about Zuckerberg this way: his understanding is limited in a number of areas, particularly where foresight should overcome immediate profits. I want to give Mark Zuckerberg every bit of credit for a wildly successful business, but acclimating an entire populace to the invasion of privacy on the cusp of incredible advances in surveillance technology is what I would call "pretty irresponsible." Then again, I'm no big fan of Zuckerberg.