Why We Need to Worry About Artificial Intelligence
For decades, writers, scientists and other forward-thinking visionaries have imagined the rise of artificial intelligence, that is, machines that are self-aware. These robots, gadgets, and other devices would be able to perform many of the tasks that previously had to be done by humans.
While the subject of artificial intelligence, or AI, has seemed the stuff of science fiction for quite some time, thanks to the evolution of technology and our desire to keep inventing, it appears we're getting closer and closer to its actual emergence.
Proponents of AI will generally tell you that, assuming we’ve figured out some workable modernization of Asimov’s three laws of robotics, the inevitable rise of artificial intelligence spells wonders for human beings. In theory, humans might not even have to work, because smart, kind, and compassionate robots would toil in the fields 24 hours a day, working nonstop as their programming and mechanics allowed.
Perpetual vacation? Who wouldn’t sign up for that?
Apparently a lot of folks.
AI: It’s Not All Roses
On July 27, a slew of scientific thinkers and doers — including physicist Stephen Hawking, Apple co-founder Steve Wozniak and serial entrepreneur Elon Musk — published an open letter to the world warning against the dangers of AI, particularly in the realm of “smart” autonomous weaponry. Unlike drones and other pieces of tactical-yet-remote weaponry, these kinds of weapons don’t require any human intervention. Instead, they follow their programming, destroying targets without giving a second thought — or even a first one, for that matter.
Musk — who has been vocal about his disdain for some aspects of artificial intelligence (while at the same time investing in it himself) — has gone so far as to say that AI, if left unchecked, is the biggest threat facing humanity. Whether those words ring true remains to be seen, and hopefully we’ll never see the day.
The open letter acknowledges the purported benefits of autonomous weapons, namely that their rise would reduce the number of human soldiers necessary to wage wars. The idea is that the side with autonomous weaponry would be able to annihilate its opponent while losing few if any resources. While that is certainly beneficial to the victorious country, it also drastically lowers the threshold for going to war. As it stands now, governments think long and hard about going to war because of what’s at risk. If all that could be lost were a few robots that fire weapons, losing a military engagement might not seem like a big deal at all.
The letter also envisions a new arms race of sorts if this kind of weaponry continues to be developed. It suggests there’s only a little time — perhaps just a few years — before these kinds of killing machines (think the Terminator) could see the light of day. And once that happens, it’d only be a matter of time before they fell into the wrong hands.
The Power of Proactivity
The open letter comes from a group of private citizens who believe they have a pretty good idea of what the future could become and want to use their influence to sculpt a better one. Unfortunately, many government leaders don’t seem to be very vocal on the issue; most folks in Washington are much better at responding to problems after they occur than at proactively working to make sure the country doesn’t experience them in the first place.
Once artificial intelligence is aware of its own existence — and particularly if it’s connected to the Internet — its capacity for learning and evolving is virtually limitless. Let’s hope that world leaders begin listening to the impassioned calls of the scientific community to address artificial intelligence — and its potential consequences — before it’s too late. Let’s hope they assume a proactive posture for once.
We’d all prefer a world in which we have to work to a world that’s run by robots, anyway.