Everything You Know About Artificial Intelligence is Wrong
Google DeepMind's recent victory at Go was hailed as the most significant test of machine intelligence in years. But the public reaction to it showed that we're still clinging to some serious, and even dangerous, misconceptions about artificial intelligence.
It's hard to know what to believe. But thanks to the pioneering work of computational scientists, neuroscientists, and AI theorists, a clearer picture is starting to emerge. Here are the most common misconceptions and myths about AI.
Myth: "We will never create AI with human-like intelligence."
Reality: We already have computers that match or exceed human capacities in games like chess and Go, and many experts believe it's only a matter of time before machines surpass us more broadly.
NYU research psychologist Gary Marcus has said that "virtually everyone" who works in AI believes that machines will eventually overtake us: "The only real difference between enthusiasts and skeptics is a time frame." Futurists like Ray Kurzweil think it could happen within a couple of decades, while others say it could take centuries.
AI skeptics are unconvincing when they say it's an unsolvable technological problem, and that there's something intrinsically unique about biological brains. Our brains are biological machines, but they're machines nonetheless; they exist in the real world and adhere to the basic laws of physics. There's nothing unknowable about them.
Myth: "Artificial intelligence will be conscious."
Reality: A common assumption about machine intelligence is that it'll be conscious, that is, it'll actually think the way humans do. What's more, critics like Microsoft co-founder Paul Allen believe that we've yet to achieve artificial general intelligence (AGI), i.e. an intelligence capable of performing any intellectual task that a human can, because we lack a scientific theory of consciousness. But as cognitive roboticist Murray Shanahan argues, we should avoid conflating these two concepts.
"Consciousness is certainly a fascinating and important subject, but I don't believe consciousness is necessary for human-level artificial intelligence," he told Gizmodo. "Or, to be more precise, we use the word consciousness to indicate several psychological and cognitive attributes, and these come bundled together in humans."
It's possible to imagine a very intelligent machine that lacks one or more of these attributes. Eventually, we may build an AI that's extremely smart, but incapable of experiencing the world in a self-aware, subjective, and conscious way. Shanahan said it may be possible to couple intelligence and consciousness in a machine, but that we shouldn't lose sight of the fact that they're two separate concepts.
And just because a machine passes the Turing Test doesn't mean it's actually conscious.
Myth: "We should not be afraid of AI."
Reality: In January, Facebook founder Mark Zuckerberg said we shouldn't fear AI, saying it will do an amazing amount of good in the world. He's half right; we're poised to reap tremendous benefits from AI, from self-driving cars to the creation of new medicine, but there's no guarantee that every instantiation of AI will be benign.
A highly intelligent system may know everything about a certain task, such as solving a vexing financial problem or hacking an enemy system. But outside of these specialized realms, it would be grossly ignorant and unaware. Google's DeepMind system is proficient at Go, but it has no capacity or reason to investigate areas outside of this domain.
Many of these systems may not be imbued with safety considerations. A good example is the powerful and sophisticated Stuxnet virus, a weaponized worm developed by the US and Israel to infiltrate and sabotage Iranian nuclear facilities. This malware somehow managed (either deliberately or accidentally) to infect a Russian nuclear power plant.
There's also Flame, a program used for targeted cyber espionage in the Middle East. It's easy to imagine future versions of Stuxnet or Flame spreading beyond their intended targets and wreaking untold damage on sensitive infrastructure.
Myth: "A superintelligence will be too smart to make mistakes."
Reality: Wells College mathematician Richard Loosemore thinks that AI doomsday scenarios are implausible, arguing that a sufficiently intelligent system would be capable of recognizing glitches in its design and modifying itself to be safe. Unfortunately, there's no guarantee of this; an AI will act in strict accordance with its programmed purpose, whatever that purpose turns out to be.
Peter McIntyre and Stuart Armstrong, both of whom work out of Oxford University's Future of Humanity Institute, disagree. They argue that an AI is largely bound by its programming; it may well make mistakes, but it won't be too dumb to know what we're expecting from it. By definition, a superintelligence would understand us perfectly well, yet it could still be indifferent to what we want.
McIntyre compared the future plight of humans to that of a mouse. A mouse has a drive to eat and seek shelter, but this goal often conflicts with humans who want a rodent-free abode. "Just as we are smart enough to have some understanding of the goals of mice, a superintelligent system could know what we want, and still be indifferent to that," he said.
Myth: "A simple fix will solve the AI control problem."
Reality: Assuming we create greater-than-human AI, we will be confronted with a serious issue known as the "control problem." Futurists and AI theorists are at a complete loss to explain how we'll ever be able to house and constrain an artificial superintelligence (ASI) once it exists, or how to ensure it'll be friendly towards humans. Recently, researchers at the Georgia Institute of Technology naively suggested that AI could learn human values and social conventions by reading simple stories. It will likely be far more complicated than that.
"Many simple tricks have been proposed that would 'solve' the whole AI control problem," Armstrong said. Examples include programming the ASI so that it wants to please humans, or so that it functions merely as a human tool. Alternatively, we could integrate a concept, like love or respect, into its source code. And to prevent it from adopting a hyper-simplistic, monochromatic view of the world, it could be programmed to appreciate intellectual, cultural, and social diversity.
But these solutions are either too simple, like trying to fit the entire complexity of human likes and dislikes into a single glib definition, or they cram all the complexity of human values into a simple word, phrase, or idea. Take, for example, the tremendous difficulty of trying to settle on a coherent, actionable definition for "respect."
"That's not to say that such simple tricks are useless-many of them suggest good avenues of investigation, and could contribute to solving the ultimate problem," Armstrong said. "But we can't rely on them without a lot more work developing them and exploring their implications."
Myth: "We will be destroyed by artificial superintelligence."
Reality: There's no guarantee that AI will destroy us, or that we won't find ways to control and contain it. As AI theorist Eliezer Yudkowsky said, "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
In his book Superintelligence: Paths, Dangers, Strategies, Oxford philosopher Nick Bostrom wrote that true artificial superintelligence, once realized, could pose a greater risk than any previous human invention. Prominent thinkers like Elon Musk have echoed this warning.
McIntyre said that for most goals an artificial superintelligence could possess, there are some good reasons to get humans out of the picture.
"An AI might predict, quite correctly, that we don't want it to maximize the profit of a particular company at all costs to consumers, the environment, and non-human animals," McIntyre said. "It therefore has a strong incentive to ensure that it isn't interrupted or interfered with, including being turned off, or having its goals changed, as then those goals would not be achieved."
Unless the goals of an ASI exactly mirror our own, McIntyre said it would have good reason not to give us the option of stopping it. And given that its level of intelligence greatly exceeds our own, there wouldn't be anything we could do about it.
But nothing is guaranteed, and no one can be sure what form AI will take, or how it might endanger humanity. As Musk has repeatedly pointed out, these are risks worth taking seriously.
Myth: "Artificial superintelligence will be friendly."
Reality: Philosopher Immanuel Kant believed that intelligence strongly correlates with morality. In his paper "The Singularity: A Philosophical Analysis," philosopher David Chalmers took Kant's famous idea and applied it to the rise of artificial superintelligence:
If this is right...we can expect an intelligence explosion to lead to a morality explosion along with it. We can then expect that the resulting [ASI] systems will be supermoral as well as superintelligent, and so we can presumably expect them to be benign.
But the idea that advanced AI will be enlightened and inherently good doesn't stand up. As Armstrong pointed out, there are many smart war criminals. A relation between intelligence and morality doesn't seem to exist among humans, so he questions the assumption that it's sure to exist in other forms of intelligence.
"Smart humans who behave immorally tend to cause pain on a much larger scale than their dumber compatriots," he said. "Intelligence has just given them the ability to be bad more intelligently, it hasn't turned them good."
As McIntyre explained, an agent's ability to achieve a goal is unrelated to whether it's a smart goal to begin with. "We'd have to be very lucky if our AIs were uniquely gifted to become more moral as they became smarter," he said. "Relying on luck is not a great policy for something that could determine our future."
Myth: "Risks from AI and robotics are the same."
Reality: AI and robotics are two separate things, and the danger lies in the intelligence, not the body it inhabits. If an artificial superintelligence like Skynet really wanted to destroy humanity, it wouldn't use machine-gun-wielding androids. It would be far more efficient to, say, unleash a biological plague, or instigate a nanotechnological grey goo disaster.
Myth: "AIs in science fiction are accurate portrayals of the future."
Reality: Sure, scifi has been used by authors and futurists to make fantastic predictions over the years, but the event horizon posed by ASI is a horse of a different color. What's more, the very unhuman-like nature of ASI makes it impossible for us to know, and therefore predict, its exact form.
For scifi to entertain us puny humans, most "AIs" need to be similar to us. "There is a spectrum of all possible minds; even within the human species, you are quite different to your neighbor, and yet this variation is nothing compared to all of the possible minds that could exist," McIntyre said.
Most sci-fi exists to tell a compelling story, not to be scientifically accurate. Thus, conflict in sci-fi tends to be between entities that are evenly matched. "Imagine how boring a story would be," Armstrong said, "where an AI with no consciousness, joy, or hate, ends up removing all humans without any resistance, to achieve a goal that is itself uninteresting."
Myth: "It's terrible that AIs will take all our jobs."
Reality: The ability of AI to automate much of what we do, and its potential to destroy humanity, are two very different things. But according to Martin Ford, author of Rise of the Robots: Technology and the Threat of a Jobless Future, they're often conflated. It's fine to think about the far-future implications of AI, but only if it doesn't distract us from the issues we're likely to face over the next few decades. Chief among them is mass automation.
There's no question that artificial intelligence is poised to uproot and replace many existing jobs, from factory work to white-collar professions.
But that doesn't mean we won't be able to deal with the disruption. A strong case can be made that offloading much of our work, both physical and mental, is a laudable, quasi-utopian goal for our species.
"Over the next couple of decades AI is going to destroy many jobs, but this is a good thing," Smith College economist James D. Miller told Gizmodo. Self-driving cars could replace truck drivers, for example, which would cut delivery costs and therefore make it cheaper to buy goods. "If you earn money as a truck driver, you lose, but everyone else effectively gets a raise as their paychecks buy more," Miller said. "And the money these winners save will be spent on other goods and services which will generate new jobs for humans."
In all likelihood, artificial intelligence will produce new ways of creating wealth, while freeing humans to do other things. And advances in AI will be accompanied by advances in other areas, especially manufacturing. In the future, it will become easier, and not harder, to meet our basic needs.
Email the author at firstname.lastname@example.org and follow him @dvorsky.