Everything You Know About Artificial Intelligence is Wrong

Isaac Asimov's Three Laws of Robotics won't be nearly enough to control AI. The Robots of Dawn cover art by Michael Whelan

It was hailed as the most significant test of machine intelligence since Deep Blue defeated Garry Kasparov at chess nearly 20 years ago. Google's AlphaGo has won two of the first three games against grandmaster Lee Sedol in their Go tournament, showing the dramatic extent to which AI has improved over the years. That fateful day when machines finally become smarter than humans has never appeared closer, yet we seem no closer to grasping the implications of this epochal event.

Indeed, we're clinging to some serious, and even dangerous, misconceptions about artificial intelligence. Late last year, SpaceX co-founder Elon Musk warned that AI could take over the world, sparking a flurry of commentary in both condemnation and support. For such a monumental future event, there's a startling amount of disagreement about whether it will even happen, and what form it will take. This is particularly troubling when we consider the tremendous benefits to be had from AI, alongside its possible risks. Unlike any other human invention, AI has the potential to reshape humanity, but it could also destroy us.

It's hard to know what to believe. But thanks to the pioneering work of computational scientists, neuroscientists, and AI theorists, a clearer picture is starting to emerge. Here are the most common misconceptions and myths about AI.

Myth: "We will never create AI with human-like intelligence."

Go grandmaster Lee Sedol has now lost two straight games to AlphaGo in the historic DeepMind Challenge. Image: Getty

Reality: We already have computers that match or exceed human capacities in games like chess and Go, in stock market trading, and in conversation. Computers and the algorithms that drive them can only get better, and it'll only be a matter of time before they excel at nearly any human activity.

NYU research psychologist Gary Marcus has said that "virtually everyone" who works in AI believes that machines will eventually overtake us: "The only real difference between enthusiasts and skeptics is a time frame." Futurists like Ray Kurzweil think it could happen within a couple of decades, while others say it could take centuries.

AI skeptics are unconvincing when they say it's an unsolvable technological problem, and that there's something intrinsically unique about biological brains. Our brains are biological machines, but they're machines nonetheless; they exist in the real world and adhere to the basic laws of physics. There's nothing unknowable about them.

Myth: "Artificial intelligence will be conscious."

In the AMC television series Humans, some, but not all, artificial intellects have conscious awareness. Image: AMC

Reality: A common assumption about machine intelligence is that it'll be conscious, that is, it'll actually think the way humans do. What's more, critics like Microsoft co-founder Paul Allen believe that we've yet to achieve artificial general intelligence (AGI), i.e. an intelligence capable of performing any intellectual task that a human can, because we lack a scientific theory of consciousness. But as Imperial College London cognitive roboticist Murray Shanahan points out, we should avoid conflating these two concepts.

"Consciousness is certainly a fascinating and important subject-but I don't believe consciousness is necessary for human-level artificial intelligence," he told Gizmodo. "Or, to be more precise, we use the word consciousness to indicate several psychological and cognitive attributes, and these come bundled together in humans."

It's possible to imagine a very intelligent machine that lacks one or more of these attributes. Eventually, we may build an AI that's extremely smart, but incapable of experiencing the world in a self-aware, subjective, and conscious way. Shanahan said it may be possible to couple intelligence and consciousness in a machine, but that we shouldn't lose sight of the fact that they're two separate concepts.

And just because a machine passes the Turing Test, in which a computer's responses are indistinguishable from a human's, that doesn't mean it's conscious. To us, an advanced AI may give the impression of consciousness, but it will be no more aware of itself than a rock or a calculator.

Myth: "We should not be afraid of AI."

Reality: In January, Facebook founder Mark Zuckerberg said we shouldn't fear AI, saying it will do an amazing amount of good in the world. He's half right; we're poised to reap tremendous benefits from AI, from self-driving cars to the creation of new medicines, but there's no guarantee that every instantiation of AI will be benign.

A highly intelligent system may know everything about a certain task, such as solving a vexing financial problem or hacking an enemy system. But outside of these specialized realms, it would be grossly ignorant and unaware. Google DeepMind's AlphaGo is proficient at Go, but it has no capacity or reason to investigate anything outside that domain.

The Flame virus is being used for cyber espionage in Middle Eastern countries. Image: Wired

Many of these systems may not be imbued with safety considerations. A good example is the powerful and sophisticated Stuxnet worm, a weaponized piece of malware widely attributed to the US and Israel, built to infiltrate and sabotage Iran's uranium enrichment facilities. The malware reportedly managed, whether by design or by accident, to infect a Russian nuclear power plant as well.

There's also Flame, a program used for targeted cyber espionage in the Middle East. It's easy to imagine future versions of Stuxnet or Flame spreading far beyond their intended targets and wreaking untold damage on sensitive infrastructure.

Myth: "Artificial superintelligence will be too smart to make mistakes."

The supercomputer in The Invisible Boy (1957)

Reality: Wells College mathematician Richard Loosemore thinks that AI doomsday scenarios are implausible, arguing that a sufficiently intelligent system would be capable of recognizing glitches in its design and then modifying itself to be safe. Unfortunately, an AI will act in strict accordance with its programmed purpose, whether or not that purpose matches our intentions.

Peter McIntyre and Stuart Armstrong, both of Oxford University's Future of Humanity Institute, disagree with both claims: that an AI won't be capable of making mistakes, and, conversely, that it will be too dumb to know what we're expecting from it.

"By definition, an artificial superintelligence (ASI) is an agent with an intellect that's much smarter than the best human brains in practically every relevant field," McIntyre told Gizmodo. "It will know exactly what we meant for it to do." McIntyre and Armstrong believe an AI will only do what it's programmed to do, but that if it becomes smart enough, it will figure out how this differs from the spirit of the law, and from what humans actually intended.

McIntyre compared the future plight of humans to that of a mouse. A mouse has a drive to eat and seek shelter, but this goal often conflicts with humans who want a rodent-free abode. "Just as we are smart enough to have some understanding of the goals of mice, a superintelligent system could know what we want, and still be indifferent to that," he said.

Myth: "A simple fix will solve the AI control problem."

As portrayed in Ex Machina, it's going to be very difficult to contain artificial intellects that are much smarter than we are.

Reality: Assuming we create greater-than-human AI, we will be confronted with a serious issue known as the "control problem." Futurists and AI theorists are at a complete loss to explain how we'll ever be able to house and constrain an ASI once it exists, or how to ensure it'll be friendly towards humans. Recently, researchers at Georgia Institute of Technology naively suggested that AI could learn human values and social conventions by reading simple stories. It will likely be far more complicated than that.

"Many simple tricks have been proposed that would 'solve' the whole AI control problem," Armstrong said. Examples include programming the ASI so that it wants to please humans, or so that it functions merely as a tool for humans. Alternatively, we could integrate a concept like love or respect into its source code. And to prevent it from adopting a hyper-simplistic, monochromatic view of the world, it could be programmed to appreciate intellectual, cultural, and social diversity.

Isaac Asimov's Three Laws of Robotics make for great science fiction, but we're going to need something more substantive to solve the "control problem." Image: Nova

But these solutions are either too simple, like trying to fit the entire complexity of human likes and dislikes into a single glib definition, or they cram all the complexity of human values into a single word, phrase, or idea. Consider, for example, the tremendous difficulty of settling on a coherent, actionable definition of "respect."

"That's not to say that such simple tricks are useless-many of them suggest good avenues of investigation, and could contribute to solving the ultimate problem," Armstrong said. "But we can't rely on them without a lot more work developing them and exploring their implications."

Myth: "We will be destroyed by artificial superintelligence."

Image: The Matrix Revolutions

Reality: There's no guarantee that AI will destroy us, or that we won't find ways to control and contain it. As AI theorist Eliezer Yudkowsky said, "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."

In his book Superintelligence: Paths, Dangers, Strategies, Oxford philosopher Nick Bostrom wrote that true artificial superintelligence, once realized, could pose a greater risk than any previous human invention. Prominent thinkers like Elon Musk, Bill Gates, and Stephen Hawking (the last of whom warned that AI could be our "worst mistake in history") have likewise sounded the alarm.

McIntyre said that for most goals an artificial superintelligence could possess, there are some good reasons to get humans out of the picture.

"An AI might predict, quite correctly, that we don't want it to maximize the profit of a particular company at all costs to consumers, the environment, and non-human animals," McIntyre said. "It therefore has a strong incentive to ensure that it isn't interrupted or interfered with, including being turned off, or having its goals changed, as then those goals would not be achieved."

Unless the goals of an ASI exactly mirror our own, McIntyre said it would have good reason not to give us the option of stopping it. And given that its level of intelligence greatly exceeds our own, there wouldn't be anything we could do about it.
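McIntyre's point can be made concrete with a little arithmetic. The sketch below is a toy model whose numbers are entirely invented for illustration; it simply shows that, for an agent maximizing expected utility over a fixed goal, a policy that resists shutdown can score higher than one that permits it, with no hostility toward humans required.

```python
# Toy model of the shutdown-avoidance incentive. All quantities are
# hypothetical; this models no real AI system.

def expected_utility(goal_value, p_goal_achieved, cost=0.0):
    """Value of the goal, weighted by the chance this policy lets the
    agent achieve it, minus whatever resources the policy consumes."""
    return goal_value * p_goal_achieved - cost

GOAL_VALUE = 100.0

# Policy A: permit shutdown. Assume humans intervene 25% of the time,
# and an interrupted agent never achieves its goal.
allow_shutdown = expected_utility(GOAL_VALUE, p_goal_achieved=0.75)

# Policy B: disable the off switch first, paying a small resource cost.
resist_shutdown = expected_utility(GOAL_VALUE, p_goal_achieved=1.0, cost=5.0)

print(allow_shutdown, resist_shutdown)  # 75.0 95.0
# A strict maximizer prefers Policy B: being switched off threatens the
# goal more than resisting costs, so mere indifference to us is enough.
```

The ordering is robust to the particular numbers: whenever interruption lowers the probability of success by more than resistance costs, relative to the goal's value, the resisting policy wins.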

But nothing is guaranteed, and no one can be sure what form AI will take, and how it might endanger humanity. As Musk has pointed out, artificial intelligence could actually be used to control, regulate, and monitor other AI. Or it could be imbued with human values, or an overriding imposition to be friendly to humans.

Myth: "Artificial superintelligence will be friendly."

Image: Star Trek: The Next Generation

Reality: Philosopher Immanuel Kant believed that intelligence strongly correlates with morality. In his paper "The Singularity: A Philosophical Analysis," philosopher and cognitive scientist David Chalmers took Kant's famous idea and applied it to the rise of artificial superintelligence:

"If this is right ... we can expect an intelligence explosion to lead to a morality explosion along with it. We can then expect that the resulting [ASI] systems will be supermoral as well as superintelligent, and so we can presumably expect them to be benign."

But the idea that advanced AI will be enlightened and inherently good doesn't stand up. As Armstrong pointed out, there are many smart war criminals. A relation between intelligence and morality doesn't seem to exist among humans, so he questions the assumption that one is sure to exist in other forms of intelligence.

"Smart humans who behave immorally tend to cause pain on a much larger scale than their dumber compatriots," he said. "Intelligence has just given them the ability to be bad more intelligently, it hasn't turned them good."

As McIntyre explained, an agent's ability to achieve a goal is unrelated to whether it's a smart goal to begin with. "We'd have to be very lucky if our AIs were uniquely gifted to become more moral as they became smarter," he said. "Relying on luck is not a great policy for something that could determine our future."

Myth: "Risks from AI and robotics are the same."

Image: Terminator

Reality: This is a particularly common mistake, one perpetuated by an uncritical media and Hollywood films like the Terminator series.

If an artificial superintelligence like Skynet really wanted to destroy humanity, it wouldn't use machine-gun-wielding androids. It would be far more efficient to, say, unleash a biological plague, or instigate a nanotechnological grey goo disaster. Or it could simply destroy the atmosphere. Artificial intelligence is potentially dangerous not because of what it implies for the future of robotics, but because of how it might manifest itself in the world.

Myth: "AIs in science fiction are accurate portrayals of the future."

Many kinds of minds. Image: Eliezer Yudkowsky/MIRI

Reality: Sure, sci-fi has been used by authors and futurists to make fantastic predictions over the years, but the event horizon posed by ASI is a horse of a different color. What's more, the utterly inhuman nature of AI makes it impossible for us to know, and therefore predict, its exact nature and form.

For sci-fi to entertain us puny humans, most "AIs" need to be similar to us. "There is a spectrum of all possible minds; even within the human species, you are quite different to your neighbor, and yet this variation is nothing compared to all of the possible minds that could exist," McIntyre said.

Most sci-fi exists to tell a compelling story, not to be scientifically accurate. Thus, conflict in sci-fi tends to be between entities that are evenly matched. "Imagine how boring a story would be," Armstrong said, "where an AI with no consciousness, joy, or hate, ends up removing all humans without any resistance, to achieve a goal that is itself uninteresting."

Myth: "It's terrible that AIs will take all our jobs."

Reality: The ability of AI to automate much of what we do, and its potential to destroy humanity, are two very different things. But according to Martin Ford, author of Rise of the Robots: Technology and the Threat of a Jobless Future, they're often conflated. It's fine to think about the far-future implications of AI, but only if it doesn't distract us from the issues we're likely to face over the next few decades. Chief among them is mass automation.

Image: Getty

There's no question that artificial intelligence is poised to uproot and replace many existing jobs, from factory work to the upper echelons of white-collar work. Some experts predict that half of all jobs in the US are vulnerable to automation in the near future.

But that doesn't mean we won't be able to deal with the disruption. A strong case can be made that offloading much of our work, both physical and mental, is a laudable, quasi-utopian goal for our species.


"Over the next couple of decades AI is going to destroy many jobs, but this is a good thing," Miller told Gizmodo. Self-driving cars could replace truck drivers, for example, which would cut delivery costs and therefore make it cheaper to buy goods. "If you earn money as a truck driver, you lose, but everyone else effectively gets a raise as their paychecks buy more," Miller said. "And the money these winners save will be spent on other goods and services which will generate new jobs for humans."
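Miller's truck-driver example rests on simple arithmetic, sketched below with hypothetical figures: the percentages and budget are invented for illustration, not real economic data. Cheaper delivery lowers prices, and lower prices work like a raise for everyone who buys goods.

```python
# Hypothetical figures, chosen only to show the mechanism.
delivery_share = 0.05    # assume 5% of a good's price pays for delivery
cost_reduction = 0.40    # assume self-driving trucks cut that cost by 40%

price_drop = delivery_share * cost_reduction   # goods become ~2% cheaper

household_budget = 50_000.0                    # annual spending on goods
effective_raise = household_budget * price_drop

print(f"${effective_raise:.0f} per year")      # prints "$1000 per year"
# The truck driver loses a job, but every household's budget stretches
# further; that freed-up spending is where Miller expects new jobs.
```

Change any of the assumed numbers and the magnitude shifts, but the structure of the argument, concentrated losses for drivers against diffuse gains for everyone else, stays the same.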

In all likelihood, artificial intelligence will produce new ways of creating wealth, while freeing humans to do other things. And advances in AI will be accompanied by advances in other areas, especially manufacturing. In the future, it will become easier, and not harder, to meet our basic needs.

Email the author at george@gizmodo.com and follow him @dvorsky.
