Sunday, October 27, 2013

Why We Should Think About the Threat of Artificial Intelligence

A dark new book by James Barrat, “Our Final Invention: Artificial Intelligence and the End of the Human Era,” lays out a strong case for why we should be at least a little worried.

The British cyberneticist Kevin Warwick once asked, “How can you reason, how can you bargain, how can you understand how that machine is thinking when it’s thinking in dimensions you can’t conceive of?”

If there is a hole in Barrat’s dark argument, it is in his glib presumption that if a robot is smart enough to play chess, it might also “want to build a spaceship”—and that tendencies toward self-preservation and resource acquisition are inherent in any sufficiently complex, goal-driven system. For now, most of the machines that are good enough to play chess, like I.B.M.’s Deep Blue, haven’t shown the slightest interest in acquiring resources.

But before we grow complacent and decide there is nothing to worry about after all, it is important to realize that the goals of machines could change as they get smarter. Once computers can effectively reprogram and successively improve themselves, leading to a so-called “technological singularity” or “intelligence explosion,” the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.

One of the most pointed quotes in Barrat’s book belongs to the legendary serial A.I. entrepreneur Danny Hillis, who likens the upcoming shift to one of the greatest transitions in the history of biological evolution: “We’re at that point analogous to when single-celled organisms were turning into multi-celled organisms. We are amoeba and we can’t figure out what the hell this thing is that we’re creating.”


- More Here
