Tuesday, December 3, 2013

A.I. Gone Awry - The Futile Quest for Artificial Intelligence

A Brief History of A.I.:
Duplicating or mimicking human-level intelligence is an old notion — perhaps as old as humanity itself. In the 19th century, as Charles Babbage conceived of ways to mechanize calculation, people started thinking it was possible — or arguing that it wasn’t. Toward the middle of the 20th century, as mathematical geniuses Claude Shannon, Norbert Wiener, John von Neumann, Alan Turing, and others laid the foundations of the theory of computing, the necessary tool seemed available.

In 1955, a research project on artificial intelligence was proposed; a conference the following summer is considered the official inauguration of the field. The proposal is fascinating for its assertions, assumptions, hubris, and naïveté, all of which have characterized the field of A.I. ever since. The authors proposed that ten people could make significant progress in the field in two months. That ten-person, two-month project is still going strong more than 50 years later, and it has involved the efforts of tens of thousands of people rather than ten.

Connectionism:
Moore’s “Law” is often invoked at this stage in the A.I. argument. But Moore’s Law is more an observation than a law, and it is often misconstrued to mean that computers, and everything associated with them, double in capacity and speed roughly every 18 months. In any case, Moore’s Law won’t solve the complexity problem at all. There is another “law,” this one attributed to Niklaus Wirth: software gets slower faster than hardware gets faster. Even though, by Moore’s Law, your personal computer should be about a hundred thousand times more powerful than it was 25 years ago, your word processor isn’t. Moore’s Law does not apply to software.
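
For concreteness, the “hundred thousand times” figure is simply the 18-month doubling compounded over 25 years. A minimal sketch of that arithmetic in Python, using only the numbers quoted above:

    # Back-of-the-envelope arithmetic behind the "hundred thousand times" figure,
    # assuming the popular 18-month-doubling reading of Moore's Law.
    years = 25
    doubling_period = 1.5                 # years per doubling
    doublings = years / doubling_period   # about 16.7 doublings
    speedup = 2 ** doublings              # about 1e5, i.e. roughly 100,000x
    print(f"{doublings:.1f} doublings -> roughly {speedup:,.0f}x")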

Computationalism:
Computationalist A.I. is plagued by a host of other problems. First of all, its systems don't have any common sense. Then there is the "symbol-grounding problem." The analogy is trying to learn a language from a dictionary (without pictures): every word (symbol) is simply defined using other words (symbols), so how does anything ever relate to the world? Then there is the "frame problem," which is essentially the problem of deciding which context applies to a given situation. Some researchers consider it the fundamental problem in both computationalist and connectionist A.I.
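
To make the dictionary analogy concrete, here is a toy sketch in Python. The words and their definitions are invented for illustration; the point is only that every lookup yields more symbols, never anything in the world:

    # Toy "dictionary without pictures": every word is defined only in terms
    # of other words, so chasing definitions never bottoms out in anything
    # non-symbolic.
    definitions = {
        "water": ["clear", "liquid"],
        "clear": ["transparent"],
        "transparent": ["clear"],        # circular: clear <-> transparent
        "liquid": ["flowing", "substance"],
        "flowing": ["moving", "liquid"], # circular again
        "moving": ["not", "still"],
        "substance": ["matter"],
        "matter": ["substance"],
        "not": [], "still": [],
    }

    def chase(word, depth=0, seen=None):
        """Expand a word's definition; all we ever get back is more words."""
        seen = set() if seen is None else seen
        if word in seen or depth > 3:
            return
        seen.add(word)
        print("  " * depth + word)
        for w in definitions.get(word, []):
            chase(w, depth + 1, seen)

    chase("water")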

The most serious computationalist attempt to duplicate human-level intelligence, perhaps the only serious attempt, is known as CYC, short for enCYClopedia (but certainly also meant to echo “psych”). Douglas Lenat, head of the original project and of Cycorp, has been making public claims about its imminent success for more than twenty years. The stated goal of CYC is to capture enough human knowledge, including common sense, to pass, at the very least, an unrestricted Turing Test. If any computationalist approach could succeed, it would be this mother of all expert systems.

Robotics:
According to the roboticists and their fans, Moore’s Law will come to the rescue. The implication is that we have the programs and the data all ready to go, and all that’s holding us back is a lack of computing power. After all, as soon as computers got powerful enough, they were able to beat the world’s best human chess player, weren’t they? (Well, no: a great deal of additional programming and chess knowledge was also needed.) Sad to say, even if we had unlimited computer power and storage, we wouldn’t know what to do with it. The programs aren’t ready to go, because there aren’t any programs.
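
To see why that parenthetical matters, consider a plain minimax search, sketched below as an illustration rather than any actual chess engine. Deeper search is what more computing power buys; the evaluation function, where the positional chess knowledge lives, still has to be supplied by hand (the callback names here are hypothetical):

    # Generic minimax search. More computing power lets "depth" grow, but the
    # evaluate() callback is where game knowledge has to be hand-built; raw
    # search power alone does not supply it.
    def minimax(state, depth, maximizing, moves, apply_move, evaluate):
        legal = moves(state)
        if depth == 0 or not legal:
            return evaluate(state)   # all the "knowledge" is concentrated here
        results = (minimax(apply_move(state, m), depth - 1, not maximizing,
                           moves, apply_move, evaluate)
                   for m in legal)
        return max(results) if maximizing else min(results)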

- More Here
