Tuesday, December 29, 2015

Philosophy Will Be The Key That Unlocks Artificial Intelligence

But no brain on Earth is yet close to knowing what brains do in order to achieve any of that functionality. The enterprise of achieving it artificially – the field of "artificial general intelligence" or AGI – has made no progress whatever during the entire six decades of its existence.

Despite this long record of failure, AGI must be possible. That is because of a deep property of the laws of physics, namely the universality of computation. It entails that everything that the laws of physics require physical objects to do can, in principle, be emulated in arbitrarily fine detail by some program on a general-purpose computer, provided it is given enough time and memory.
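To make "universality" concrete, here is a minimal sketch in Python (my illustration, not from the article): a single general-purpose program that can emulate any Turing machine whose rule table it is handed as data. The unary-incrementer machine below is a hypothetical example chosen for brevity.

    # One fixed, general-purpose program that emulates ANY Turing machine,
    # given that machine's rule table as plain data -- a toy instance of
    # the universality of computation described above.

    def run_turing_machine(rules, tape, state="start", head=0, max_steps=10_000):
        # rules: (state, symbol) -> (symbol_to_write, move, next_state),
        # where move is -1 (left) or +1 (right); "_" is the blank symbol.
        cells = dict(enumerate(tape))
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, "_")
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += move
        return "".join(cells[i] for i in sorted(cells))

    # A hypothetical example machine: append one "1" to a unary number.
    increment_rules = {
        ("start", "1"): ("1", +1, "start"),  # scan right past the 1s
        ("start", "_"): ("1", +1, "halt"),   # write a final 1, then halt
    }

    print(run_turing_machine(increment_rules, "111"))  # prints "1111"

The emulator itself never changes; only the data describing the machine does. That is the point of the argument: a brain is a physical object, so whatever it does is, in principle, within reach of some program running on hardware like this, given enough time and memory.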

[---]


That AGIs are "people" has been implicit in the very concept from the outset. If there were a program that lacked even a single cognitive ability that is characteristic of people, then by definition it would not qualify as an AGI; using non-cognitive attributes (such as percentage carbon content) to define personhood would be racist, favouring organic brains over silicon brains. But the fact that the ability to create new explanations is the unique, morally and intellectually significant functionality of "people" (humans and AGIs), and that they achieve this functionality by conjecture and criticism, changes everything.

Currently, personhood is often treated symbolically rather than factually – as an honorific, a promise to pretend that an entity (an ape, a foetus, a corporation) is a person in order to achieve some philosophical or practical aim. This isn't good. Never mind the terminology; change it if you like, and there are indeed reasons for treating various entities with respect, protecting them from harm and so on. All the same, the distinction between actual people, defined by that objective criterion, and other entities, has enormous moral and practical significance, and is going to become vital to the functioning of a civilisation that includes AGIs.

[---]

Some people are wondering whether we should welcome our new robot overlords and/or how we can rig their programming to make them constitutionally unable to harm humans (as in Asimov's "three laws of robotics"), and/or prevent them from acquiring the theory that the universe should be converted into paperclips. That's not the problem. It has always been the case that a single exceptionally creative person can be thousands of times as productive, economically, intellectually, or whatever, as most people; and that such a person, turning their powers to evil instead of good, can do enormous harm.

These phenomena have nothing to do with AGIs. The battle between good and evil ideas is as old as our species and will continue regardless of the hardware on which it is running. The issue is: we want the intelligences with (morally) good ideas always to defeat the evil intelligences, biological and artificial; but we are fallible, and our own conception of "good" needs continual improvement. How should society be organised so as to promote that improvement? "Enslave all intelligence" would be a catastrophically wrong answer, and "enslave all intelligence that doesn't look like us" would not be much better.

One implication is that we must stop regarding education (of humans or AGIs alike) as instruction – as a means of transmitting existing knowledge unaltered, and causing existing values to be enacted obediently. As Karl Popper wrote (in the context of scientific discovery, but it applies equally to the programming of AGIs and the education of children): "there is no such thing as instruction from without … We do not discover new facts or new effects by copying them, or by inferring them inductively from observation, or by any other method of instruction by the environment. We use, rather, the method of trial and the elimination of error." That is to say, conjecture and criticism. Learning must be something that newly created intelligences do, and control, for themselves.
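Popper's "method of trial and the elimination of error" has a simple mechanical analogue: generate-and-test. A toy sketch in Python (my illustration, not the author's; the target string and mutation scheme are invented, and no claim is made that such a loop amounts to genuine creativity): conjectures are blind variations, and criticism is an error measure that eliminates the worse ones.

    import random

    # Toy "conjecture and criticism" loop: propose random variants (trial)
    # and keep only those that survive an error test (elimination of error).

    TARGET = "conjecture"                  # invented target, for illustration
    ALPHABET = "abcdefghijklmnopqrstuvwxyz"

    def criticise(guess):
        # Criticism as an error count: mismatched characters vs. the target.
        return sum(a != b for a, b in zip(guess, TARGET))

    def vary(guess):
        # Conjecture as blind variation: change one character at random.
        i = random.randrange(len(guess))
        return guess[:i] + random.choice(ALPHABET) + guess[i + 1:]

    best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
    while criticise(best) > 0:
        candidate = vary(best)
        if criticise(candidate) <= criticise(best):
            best = candidate               # the conjecture survives criticism
    print(best)                            # eventually reaches "conjecture"

Note what the sketch leaves out, which is the essay's whole point: here the criterion of criticism is fixed in advance by the programmer, whereas a genuine learner must create and improve its own criteria for itself.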


- More Here
