Profile of Douglas Hofstadter - The Man Who Would Teach Machines to Think is the best piece I read on AI this year. Whatever little AI I learned in the past few years doesn't even fall under his radar of interest, so it's about time I started reading his books.
“It depends on what you mean by artificial intelligence.” Douglas Hofstadter is in a grocery store in Bloomington, Indiana, picking out salad ingredients. “If somebody meant by artificial intelligence the attempt to understand the mind, or to create something human-like, they might say—maybe they wouldn’t go this far—but they might say this is some of the only good work that’s ever been done.”
Hofstadter says this with an easy deliberateness, and he says it that way because for him, it is an uncontroversial conviction that the most-exciting projects in modern artificial intelligence, the stuff the public maybe sees as stepping stones on the way to science fiction—like Watson, IBM’s Jeopardy-playing supercomputer, or Siri, Apple’s iPhone assistant—in fact have very little to do with intelligence. For the past 30 years, most of them spent in an old house just northwest of the Indiana University campus, he and his graduate students have been picking up the slack: trying to figure out how our thinking works, by writing computer programs that think.
Their operating premise is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself. Computers are flexible enough to model the strange evolved convolutions of our thought, and yet responsive only to precise instructions. So if the endeavor succeeds, it will be a double victory: we will finally come to know the exact mechanics of our selves—and we’ll have made intelligent machines.
Here's Peter Norvig, author of Artificial Intelligence: A Modern Approach, on Douglas Hofstadter's work:
“I thought he was tackling a really hard problem, and I guess I wanted to do an easier problem.”
This kind of dedication requires a lifetime of perseverance and sacrifice:
“There are very few ideas in science that are so black-and-white that people say ‘Oh, good God, why didn’t we think of that?’ ” says Bob French, a former student of Hofstadter’s who has known him for 30 years. “Everything from plate tectonics to evolution—all those ideas, someone had to fight for them, because people didn’t agree with those ideas. And if you don’t participate in the fight, in the rough-and-tumble of academia, your ideas are going to end up being sidelined by ideas which are perhaps not as good, but were more ardently defended in the arena.”
Hofstadter hasn’t been to an artificial-intelligence conference in 30 years. “There’s no communication between me and these people,” he says of his AI peers. “None. Zero. I don’t want to talk to colleagues that I find very, very intransigent and hard to convince of anything. You know, I call them colleagues, but they’re almost not colleagues—we can’t speak to each other.”