Saturday, May 7, 2016

Wisdom Of The Week

Even though there's a lot of hype about AI and a lot of money being invested in AI, I feel like the field is headed in the wrong direction. It has found a local maximum: there's a lot of low-hanging fruit right now in one particular direction, mainly deep learning and big data. People are very excited about big data and what it's giving them right now, but I'm not sure it's taking us closer to the deeper questions in artificial intelligence, like how we understand language or how we reason about the world.

[---]

Natural language understanding is coming along slowly. You wouldn't be able to dictate this conversation into Siri and expect it to understand anything whatsoever, but you could get most of the words right, and that's a big improvement. It turns out that it works best when there's a lot of data available for brute force. When you're doing speech recognition on white males who are native speakers, in a quiet room, it works pretty well. But if you're in a noisy environment, or you're not a native speaker, or if you're a woman or a child, the speech recognition doesn't work that well. Speech recognition is brute force. It's not brute force in the same way as Deep Blue, which considered a lot of positions; it's brute force in the sense that it needs a lot of data to work well.

[---]

What we're trying to address is what I call the problem of sparse data: if you have a small amount of data, how do you solve a problem? The ultimate sparse-data learners are children. They get tiny amounts of data about language, and by the time they're three years old they've figured out the whole linguistic system. I wouldn't say that we are directly neuroscience-inspired; we're not using an algorithm that I know for a fact children use. But we are trying to look, to some extent, at how you might solve some of the problems that children solve. Instead of just memorizing all the training data, how might you do something deeper and more abstract in order to learn better? I don't run experiments on my children, at least not very often, but I observe them very carefully. My wife, who's also a developmental psychologist, does too. We are super well calibrated to what the kids are doing: what they've just learned, what their vocabulary is, what their syntax is. We take note of what they do.

[---]

I did want to say just a little bit about neuroscience and its relation to AI. One model here is that the solution to all the problems we've been talking about is to simulate the brain. This is the Henry Markram and Ray Kurzweil approach. Kurzweil made a famous bet with the Long Now Foundation about when we will get to AI, and he based it on when he felt we would come to understand the brain. My sense is that we're not going to understand the brain anytime soon; there's too much complexity there. The models that people build have only one or two kinds of neurons, with many of them wired together. But if you look at the actual biology, there are hundreds or maybe thousands of kinds of neurons in the brain, each synapse contains hundreds of different molecules, and the interconnections within the brain are vastly more complicated than we ever imagined.

Rather than using neuroscience as a path to AI, maybe we use AI as a path to neuroscience. That level of complexity is something that human beings can't understand on their own. We need better AI systems before we'll understand the brain, not the other way around.


- Is Big Data Taking Us Closer to the Deeper Questions in Artificial Intelligence?, A Conversation With Gary Marcus