Saturday, November 22, 2014

Wisdom Of The Week

I'll give you a few examples of what I mean by that. Maybe I'll start with Netflix. The thing about Netflix is that there isn't much on it. There's a paucity of content on it. If you think of any particular movie you might want to see, the chances are it's not available for streaming; that's what I'm talking about. And yet there's this recommendation engine, and the recommendation engine has the effect of serving as a cover to distract you from the fact that there's very little available on it. And yet people accept it as being intelligent, because a lot of what's available is perfectly fine.

The one thing I want to say about this is I'm not blaming Netflix for doing anything bad, because the whole point of Netflix is to deliver theatrical illusions to you, so this is just another layer of theatrical illusion—more power to them. That's them being a good presenter. What's a theater without a barker on the street? That's what it is, and that's fine. But it does contribute, at a macro level, to this overall atmosphere of accepting the algorithms as doing a lot more than they do. In the case of Netflix, the recommendation engine is serving to distract you from the fact that there's not much choice anyway.

There are other cases where the recommendation engine is not serving that function, because there is a lot of choice, and yet there's still no evidence that the recommendations are particularly good. There's no way to compare them to an alternative, so you don't know what might have been. If you want to put the work into it, you can play with that; you can try to erase your history, or have multiple personas on a site to compare them. That's the sort of thing I do, just to get a sense. I've also had a chance to work on the algorithms themselves, on the back side, and they're interesting, but they're vastly, vastly overrated.

I want to get to an even deeper problem, which is that there's no way to tell where the border is between measurement and manipulation in these systems. For instance, if the theory is that you're getting big data by observing a lot of people who make choices, and then you're doing correlations to make suggestions to yet more people, if the preponderance of those people have grown up in the system and are responding to whatever choices it gave them, there's not enough new data coming into it for even the most ideal or intelligent recommendation engine to do anything meaningful.

In other words, the only way for such a system to be legitimate would be for it to have an observatory that could observe in peace, not being sullied by its own recommendations. Otherwise, it simply turns into a system that measures which manipulations work, as opposed to which ones don't work, which is very different from a virginal and empirically careful system that's trying to tell what recommendations would work had it not intervened. That's a pretty clear thing. What's not clear is where the boundary is.
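To make the feedback-loop point concrete, here is a minimal simulation sketch (my illustration, not Lanier's; the item counts, the 0.9 acceptance rate, and all names are assumptions). A naive engine that retrains on clicks it induced itself converges on a popularity monoculture, so its logs end up measuring its own manipulations rather than organic preference:

```python
import random

random.seed(0)

N_ITEMS = 20
N_USERS = 500
N_ROUNDS = 30

# Hidden "organic" preference each user would express without any recommender.
true_pref = [random.randrange(N_ITEMS) for _ in range(N_USERS)]

# Click counts the recommender learns from; starts uniform.
clicks = [1] * N_ITEMS

def recommend():
    # Naive engine: recommend whatever has been clicked most so far.
    return max(range(N_ITEMS), key=lambda i: clicks[i])

for _ in range(N_ROUNDS):
    for user in range(N_USERS):
        rec = recommend()
        # Assumed exposure bias: users mostly accept what is put in front
        # of them, and only occasionally act on their organic preference.
        chosen = rec if random.random() < 0.9 else true_pref[user]
        clicks[chosen] += 1

top = max(range(N_ITEMS), key=lambda i: clicks[i])
share = clicks[top] / sum(clicks)
print(f"item {top} captured {share:.0%} of all clicks")
# The log now says "everyone loves this item", but that signal was mostly
# manufactured by the recommender's own exposure decisions -- the
# measurement and the manipulation are inseparable.
```

The 0.9 acceptance rate is only a stand-in for exposure bias; any value well above chance produces the same collapse, which is why recovering the organic signal would require the kind of untouched "observatory" described above.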

[---]

I haven't gone through a whole litany of reasons that the mythology of AI does damage. There's a whole other problem area that has to do with neuroscience, where if we pretend we understand things before we do, we do damage to science, not just because we raise expectations and then fail to meet them repeatedly, but because we confuse generations of young scientists. Just to be absolutely clear, we don't know how most kinds of thoughts are represented in the brain. We're starting to understand a little bit about some narrow things. That doesn't mean we never will, but we have to be honest about what we understand in the present.

A retort to that caution is that there's some exponential increase in our understanding, so we can predict that we'll understand everything soon. To me, that's crazy, because we don't know what the goal is. We don't know what the scale of achieving the goal would be... So to say, "Well, just because I'm accelerating, I know I'll reach my goal soon," is absurd if you don't know the basic geography which you're traversing. As impressive as your acceleration might be, reality can also be impressive in the obstacles and the challenges it puts up. We just have no idea.

This is something I've called, in the past, "premature mystery reduction," and it's a reflection of poor scientific mental discipline. You have to be able to accept what your ignorances are in order to do good science. To reject your own ignorance just casts you into a silly state where you're a lesser scientist. I don't see that so much in the neuroscience field, but it comes so much from the computer world, which is so influential and has so much money behind it that it starts to bleed over into all kinds of other things. A great example is the Human Brain Project in Europe, which is a lot of public money going into science that's very influenced by this point of view, and it has upset some in the neuroscience community for precisely the reason I described.


- Jaron Lanier, The Myth Of AI

