Notice the moral ambiguity in this problem, and how much harder that ambiguity makes it to work with. Some people, including many who gravitate to technical fields like AI research, would prefer to stick to engineering problems, where there's a clear right answer. To some extent that's fine, but they then have to admit that they have no say in how their work will affect the world. They're essentially a pawn in the hands of whoever decides what topic they work on, and that's normally whoever supplies the money.
AI safety, as it currently stands, allows the AI world to feel that it does have control over what it's creating. Of course, researchers say AI safety is a hard problem that requires more work, but they feel they have basically pinned down what needs to be done to avoid things turning out badly, and that it is, at this point, an engineering problem.
For those AI researchers who are not comfortable being a pawn in the game, the right place to begin is with the high-level questions of what we want AI to be and what we don't want it to be, and clearly the ethics of building killing machines is a big part of these questions. That means spending time to understand parts of the world outside your familiar culture, broaching topics that make people uncomfortable, and tackling morally ambiguous questions that can't be solved as cleanly as technical ones.
AI safety blocks people from doing these things, because it gives the illusion that the matter is already being dealt with. That’s the whole purpose of this sort of shadow-boxing: to allow people the comforting but false belief that they’re wrestling with the big issues. Oh, you’re concerned about how AI is shaping the world? Great, join the AI safety team, we’ve already identified the key areas for you to work on. We even have metrics and benchmarks and datasets, so just engineer a way to make one of these scores higher and you’re doing your bit to make AI safe.
---
AI is a subject I came to out of a quasi-spiritual impulse to understand the nature of the mind and the self, and it is now getting roped into the most powerful and destructive systems on the planet, including the military-industrial complex and, potentially, the next major global conflicts. Trying to navigate this central and rapidly changing position brings a host of new questions: questions that are unfamiliar, controversial, and ambiguous. So far the AI world has barely found the courage even to ask them.