A long, long time ago, Michael Lewis said something very wise in an interview about his first book, Liar's Poker. I am paraphrasing here: he wrote it as a cautionary tale about Wall Street excess, yet droves of readers treated it as a how-to guide for getting rich.
I have personally lived and worked inside these bubble industries - through the dot-com era, the real estate crisis, and now, as irony would have it, in AI.
What I am seeing with AI is deja vu - millions are using it as a 'how to guide' to make a quick buck, even though most know much of it is pure snake oil that is going to come down sooner or later.
Tulips to AI - human beings never freaking learn... well, actually, they are freakishly good at self-deception (hence my love for Robert Trivers' work).
Not many people make this connection, but Andrew is not like other people - he puts Thiel, Musk, and Altman in the same bucket: troubled creatures without morals.
It's not clear to me that scaling AI models aggressively somehow makes them more dangerous in terms of military applications. To me, the things that are dangerous for military applications are actually extremely simple AI models. The reporting on Lavender, the system the Israelis were using to identify targets in Palestine - that was an extremely basic machine learning model, practically just linear algebra. And the other thing people worry about is autonomous weapons, and you do not use large language models to develop autonomous weapons. You use things like computer vision for identifying a target and then autonomously operating the weapon.
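To make the "practically just linear algebra" point concrete, here is a minimal, purely illustrative sketch of what such a basic scoring model amounts to: a dot product, a bias, and a threshold. Nothing here describes Lavender's actual implementation; the weights and features are made up for the sketch.

```python
# Purely illustrative: a "basic ML model that is practically just linear algebra".
# A linear classifier scores an input with a dot product, a bias, and a threshold.
import numpy as np

rng = np.random.default_rng(0)

weights = rng.normal(size=4)   # hypothetical learned weights
bias = -0.5                    # hypothetical learned bias

def score(features: np.ndarray) -> float:
    """Logistic score in [0, 1]: sigmoid(w . x + b)."""
    z = weights @ features + bias
    return 1.0 / (1.0 + np.exp(-z))

x = rng.normal(size=4)         # one input's feature vector (made up)
print(score(x) > 0.5)          # binary decision from a single matrix-vector product
```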
[---]
A lot of the concerns people have about AI and the military are actually about totally different types of technologies than what these companies are building. But then the companies use the confusion to their advantage, to say: oh yeah, keep giving us all the resources to build this wholly unrelated AI technology.
[---]
I'm not that interested in boardroom struggles and all that stuff, I have to say. But the one thing I did get from that is that we forget these people are humans. The very brilliant ones, the ones making a lot of these decisions - Altman, Sutskever, Murati, Musk, Thiel - they're all complicated, flawed human beings. Just because they're working in this industry doesn't mean they have some sort of super intelligence or super morality. They don't. Altman himself - and this brings us to another, just upsetting thing, which is his sister, this other story that comes out later and also begins to create a general sense of unease about this guy. She has claimed publicly on many occasions that she was abused sexually by her family, including her brother, for many years. She ended up in a pretty rough state. She was basically reduced to OnlyFans to keep herself going.
And she's the sister of this person. Of course, anybody outside a family's dynamics is in the dark - it's something I can't understand and don't pretend to understand - but nonetheless, the disparity between this poor woman's utter cutoff, her utter isolation and despair, and the massive enterprise her brother is undertaking is...
[---]
Where we need to focus AI development in the future is moving away from large-scale models that are intended to be some kind of general-purpose tool. We should really be focusing on small, task-specific models again, which is what AI actually used to be. And the reason is that it's so much less energy-intensive.
You can train a cancer-detection AI model on something like a single powerful computer. You do not actually need cities of iPhones, as you so eloquently put it. And that's very little cost for an extraordinary benefit. We want more cancer-detecting AI.
We also want more AI that can reduce the energy consumption of a building. We want more AI that can do more accurate weather prediction and climate-crisis prediction, so that we can evacuate people more effectively when climate disasters strike.
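To make the "powerful computer, not cities of iPhones" point concrete, here is a minimal sketch of a small, task-specific model: a plain linear classifier trained on scikit-learn's built-in breast cancer dataset. It illustrates the scale argument only - it is not anyone's actual diagnostic system - and it trains in well under a second on an ordinary laptop.

```python
# A small, task-specific model: cancer detection from 30 tabular features,
# trained on one ordinary machine in a fraction of a second.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)   # 569 samples, 30 features

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A feature scaler plus a plain linear model: small, cheap, task-specific.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```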
But what Altman might say in return is: you don't understand, AGI will solve climate change. Of course - he says that all the time. We'll get nuclear fusion within a few minutes once the Big Brain comes online. What are we going to call this thing?
What the fuck are we going to call this giant bloody thing that we all have to worship or that has the supreme intelligence? But yeah, that is the ultimate win-all argument, which is that, look, what we're developing is so smart, it will solve all the problems it creates.
And I have a facetious answer and a more legitimate answer. My facetious answer: throughout history there have been people who promised some kind of thing that would solve all your problems, and they have always been charlatans. If someone knocked on your door in the Middle Ages and said, I have this potion that's going to solve all your problems, you just have to give me everything - your firstborn child, everything - you would be like, wait a minute, something's not adding up here. Fast forward to today, and that is essentially what these AI companies are saying.
They're saying: give us everything, and then we will give you a solution to all of your problems. If you just abstract it to that level, it suddenly becomes blatantly obvious what's actually happening. This is entirely a scam. But the less facetious argument is that they are telling us to ignore all of the current, real, present-day problems based on the promise of something potentially arriving in the future. There is no guarantee that this technology is going to deliver all the things they say it will. So how long are we willing to burn down our planet, run down our resources, and gouge out our economy for a speculative payoff? At what point do we decide: wait a minute, why don't we actually just reinvest all this capital in solutions that we know will pay off?
[---]
It needs to be dealt with by people who just live ordinary lives. It needs to be brought back to the human. And what some of these individuals - I think of Thiel particularly - aspire to is truly, truly beyond responsible in my view, and inhuman. You see in their desire to live forever the obvious, natural conclusion of where they are going: they want to be gods. And AI, AGI, is really their pathway to becoming gods. And we're not gods. It is insane to try, and we're going to destroy ourselves if we do.
[---]
And the one possible solace - that the people leading these companies are actually solid, moral, sane people - seems to be lacking. I mean, honestly... You just observe Elon Musk's tweets and you're like: I understand this man is obviously a genius in many ways, right? The evidence of his achievements is overwhelming.
But he's out of his fucking mind. And the things he's saying are just so loony. The story you tell of Sam Altman is of a deeply disturbed person. A really fucked-up person, I'm sorry. I don't know where he's coming from. I feel kind of proud that a young, openly gay man has done this. But we gays often spend a lot of childhood alone, looking at computers and things. It's not an accident that we're overrepresented at the top of many companies.
But at the same time, boy, are they not that well. And they don't have the values, structures, and morals that most of us would understand as solid. I mean, Peter Thiel says he's a Christian because he's read René Girard, but I'm sorry, no, I don't see it that way at all.