
Sunday, March 29, 2026

Grounded In Reality Piece On AI Mania

I don’t say that because I think that AI models are bad or because I think they won’t get better; I think that AI models are very good and will get much better. No. The fault is not with the models, but with us. The world is run by humans, and because it’s run by humans—entities that are smelly, oily, irritable, stubborn, competitive, easily frightened, and above all else inefficient—it is a world of bottlenecks. And as long as we have human bottlenecks, we’ll need humans to deal with them: we will have, in other words, complementarity.

People frequently underrate how inefficient things are in practically any domain, and how frequently these inefficiencies are reducible to bottlenecks caused simply by humans being human. Laws and regulations are obvious bottlenecks. But so are company cultures, and tacit local knowledge, and personal rivalries, and professional norms, and office politics, and national politics, and ossified hierarchies, and bureaucratic rigidities, and the human preference to be with other humans, and the human preference to be with particular humans over others, and the human love of narrative and branding, and the fickle nature of human preferences and tastes, and the severely limited nature of human comprehension. And the biggest bottleneck is simply the human resistance to change: the fact that people don’t like shifting what they’re doing. All of these are immensely powerful. Production processes are governed by their least efficient inputs: the more efficient the most efficient inputs, the more important the least efficient inputs.
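A rough way to see the arithmetic behind that last sentence (my illustration, not the author's) is the Amdahl's-law bound: if a fraction p of a process is sped up by a factor s while the human-bottleneck remainder stays put, the overall speedup is

$$ \text{speedup}(p, s) = \frac{1}{(1 - p) + p/s} \;\le\; \frac{1}{1 - p} $$

Even if 90% of the work becomes infinitely fast (p = 0.9, s → ∞), the whole process gets at most 10x faster; the untouched 10% now dominates, which is exactly the sense in which the least efficient inputs govern the outcome.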

In the long run, we should expect the power of technology to overcome these bottlenecks, in the same way that a river erodes a stone over many years and decades—just as how in the early decades of the twentieth century, the sheer power of what electricity could accomplish gradually overcame the bottlenecks of antiquated factory infrastructure, outdated workflows, and the conservatism of hidebound plant managers. This process, however, takes time: it took decades for electricity, among the most powerful of all general-purpose technologies, to start impacting productivity growth. AI will probably be much faster than that, not least because it can be agentic in a way that electricity cannot. But these bottlenecks are real and important and are obvious if you look at any part of the real world. And as long as those bottlenecks exist, no matter the level of AI capabilities, we should expect a real and powerful complementarity between human labor and AI, simply because the “human plus AI” combination will be more productive than AI alone.

- More Here


Friday, February 13, 2026

No-Technological-Solution Problem

Bingo! What an insight!

We sapiens fucked things up, are still fucking things up, and promise to continue fucking things up in the future. 

Changing our minds and behavior is not in the equation, but my species is planning to innovate the fuck out of technologies to clean up the mess it created while it continues to fuck things up. 

Hmm, god bless my species. 

Wonderful, wonderful interview with Dan Brooks about his new book A Darwinian Survival Guide: Hope for the Twenty-First Century:

Well, the primary thing that we have to understand or internalize is that what we’re dealing with is what is called a no-technological-solution problem. In other words, technology is not going to save us, real or imaginary. We have to change our behavior. If we change our behavior, we have sufficient technology to save ourselves. If we don’t change our behavior, we are unlikely to come up with a magical technological fix to compensate for our bad behavior. 

This is why Sal and I have adopted a position that we should not be talking about sustainability, but about survival, in terms of humanity’s future. Sustainability has come to mean, what kind of technological fixes can we come up with that will allow us to continue to do business as usual without paying a penalty for it? As evolutionary biologists, we understand that all actions carry biological consequences. We know that relying on indefinite growth or uncontrolled growth is unsustainable in the long term, but that’s the behavior we’re seeing now.

Stepping back a bit. Darwin told us in 1859 that what we had been doing for the last 10,000 or so years was not going to work. But people didn’t want to hear that message. So along came a sociologist who said, “It’s OK; I can fix Darwinism.” This guy’s name was Herbert Spencer, and he said, “I can fix Darwinism. We’ll just call it natural selection, but instead of survival of what’s-good-enough-to-survive-in-the-future, we’re going to call it survival of the fittest, and it’s whatever is best now.” Herbert Spencer was instrumental in convincing most biologists to change their perspective from “evolution is long-term survival” to “evolution is short-term adaptation.” And that was consistent with the notion of maximizing short term profits economically, maximizing your chances of being reelected, maximizing the collection plate every Sunday in the churches, and people were quite happy with this.

Well, fast-forward and how’s that working out? Not very well. And it turns out that Spencer’s ideas were not, in fact, consistent with Darwin’s ideas. They represented a major change in perspective. What Sal and I suggest is that if we go back to Darwin’s original message, we not only find an explanation for why we’re in this problem, but, interestingly enough, it also gives us some insights into the kinds of behavioral changes we might want to undertake if we want to survive.

To clarify, when we talk about survival in the book, we talk about two different things. One is the survival of our species, Homo sapiens. We actually don’t think that’s in jeopardy. Now, Homo sapiens of some form or another is going to survive no matter what we do, short of blowing up the planet with nuclear weapons. What’s really important is trying to decide what we would need to do if we wanted what we call “technological humanity,” or better said “technologically-dependent humanity,” to survive.

Put it this way: If you take a couple of typical undergraduates from the University of Toronto and you drop them in the middle of Beijing with their cell phones, they’re going to be fine. You take them up to Algonquin Park, a few hours’ drive north of Toronto, and you drop them in the park, and they’re dead within 48 hours. So we have to understand that we’ve produced a lot of human beings on this planet who can’t survive outside of this technologically dependent existence. 

[---]

That’s actually a really good analogy to use, because of course, as you probably know, the temperatures around the Norwegian Seed Bank are so high now that the Seed Bank itself is in some jeopardy of survival. The place where it is was chosen because it was thought that it was going to be cold forever, and everything would be fine, and you could store all these seeds now. And now all the area around it is melting, and this whole thing is in jeopardy. This is a really good example of letting engineers and physicists be in charge of the construction process, rather than biologists. Biologists understand that conditions never stay the same; engineers engineer things for the way things are, as if that is the way things are always going to be. Physicists are always looking for some sort of general law that holds in perpetuity, and biologists are never under any illusions about this. Biologists understand that things are always going to change.

[---]

One of the things that’s really important for us to focus on is to understand why it is that human beings are so susceptible to adopting behaviors that seem like a good idea, and are not. Sal and I say, here are some things that seem to be common to human misbehavior, with respect to their survival. One is that human beings really like drama. Human beings really like magic. And human beings don’t like to hear bad news, especially if it means that they’re personally responsible for the bad news. And that’s a very gross, very superficial thing, but beneath that is a whole bunch of really sophisticated stuff about how human brains work, and the relationship between human beings’ ability to conceptualize the future, but living and experiencing the present.

There seems to be a mismatch within our brain — this is an ongoing sort of sloppy evolutionary phenomenon. So that’s why we spend so much time in the first half of the book talking about human evolution, and that’s why we adopt a nonjudgmental approach to understanding how human beings have gotten themselves into this situation.


 

Sunday, January 4, 2026

US, China, AI & More - Dan Wang 2025 Letter

Dan Wang's 2025 letter, as usual, is full of insights, plus it's funny when he pokes at the PayPal mafia morons.

Narrowness of mind is something that makes me uneasy about the tech world. Effective altruists, for example, began with sound ideas like concern for animal welfare as well as cost-benefit analyses for charitable giving. But these solid premises have launched some of its members towards intellectual worlds very distant from moral intuitions that most people hold; they’ve also sent a few into jail. The well-rounded type might struggle to stand out relative to people who are exceptionally talented in a technical domain. Hedge fund managers have views about the price of oil, interest rates, a reliably obscure historical episode, and a thousand other things. Tech titans more obsessively pursue a few ideas — as Elon Musk has on electric vehicles and space launches — rather than developing a robust model of the world.

So the 20-year-olds who accompanied Mr. Musk into the Department of Government Efficiency did not, I would say, distinguish themselves with their judiciousness. The Bay Area has all sorts of autistic tendencies. Though Silicon Valley values the ability to move fast, the rest of society has paid more attention to instances in which tech wants to break things. It is not surprising that hardcore contingents on both the left and the right have developed hostility to most everything that emerges from Silicon Valley. 

[---]

One of the things I like about the finance industry is that it might be better at encouraging diverse opinions. Portfolio managers want to be right on average, but everyone is wrong three times a day before breakfast. So they relentlessly seek new information sources; consensus is rare, since there are always contrarians betting against the rest of the market. Tech cares less for dissent. Its movements are more herdlike, in which companies and startups chase one big technology at a time. Startups don’t need dissent; they want workers who can grind until the network effects kick in. VCs don’t like dissent, showing again and again that many have thin skins. That contributes to a culture I think of as Silicon Valley’s soft Leninism. When political winds shift, most people fall in line, most prominently this year as many tech voices embraced the right. 

The two most insular cities I’ve lived in are San Francisco and Beijing. They are places where people are willing to risk apocalypse every day in order to reach utopia. Though Beijing is open only to a narrow slice of newcomers — the young, smart, and Han — its elites must think about the rest of the country and the rest of the world. San Francisco is more open, but when people move there, they stop thinking about the world at large. Tech folks may be the worst-traveled segment of American elites. People stop themselves from leaving in part because they can correctly claim to live in one of the most naturally beautiful corners of the world, in part because they feel they should not tear themselves away from inventing the future. More than any other topic, I’m bewildered by the way that Silicon Valley talks about AI.

[---]

It’s easy for conversations in San Francisco to collapse into AI. At a party, someone told me that we no longer have to worry about the future of manufacturing. Why not? “Because AI will solve it for us.” At another, I heard someone say the same thing about climate change. One of the questions I receive most frequently anywhere is when Beijing intends to seize Taiwan. But only in San Francisco do people insist that Beijing wants Taiwan for its production of AI chips. In vain do I protest that there are historical and geopolitical reasons motivating the desire, that chip fabs cannot be violently seized, and anyway that Beijing has coveted Taiwan for approximately seven decades before people were talking about AI.

[---]

By being the site of production, they have a keen sense of how to make technical improvements all the time. American scientists may be world leaders in dreaming up new ideas. But American manufacturers have been poor at building industries around these ideas. The history books point out that Bell Labs invented the first solar cell in 1957; today, the lab no longer exists while the solar industry moved to Germany and then to China. While Chinese universities have grown more capable at producing new ideas, it’s not clear that the American manufacturing base has grown stronger at commercializing new inventions.

[---]

So here’s a potential way that China succeeds. Beijing’s goal is to make nearly every important product in the world, while everyone else supplies its commodities and services. By making the country mostly self-sufficient, and by vigorously policing the outputs of LLMs and social media, Xi might hope to make China resilient. He is building Fortress China stone by stone in order to outlast the adversary. Beijing doesn’t have to replicate American diplomatic, cultural, and financial superpowerdom. It might hope that its prowess in advanced manufacturing might deter the US. And its success in manufacturing might directly destabilize the US: by delivering the coup de grace to the rustbelt, the US might shed a few million more manufacturing jobs over the next decade. The job losses combined with AI psychosis, social media, and all the problems with phones could make national politics meaningfully worse.

I don’t think this scenario is likely to be successful. Authoritarian systems have always hoped for the implosion of liberal democracies, while it is the liberal democracies that have a better track record of endurance. But I also don’t think that authoritarian countries are obviously wrong to bet that western polarization will get worse. So it’s up to the US and Europe to show that they can hold on to their values while absorbing the technological changes coming their way. 

[---]

I wish that the tech world could learn to present broader cultural appeal. I hope that Silicon Valley could learn some of the humorousness of New York (or at least LA.) It’s unfortunate that any show or movie made about Silicon Valley is full of awkward nerds; by contrast, Hollywood reliably finds attractive leads when it makes movies about Wall Street. So long as the tech world is talking about the Machine God and the Antichrist, so long as it declines to read more broadly, so long as it is mostly inward looking, it will continue to alienate big parts of the world.



Friday, November 21, 2025

AI Will Never Be A Shortcut To Wisdom

After nearly forty years teaching graduate students and advising some of the most inventive companies on the planet, I’ve earned the right to sigh a bit. But this isn’t about “kids these days.” In fact, it’s not about youth at all. The shift I’m seeing — this collapse of intellectual agility — is striking all generations. All cultures. All walks of life.

Studies on cognitive flexibility, coupled with anecdotal observations about the death of long-form journalism and the slow drift of reader attention, suggest something dire: We are growing unable to sit still with ambiguity. We no longer walk through the fog of a complex question — we skip across it, like stones. Our thoughts sprint, but the world is a marathon. And so, we are left with answers to the wrong questions.

What happens when we can no longer think through contradiction, paradox, tension? When climate change, homelessness, political division, and regional conflict are seen as disconnected problems with easy answers — when, in truth, they are tangled systems that resist simplicity?

The answer is only simple if you don’t understand the question.

This is the danger of living in a world where thinking is outsourced. Where cognition becomes project management. Where uncertainty is eliminated, not explored. Where truth is boxed and shelved, not wrestled with. If the world is a box of nails — individual facts, sharp and ready — then our minds become hammers. Tools of force and certainty. Banging out conclusions. Flattening nuance. And who builds a cathedral with a hammer? Who composes a symphony with a hammer?

This is no way to live. Because if you see the world as nails, you’ll mistake noise for knowledge. You’ll assume volume means validity. And when you no longer know how to recognize true expertise — because you yourself have never gained any — you will fall for the confident fool. The YouTube doctor. The Instagram monk. The LinkedIn philosopher.

[---]

If you want to reclaim your mind — not as a hammer, but as a compass, or a loom, or a garden — start here:

  • Ask better questions.
  • Be suspicious of certainty.
  • Practice long-form attention.
  • Sit with something confusing until it teaches you something.

We are not meant to be hammerheads in a world of nails. We are meant to wonder, to wander, to build. The true mind does not pound — it inquires, connects, reshapes. It listens to contradiction without collapsing. It plays. And most of all, it remembers that the world was never simple. It was just, for a while, flattened by search engines. 

- More Here


Sunday, October 26, 2025

Andrew Interview With Karen Hao - On AI

A long, long time ago, Michael Lewis said something very wise in an interview about his first book, Liar's Poker. I am paraphrasing here: 

"I wrote Liar's Poker to expose all the bad things Wall Street was doing, but little did I realize that a lot of people were using the book as a 'how-to guide'!" 

I personally have lived and worked in the very industries at the center of the dot-com bubble and the real estate crisis, and now, as irony would have it, in AI. 

What I am seeing with AI is déjà vu - millions are using it as a 'how-to guide' to make a quick buck, although most know this is pure snake oil and is going to come down sooner or later. 

From tulips to AI - human freaking beings never learn... well, actually, they are freaking good at self-deception (hence, I love, love Robert Trivers' work). 

Brilliant interview (albeit they missed an important technical point - none of this was pioneered by OpenAI) with Karen Hao, author of the new book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI.

Not many people make this connection, but Andrew is not like other people - he puts Thiel, Musk, and Altman in the same bucket, as troubled creatures without morals. 

It's not clear to me that scaling AI models aggressively somehow makes them more dangerous in terms of military applications. Like, to me, the things that are dangerous for military applications are actually extremely simple AI models. Like, the reporting on Lavender that the Israelis were using to identify targets in Palestine, that was, like, an extremely basic machine learning model that was practically just linear algebra. And the other thing that people worry about is autonomous weapons, and you do not use large language models to develop autonomous weapons. You use things like computer vision for identifying a target and then autonomously operating the weapon.

[---]

A lot of the concerns that people have about AI and military is they're actually talking about totally different types of technologies than what these companies are building. But then the companies are using the confusion to their advantage to say, oh, yeah, like keep giving us all the resources to build this completely wholly unrelated AI technology.

[---]

I'm not that interested in boardroom struggles and all that stuff, I have to say. But the one thing that I did get from that is that we forget these people are humans. The very brilliant ones, the ones that are making a lot of these decisions - Altman, Sutskever, Murati, Musk, Thiel - they're all complicated, flawed human beings. And just because they're working in this industry doesn't mean they have some sort of super intelligence or super morality. They don't. Altman himself, and this brings us to another, sort of just upsetting thing, which is his sister, which is this other story that comes out later, which also begins to just create a general sense of unease about this guy, because she has claimed publicly on many occasions that she was abused sexually by her family, including her brother, for many years. She ended up in a pretty rough state. I mean, she was basically reduced to OnlyFans to keep herself going.

And she's the sister of this person. And of course, anything anybody says about another family's dynamics is going to be dumb. It's obviously something I can't understand and don't pretend to understand, but nonetheless, the disparity between this poor woman's utter cutoff, utter isolation, and despair, and the massive enterprise that her brother is undertaking...

[---]

Where we need to focus AI development in the future is on moving away from large-scale models that are intended to be some kind of general-purpose tool. And we should really be focusing on small, task-specific models again, which is what AI actually used to be. And the reason is that it's so much less energy intensive.

You can train, you know, a cancer detection AI model on something like a powerful computer. You do not actually need cities of iPhones, as you so eloquently put it. And, like, that's, you know, that's very little cost for an extraordinary benefit. We want more cancer-detecting AI.

We also want more AI that can reduce the energy consumption of a building. We want more AI that can help do more accurate weather prediction and climate crisis prediction so that we can evacuate people more accurately when climate disasters strike.

But what Altman might say in return is, but you don't understand, AGI will solve climate change. Of course, he says that all the time. And that we'll get nuclear fission within a few minutes once the Big Brain comes online. What are we going to call this thing?

What the fuck are we going to call this giant bloody thing that we all have to worship or that has the supreme intelligence? But yeah, that is the ultimate win-all argument, which is that, look, what we're developing is so smart, it will solve all the problems it creates.

And I have a facetious answer and a more legitimate answer. My facetious answer is throughout history, there have been people that have promised some kind of thing that will solve all your problems. And they have always been charlatans. Like if someone knocked on your door in the medieval ages and was like, I have this potion that's going to solve all your problems, you just have to give me everything, like your firstborn child and everything. Like you would be like, wait a minute, something's not quite adding up here. And now fast forward to today, that is essentially what these AI companies are saying.

They're like, give us everything and then we will give you a solution to all of your problems. I mean, if you just abstract it to that level, it suddenly becomes blatantly obvious what's actually happening. This is entirely a scam. But the less facetious argument is like, they are telling us, ignore all of the current, real, present-day problems based on the promise of something potentially arriving in the future. They've never actually, you know, we cannot guarantee that this technology is going to deliver all these things that they say they will. So how long are we willing to burn down our planet and run down our resources and gouge out our economy and do all of these things for the speculative payoff? Like, at what point do we decide, wait a minute, why don't we actually just reinvest all this capital in solutions that we know will pay off?

[---]

It needs to be dealt with by people who just live ordinary lives. And it needs to be brought back to the human. And what some of these individuals, I think of Thiel particularly, their aspirations are truly, truly beyond irresponsible in my view, and inhuman. And you see in their desire to live forever the obvious natural conclusion of where they are going: they want to be gods. And AI and AGI is really their pathway to becoming gods. And we're not. And it is insane to do so. And we're going to destroy ourselves if we do it.

[---]

And the one possible solace, which would be that the people leading these companies are actually solid, moral, sane people, seems to be lacking. I mean, honestly, I mean... You just observe Elon Musk's tweets and you're like, I understand this man is obviously a genius in many ways, right? The evidence of his achievements is overwhelming.

But he's out of his fucking mind. And the things he's saying are just so loony. The story you tell of Sam Altman is of a deeply disturbed person. A really fucked up person. I'm sorry. I don't know where he's coming from. I feel... I feel kind of proud that a young gay man, openly gay man, has done this. But we gays, we often spend a lot of time in childhood alone looking at computers and things. I mean, it's not an accident that we're overrepresented in many ways at the top of many companies.

But at the same time, boy, are they not well. And they don't have... values, structures, morals that most of us would understand as solid. I mean, Peter Thiel says he's a Christian because he's read René Girard, but I'm sorry, no, I don't see it that way at all.


Saturday, October 25, 2025

Well-Defined Problems vs. Poorly-Defined Problems

I hate compliments. This is not fake humility; I really do hate compliments, and to make it worse, my red flags light up about the person who compliments me. In other words, I don't trust humans who compliment me. 

But there is one compliment, one word, that I liked the few times I received it, since I work hard for it. 

That word is - wisdom. A few times in my life, I heard someone utter the phrase - you are wise. 

And I gladly took that compliment as a commitment to work harder.

Work hard for what? To be not bad at poorly defined problems, a.k.a. trying to be a little less stupid tomorrow than I am today. 

This is such a wonderful article on the same theme - Why aren't smart people happier?

I think all of our various tests of intelligence aren’t as different as they seem. They’re all full of problems that have a few important things in common:

  • There are stable relationships between the variables.
  • There’s no disagreement about whether the problems are problems, or whether they’ve been solved.
  • They have clear boundaries; there is a finite amount of relevant information and possible actions.
  • The problems are repeatable. Although the details may change, the process for solving the problems does not.

I think a good name for problems like these is well-defined. Well-defined problems can be very difficult, but they aren’t mystical. You can write down instructions for solving them. And you can put them on a test. In fact, standardized test items must be well-defined problems, because they require indisputable answers. Matching a word to its synonym, finding the area of a trapezoid, putting pictures in the correct order—all common tasks on IQ tests—are well-defined problems.

Spearman was right that people differ in their ability to solve well-defined problems. But he was wrong that well-defined problems are the only kind of problems. “Why can’t I find someone to spend my life with?” “Should I be a dentist or a dancer?” and “How do I get my child to stop crying?” are all important but poorly defined problems. “How can we all get along?” is not a multiple-choice question. Neither is “What do I do when my parents get old?” And getting better at rotating shapes or remembering state capitals is not going to help you solve them.

We all share some blame with Spearman, of course, because everybody talks about smarts as if they’re one thing. Google “smartest people in the world” and most of the results will be physicists, mathematicians, computer scientists, and chess masters. These are all difficult problems, but they are well-defined, and that makes it easy to rank people. The best chess player in the world is the one who can beat everybody else. The best mathematician is the one who can solve the problems that nobody else could solve. That makes it seem like the best chess players and mathematicians are not just the smartest in their fields, but the smartest in the whole world.

THE POORLY DEFINED PROBLEM OF BEING ALIVE

There is, unfortunately, no good word for “skill at solving poorly defined problems.” Insight, creativity, agency, self-knowledge—they’re all part of it, but not all of it. Wisdom comes the closest, but it suggests a certain fustiness and grandeur, and poorly defined problems aren’t just dramatic questions like “how do you live a good life”; they’re also everyday questions like “how do you host a good party” and “how do you figure out what to do today.”

One way to spot people who are good at solving poorly defined problems is to look for people who feel good about their lives; “how do I live a life I like” is a humdinger of a poorly defined problem. The rules aren’t stable: what makes you happy may make me miserable. The boundaries aren’t clear: literally anything I do could make me more happy or less happy. The problems are not repeatable: what made me happy when I was 21 may not make me happy when I’m 31. Nobody else can be completely sure whether I’m happy or not, and sometimes I’m not even sure. In fact, some people might claim that I’m not really happy, no matter what I say, unless I accept Jesus into my heart or reach nirvana or fall in love—if I think I’m happy before all that, I’m simply mistaken about what happiness is!

This is why the people who score well on intelligence tests and win lots of chess games are no happier than the people who flunk the tests and lose at chess: well-defined and poorly defined problems require completely different problem-solving skills. Life ain’t chess! Nobody agrees on the rules, the pieces do whatever they want, and the board covers the whole globe, as well as the inside of your head and possibly several metaphysical planes as well.

[---]

So if you’re really looking for a transformative change in your happiness, you might be better off reading something ancient. The great thinkers of the distant past seemed obsessed with figuring out how to live good lives: Socrates, Plato, Aristotle, Epicurus, Buddha, Confucius, Jesus, Marcus Aurelius, St. Augustine, even up through Thoreau and Vivekananda. But at some point, this kind of stuff apparently fell out of fashion.

And hey, maybe that’s because there’s just no more progress to make on the poorly defined problem of “how do we live.” But most well-defined problems were once defined poorly. For example, “how do we land on the moon” was a hopelessly poorly defined problem for most of human history. It only makes sense if you know that the moon is a big rock you can land on and not, say, a god floating in the sky. We slowly put some definitions around that problem, and then one day we sent an actual dude to the moon and he walked around and was like “I’m on the moon now.” If we can do that, maybe we can also figure out how to live good lives. It certainly seems worth it to keep trying.


 

Friday, October 24, 2025

There Is Nothing Contrarian In This Guy

This guy doesn't like Silicon Valley's biased world.

Most Silicon Valley idiots believe either in the magic of free markets sans morality or in a techno-centric utopia, which is nothing but refurbished Christianity.

And this guy doesn't even attempt to refurbish - he takes magic directly from Christianity.

Don't get me wrong, these guys are intelligent in one field and one field only - they are hyper-hedgehogs.

Well, I am talking about Peter Thiel. Here's The Gospel According to Peter Thiel:

For Kawas, Thiel’s economic and social vision borders on the mystic. “The real meaning of zero to one,” Kawas says, “is to make something new . . . the idea that we’re not stuck in the past. We can make something new from nothing . . . [and] that changes the nature of reality.” When resentful people see the world as a zero-sum place, they start redistributing assets, assigning guilt and blame to scapegoats. Instead, Kawas explains, “You can do magic. You can do tech.” This is, he insists, a “deeply Christian idea.” (Thiel himself has frequently publicly identified as Christian, though it’s worth noting that there is no Christian tradition in which the provenance of creating out of nothing—ex nihilo—is not understood as the specific and unique prerogative of God, rather than a right afforded to human beings.) It “rejects the blame” that comes from the erroneous belief that there’s “no way to change the reality.”

Thus, techno-capitalism-as-miracle: the notion that a few brilliant individuals can radically reshape the limits of human reality, which are revealed to be in part the product of intellectual sluggishness and moral fear. Thielism is the belief that a human being can—on his way, say, to a San Francisco speakers’ panel—conjure an idea for reshaping prestigious education in America. Old things must pass away, one way or another.

For starters, if he is really a so-called contrarian, then he should drop the notion that a "few brilliant individuals can radically reshape the limits of human reality." No doubt he has articulated some smart ideas ("capitalism and competition aren't synonyms" is a great one that made me look at capitalism differently), but that doesn't make one a contrarian.

Buddha was a contrarian. Ashoka was a contrarian. John Adams and Thomas Jefferson were contrarians. Gandhi and Mandela were contrarians.

They shook the norms of centuries, and most importantly, they followed what they preached in their real lives and had huge skin in the game. To call this guy a contrarian is sad.

Why do so many smart people believe a single idea can change the world without any attempt to change people's minds, huh?

Do they even read any history? My hunch is they do but their ideologies are primarily driven by the want and need to be immortal. That's another flavor of the subjective needs of sapiens even after kicking the bucket.

One big thing that I changed my mind about during this COVID-19 phase is that I had deeply underestimated the lack of intelligence of some tech people who are rich and articulate (that is a deadly combination - always watch out for it). I am not talking about Peter Thiel here but about others like Naval, Musk, etc., who are talking gibberish about biology and complex systems.

If you want proof, please go read their tweets and other posts since March 2020.

These hyper-hedgehogs missed the simplest of lessons from Smith:
Smith, to put it bluntly, knew that there is all the difference in the world between learning how to get ahead in life and learning how to live life well. 


 

Thursday, October 23, 2025

Why Not Mars?

Funny, brilliant and insightful piece:

But fifty years of progress in miniaturization and software changed the balance between robots and humans in space. Between 1960 and 2020, space probes improved by something like six orders of magnitude[6], while the technologies of long-duration spaceflight did not. Boiling the water out of urine still looks the same in 2023 as it did in 1960, or for that matter 1060. Today’s automated spacecraft are not only strictly more capable[7] than human astronauts, but cost about a hundred times less [8] to send (though it’s hard to be exact, since astronauts have not gone anywhere since 1972[9]).

The imbalance between human and robot is so overwhelming that, despite the presence of a $250 billion[10] International Space Station National Laboratory, every major discovery made in space this century has come from robotic spacecraft [11]. In 2023, we simply take it for granted that if a rocket goes up carrying passengers, it’s not going to get any work done.

As for that space station, the jewel of human spaceflight, it exists in a state of nearly perfect teleological closure, its only purpose being to teach its creators how to build future spacecraft like it. The ISS crew spend most of their time fixing the machinery that keeps them alive, and when they have a free moment for science, they tend to study the effect of space on themselves. At 22 years old [12], the ISS is still as dependent on fresh meals and clean laundry sent from home as the most feckless grad student.

[---]

If the head of NOAA Ocean Exploration (budget: $25 million) or the U.S. Antarctic Program ($350 million) held a press conference announcing a plan to fulfill human destiny, they’d be carrying their belongings home in a cardboard box before sundown. But our space agency is held to a lower standard.

All this would be fine if it was just talk. But NASA spent more on their Moon and Mars programs in 2022 than the total budget of the National Science Foundation. And in 2024, they plan to start launching pieces of a new space station, the Gateway, which by the laws of orbital bureaucracy will lock us in to decades of having to invent reasons to go visit the thing.

Somehow we’ve embarked on the biggest project in history even though it has no articulable purpose, offers no benefits, and will cost taxpayers more than a good-sized war. Even the builders of the Great Pyramid at Giza could at least explain what it was for. And yet this project has sailed through an otherwise gridlocked system with the effortlessness of a Pentagon budget. Presidents of both parties now make landing on Mars an official goal of US space policy. Even billionaires who made their fortune automating labor on Earth agree that Mars must be artisanally explored by hand.

The whole thing is getting weird.

[---]

The chief technical obstacle to a Mars landing is not propulsion, but a lack of reliable closed-loop life support[48]. With our current capability, NASA would struggle to keep a crew alive for six months on the White House lawn, let alone for years in a Martian yurt.

The technology program required to close this gap would be remarkably circular, with no benefits outside the field of applied zero gravity zookeeping. The web of Rube Goldberg devices that recycles floating animal waste on the space station has already cost twice its weight in gold[49] and there is little appetite for it here on Earth, where plants do a better job for free.

I would compare keeping primates alive in spacecraft to trying to build a jet engine out of raisins. Both are colossal engineering problems, possibly the hardest ever attempted, but it does not follow that they are problems worth solving. In both cases, the difficulty flows from a very specific design constraint, and it’s worth revisiting that constraint one or ten times before starting to perform miracles of engineering.

What makes life support so vexing is that all the subcomponents interact with each other and with the crew. There’s no such thing as a life support unit test; you have to run the whole system in space under conditions that mimic the target mission. Reliability engineering for life support involves solving mysteries like why gunk formed on a certain washer on Day 732, then praying on the next run that your fix doesn’t break on Day 733. The process repeats until the first crew makes it home alive (figuratively speaking), at which point you declare the technology reliable and chill the champagne.

Unlike the medical research, there’s no way to predict how long these trials might take. A typical exploration profile[50] needs two different kinds of life support (for the spacecraft and the surface) that together have to work for about 1000 days. The spacecraft also has to demonstrate that it can go dormant for the time the crew is on Mars and still work when it wakes up.

Twenty years of tinkering with the much simpler systems on the space station have brought them no closer to reliability. And yet to get a crew to Mars, we’d need to get this stuff working like a Swiss watch. Humanity does not need a billion dollar shit dehydrator that can work for three years in zero gravity, but a Mars mission can’t leave Earth without it.


 

Saturday, September 27, 2025

What Keeps the Lights On

The first commercial electric power plant in North America opened in Appleton, Wisconsin, on August 20, 1882. The second, built by the famed inventor Thomas Edison in Manhattan, opened two weeks later, on September 4. The Appleton plant, on the Fox River, was also the world’s first hydroelectric plant. The plant channeled the Fox’s current through the plant to turn a turbine — an axle with waterwheel-style blades. The turbine, spinning, turned a set of gears. The gears rotated a cylinder of conductive metal (the rotor) past six big magnets (the stator). An electric current emerged.
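For what it's worth, the physics behind that last sentence (my gloss, not the excerpt's) is Faraday's law of induction: spinning the conductor past the magnets changes the magnetic flux through it, and a changing flux induces the voltage that drives the current,

$$ \mathcal{E} = -\frac{d\Phi_B}{dt} $$

where $\mathcal{E}$ is the induced electromotive force and $\Phi_B$ is the magnetic flux through the rotating conductor.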

Edison made money from his patents on electric plants — his plant in Manhattan was mainly a demo. The people in Appleton had to cover their costs, which included paying Edison to license his patents, by selling and distributing electricity. This meant putting up big poles all over town and stringing wire on them, a pricey endeavor. Unfortunately, the costs didn’t go down as the customer base grew. Connecting the ten-thousandth home was nearly as expensive as the first.

Equally high were the costs of maintaining the system. The flow of electrons in a power line is not, so to speak, friction-free — it heats the metal, a phenomenon called resistance. (Resistance is why the elements in an electric range glow red and get hot.) If resistance heat can’t dissipate, a metal wire will soften, expand, and lengthen. If the wire is a power line, the heat will make the line sag between its supports. If the line droops too low, it can spark out onto nearby trees or other objects, causing a spike in the current or shorting out the line entirely. Today’s electric cables reduce the risk of flashes by being built with a complex multilayer design and sheathed with insulation. But sagging power lines are still responsible for a large fraction of the big wildfires in the West.
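The "resistance heats the metal" point maps onto the standard Joule-heating relation (added here for concreteness, not from the excerpt): for a line of resistance R carrying current I, the wasted power is, to a first approximation,

$$ P_{\text{loss}} = I^2 R, \qquad I \approx \frac{P_{\text{delivered}}}{V} \;\Rightarrow\; P_{\text{loss}} \approx \frac{P_{\text{delivered}}^2\, R}{V^2} $$

which is also why grids transmit at very high voltage: pushing the same power at higher V means lower current, and so far less heating and sag.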

Swamped by the costs of building and maintaining infrastructure, the Appleton plant went belly-up in 1896. So did a host of other early utilities. Customers were furious. The upheaval lasted for decades — one reason why today’s utilities are heavily regulated and, often, publicly owned. It is also why the national grid is a patchwork jumble of older and newer elements, all hooked together higgledy-piggledy as they developed.

[---]

Minute by minute, power plants have to produce just about exactly as much electricity as is being used at the moment. Too much electricity is as much of a problem as too little. Water systems maintain a continuous flow by storing extra water in reservoirs, releasing it when needed, taking it in when there is surplus. For electricity, the equivalent of a reservoir is a battery. Battery costs have fallen in recent years, but storing electricity remains much more difficult and costly than storing water. As a result, electric power is still mostly generated, transported, and used in real time. The energy you use to turn on your monitor was a ray of sunshine or a puff of natural gas just milliseconds before.
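To make the reservoir analogy concrete, here is a minimal toy sketch (my own, not from the excerpt; every number and name in it is hypothetical) of the minute-by-minute balancing act the paragraph describes: generation has to track load, and a battery absorbs whatever small mismatch remains.

```python
# Toy minute-by-minute dispatch: generation must match load; a battery
# (the "reservoir") soaks up the residual mismatch. All figures are made up.
BATTERY_CAPACITY_MWH = 50.0

def run_minutes(minutes, battery_mwh=25.0, gas_max_mw=60.0):
    for load_mw, solar_mw in minutes:
        # Dispatch gas to cover whatever solar cannot, up to the plant's limit.
        gas_mw = min(max(load_mw - solar_mw, 0.0), gas_max_mw)
        imbalance_mw = solar_mw + gas_mw - load_mw   # surplus (+) or deficit (-)
        # One minute of imbalance, converted to MWh, flows into or out of the battery.
        battery_mwh += imbalance_mw / 60.0
        battery_mwh = min(max(battery_mwh, 0.0), BATTERY_CAPACITY_MWH)
        print(f"load={load_mw} MW  solar={solar_mw} MW  gas={gas_mw:.0f} MW  "
              f"battery={battery_mwh:.2f} MWh")

# Demand rises while a cloud cuts solar output; the battery covers the gap.
run_minutes([(90.0, 40.0), (100.0, 30.0), (110.0, 20.0)])
```

Real grids do this at much finer granularity and vastly larger scale, but the constraint is the same: without enough storage, every deviation has to be met by generators in real time.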

[---]

North America’s electric system is a miracle, but also kind of a mess. 

- More Here

I have immense gratitude for being alive in a time and place where I have the comfort of electricity. 

When I say I have gratitude, I thank water, electricity (and other basic needs) every day. 




Tuesday, September 2, 2025

The Devil Admits - We Are In An AI Bubble

I am in the field and I have been tired of this bullshit for two to three years now. I mean unbelievable bullshit, with everyone who doesn't even know the formula for calculating the area of a circle using the term AGI.

Now the guy who spread this bullshit admits, well, it's bullshit (and he continues to make money - pure PayPal mafia strategy):

First he says AGI is not the right term:

OpenAI CEO Sam Altman said artificial general intelligence, or “AGI,” is losing its relevance as a term as rapid advances in the space make it harder to define the concept.

AGI refers to the concept of a form of artificial intelligence that can perform any intellectual task that a human can. For years, OpenAI has been working to research and develop AGI that is safe and benefits all humanity.

“I think it’s not a super useful term,” Altman told CNBC’s “Squawk Box” last week, when asked whether the company’s latest GPT-5 model moves the world any closer to achieving AGI. The AI entrepreneur has previously said he thinks AGI could be developed in the “reasonably close-ish future.”

And he spread this bullshit just last year:

OpenAI CEO Sam Altman says concerns that artificial intelligence will one day become so powerful that it will dramatically reshape and disrupt the world are overblown.

“It will change the world much less than we all think and it will change jobs much less than we all think,” Altman said at a conversation organized by Bloomberg at the World Economic Forum in Davos, Switzerland.

Altman was specifically referencing artificial general intelligence, or AGI, a term used to refer to a form of AI that can complete tasks to the same level, or a step above, humans.

He said AGI could be developed in the “reasonably close-ish future.”

Plus now he is saying this is a bubble (even a guy like me knew this for years, and he played everyone for so long):

As economists speculate whether the stock market is in an AI bubble that could soon burst, OpenAI CEO Sam Altman has just admitted to believing we’re in one. “Are we in a phase where investors as a whole are overexcited about AI?” Altman said during a lengthy interview with The Verge and other reporters last night. “My opinion is yes.”

In the far-ranging interview, Altman compared the market’s reaction to AI to the dot-com bubble in the ’90s, when the value of internet startups soared before crashing down in 2000. “When bubbles happen, smart people get overexcited about a kernel of truth,” Altman said. “If you look at most of the bubbles in history, like the tech bubble, there was a real thing. Tech was really important. The internet was a really big deal. People got overexcited.”

He added that he thinks it’s “insane” that some AI startups with “three people and an idea” are receiving funding at such high valuations. “That’s not rational behavior,” Altman said. “Someone’s gonna get burned there, I think.”

People got overexcited? Such a snake oil salesman, this guy, just like his PayPal peers. 

Yes, AI is extremely useful. Machine learning, deep learning, and other algorithms have brought so many benefits for more than a decade. But promoting LLMs (a useful tool) as a panacea was done by a handful of folks like him in the industry. 

The point is they knew it was bullshit and yet, they spread this. 


Thursday, August 28, 2025

How to Be a Good Intelligence Analyst

Such a wonderful piece! Highly recommended. 

Please read the whole thing here

Because learning institutionally is hard?

Learning institutionally is hard. Not only is it hard to do, but it's also hard to measure and to affect. But, if nothing else, practitioners became more thoughtful about the profession of intelligence. To me, that was really important. The CIA is well represented by lots of fiction, from Archer to Jason Bourne. It's always good for the brand. Even if we look nefarious, it scares our adversaries. But it's super far removed from reality. Reality in intelligence looks about as dull as reality in general. Being a really good financial or business analyst, any of those kinds of tasks, they're all working a certain part of your brain that you can either train and improve, or ignore and just hope for the best.

[---]

What do American intelligence analysts do if not the fun stuff from the Bourne movies?

They read, they think, they write. They write some more, they edit, they get told their writing sucks. They go back, they start over again. Some manager looks at it and says, "Is this the best you can write?" And they say, “No.” And they hand it back to them, and off they go to write it again. It’s as much of a grind as any other analytic gig. You're reading, thinking, following trends, looking for key variables.

Analysts who are good on their account generally have picked up very specific tips and tricks that they may not even be able to articulate. The best performers in the agency had a very difficult time explaining how it was they went about their analysis, and articulating their expertise. That's not unusual. Experts really aren't very good at articulating why or how they're experts, but we do find that after 10,000-ish cases, they get better, because they're learning what to look for and what not to.

That comes with some penalties. The more hyper-focused you are on topic X, the less likely you are to think that topic Y is going to affect it. And often it's topic Y that comes in orthogonally and makes chaos. “How do you create expert-novice teams?” was a question that we struggled with: finding the right balance between old and new hands, because you wanted the depth of expertise along with the breadth of being a novice. Novices would try anything because nobody told them they couldn't. That's a very valuable thing to learn from. If you're an analyst or an analytic manager, the challenge is how to balance that structure.

[---]

That old model seems more James Bond-y. The character goes more places for the movie at the cost of effectiveness.

A consistent problem is that the effectiveness measures are poorly articulated and poorly understood by both the consumers and the customers. The best consumer of intelligence that I have ever interacted with was Colin Powell. He had a very simple truism: "Tell me what you know, tell me what you don't know, then tell me what you think, so that I can parse out what you're saying and make sense of it.” He was a remarkably savvy consumer of intelligence.

Not all consumers are that savvy. Many of them would benefit from spending a little time learning more about the community, understanding the relationship with their briefers and analysts. The more engaged the policymakers are in learning about intelligence, the more savvy they'll get as consumers. Until then, you're throwing something over the transom and hoping for the best. It's not a great way to operate if you have consumers who want your product.

Who were some relatively poor consumers of intelligence information?

There are so many. Dick Cheney was not a poor consumer of intelligence. He just had an agenda, and he understood the discipline well enough to exercise that agenda. [Donald] Rumsfeld was not good. And [Paul] Wolfowitz was much worse at it than he thought. There were some others in that administration, and I don't mean to pick on them. There were plenty of lousy consumers under Obama and under Clinton. Not a lot of them take enough time to really think about what they're getting.

The biggest problem that I have found with ambassadors, generals, or other consumers is they'll go out into the world, shake hands with their counterpart, and decide based on that interaction that they understand their counterpart better than anybody else does. "I went to lunch with so-and-so, I should know." The problem is that so-and-so is not going to tell you the truth. If so-and-so is going to do something, going to lunch with him probably isn't going to be very revealing. He's probably going to tell you what you want to hear. You'd be surprised how many consumers don't even think about that possibility. It boggles my mind.

It is funny you mention Donald Rumsfeld as a poor consumer of information, because one of his famous truisms was, he wanted you to explain your “known knowns” and your “unknown unknowns.” My first impression would be that he’d be a good consumer.

The problem with the Rumsfelds and the Kissingers is that maybe they are the smartest person in the room, but maybe they should stop believing that for a while. That gets in their way. They just assume from the jump that they're smarter than everybody. Not just everybody individually, but everybody collectively. There's a certain amount of ego that goes along with all of this. When the ego gets sufficiently inflated, you reject information that is contrary to your own values, mental model, and thought processes. You assign outlier status to anything that doesn't conform with the way you think about a problem. That's expertise run amok.

That's where people like Rumsfeld or Kissinger come off the rails. They just assume, "Well, I'm smarter than everybody, so I'll figure it out. You just give me raw data." I have not seen a terribly successful model of that. It's better to walk into a room and assume that you're not remotely the smartest person there. You're doing yourself a cognitive disservice if you think you're cleverer than everybody else. It's a rookie mistake, but you see it over and over, and if it works for you and you keep getting promoted, eventually you start to believe it.

It doesn't seem like a rookie mistake to me. It seems like the mistake of a seasoned professional.

You're right. It is a longevity error.


Saturday, March 22, 2025

Kevin Kelly's Words Of Wisdom On AI, Simulation et al.,

Thinking (intelligence) is only part of science; maybe even a small part. As one example, we don’t have enough proper data to come close to solving the death problem. In the case of working with living organisms, most of these experiments take calendar time. The slow metabolism of a cell cannot be sped up. They take years, or months, or at least days, to get results. If we want to know what happens to subatomic particles, we can’t just think about them. We have to build very large, very complex, very tricky physical structures to find out. Even if the smartest physicists were 1,000 times smarter than they are now, without a Collider, they will know nothing new.

[---]

There is no doubt that a super AI can accelerate the process of science. We can make computer simulations of atoms or cells and we can keep speeding them up by many factors, but two issues limit the usefulness of simulations in obtaining instant progress. First, simulations and models can only be faster than their subjects because they leave something out. That is the nature of a model or simulation. Also worth noting: The testing, vetting and proving of those models also has to take place in calendar time to match the rate of their subjects. The testing of ground truth can’t be sped up.

- More Here


Sunday, May 5, 2024

I Think, I am Amish!

I had never read this 2017 post by Cal Newport, but for as long as I can remember, I have treated the technology I buy and/or use like the Amish do. 

This filter helped me buy my first Mac decades ago, a Prius (2008), Nest & Hue (when those came out), and even the AirPods. Other than those, I pretty much don't have any other technology, since it doesn't help with my life or my values. 

I wish I had such a good filter in other areas of my life. 

The Amish and Technology

“Amish lives are anything but anti-technological,” Kelly writes. “I have found them to be ingenious hackers and tinkers, the ultimate makers and do-it-yourselvers. They are often, surprisingly, pro-technology.”

He explains that the simple notion of the Amish as Luddites vanishes as soon as you approach a standard Amish farm. “Cruising down the road you may see an Amish kid in a straw hat and suspenders zipping by on Rollerblades.”

Some Amish communities use tractors, but only with metal wheels so they cannot drive on roads like cars. Some allow a gas-powered wheat thresher but require horses to pull the “smoking contraption.” Personal phones (cellular or household) are almost always prohibited, but many communities maintain a community phone booth.

Almost no Amish communities allow automobile ownership, but it’s common for Amish to travel in cars driven by others.

Kelly reports that both solar panels and diesel electric generators are common, but it’s usually forbidden to connect to the larger municipal power grid.

Disposable diapers are popular as are chemical fertilizers.

In one memorable passage, Kelly talks about visiting a family that uses a $400,000 computer-controlled precision milling machine to produce pneumatic parts needed by the community. The machine is run by the family’s bonnet-wearing, 10-year old daughter. It’s housed behind their horse stable.

These observations dismiss the common belief that the Amish reject any technology invented after the 19th century. So what’s really going on here?

The Amish, it turns out, do something that’s both shockingly radical and simple in our age of impulsive and complicated consumerism: they start with the things they value most, then work backwards to ask whether a given technology performs more harm than good with respect to these values.

 

Sunday, December 10, 2023

On Charisma

Most of us will have experienced the allure of a charismatic individual in our lives. Few have experienced the feeling of being charismatic, where your desires, beliefs and actions are having a disproportionately powerful influence on those around you. But when people try to break down how it feels to experience it, they veer into cryptic comparisons. “When she [Elizabeth Holmes] speaks to you, she makes you feel like you are the most important person in her world in that moment,” Tyler Shultz, a whistleblower who worked at Theranos, told CBS News. “She almost has this reality distortion field around her that people can just get sucked into.” 

About a meeting with Leo Tolstoy, Maxim Gorky wrote: “I can not express in words what I felt rather than thought at that moment; in my soul there was joy and fear, and then everything blended in one happy thought: ‘I am not an orphan on the earth, so long as this man lives on it.’” 

Reflecting on her rare experiences of charisma across 25 years of interviewing notable figures, the newspaper columnist Maggie Alderson wrote: “I still don’t understand what creates the effect. … If not fame, beauty, power, wealth and glory then what? It must be innate. I find that quite thrilling.”

“Something magical and dangerous, something unfathomable, is afoot when charisma is present.”

[---]

As Mazzarella reminded me, people also use charisma to talk about the most admired and inspiring figures in their lives and the charismatic teachers they’ve had. “There the implication is that this person helped me to become myself or transcend myself in a way that I wouldn’t otherwise have been able to do,” he said. “That’s what’s interesting about charisma: It touches the darkest fundamentals of human impulses while having the capacity to point to our highest potentials. Charisma has these two faces, and it’s the fact that we seem to not be able to have one without the other that is so uncanny and disturbing. Inspiring charismatic figures can become exploitative, manipulative or violent. Violence gives way to liberation, or liberation gives way to violence. The problem is not just that we have a hard time telling the good charisma from the bad charisma, but that one has a way of flipping into the other.”

[---]

For over a decade, Antonakis has been experimenting with ways to break charisma down into its composite parts, therefore making it measurable and teachable. He believes it can be the great leveler in a world obsessed with physical appearance. His resulting definition is that charisma is “values-based, symbolic and emotion-laden leader signaling.” 

Along with a team of researchers, he boiled it all down to 12 “charismatic leadership tactics,” or CLTs for short. The CLTs include nine verbal techniques — like the use of metaphors, anecdotes, contrasts and rhetorical questions — as well as three nonverbal ones like facial expressions and gestures. Anyone trained in these CLTs, he said, can become more “influential, trustworthy and leaderlike in the eyes of others.” He and his team developed an artificial intelligence algorithm, which they trained on almost 100 TED talks, that can identify the charismatic quality of speeches. The algorithm is called “Deep Charisma” but Antonakis calls it his “charismometer.” 

The Secret History And Strange Future Of Charisma
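Antonakis's claim that charisma can be made measurable amounts to turning those tactics into countable features of a transcript. Just to make that concrete, here is a tiny sketch of my own; it is emphatically not his "Deep Charisma" model, and the marker lists are rough proxies I invented for a few of the verbal CLTs (rhetorical questions, contrasts, anecdotes).

# A toy sketch of my own, not Antonakis's "Deep Charisma": it only shows the
# idea of turning a few of the verbal CLTs into countable features of a
# transcript. The marker lists are crude proxies I made up for illustration.

import re

CONTRAST_MARKERS = ["but", "however", "not because", "rather than"]
ANECDOTE_MARKERS = ["years ago", "when i was", "imagine", "one day"]

def clt_features(transcript: str) -> dict:
    """Count crude proxies for a few charismatic leadership tactics."""
    text = transcript.lower()
    return {
        "rhetorical_questions": transcript.count("?"),
        "contrasts": sum(text.count(m) for m in CONTRAST_MARKERS),
        "anecdote_cues": sum(text.count(m) for m in ANECDOTE_MARKERS),
        "word_count": len(re.findall(r"[a-z']+", text)),
    }

if __name__ == "__main__":
    sample = ("Years ago, I stood where you sit now. Why did I fail? "
              "Not because I lacked talent, but because I had no story to tell.")
    print(clt_features(sample))

A real system would obviously need far richer features (and the three nonverbal CLTs), which is presumably why his team trained a model on TED talks instead of counting keywords.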


Sunday, November 12, 2023

A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through?

An Earth with climate change and nuclear war and, like, zombies and werewolves is still a way better place than Mars.

I have been saying this for two-plus decades, and finally there is a book exposing this fantasy for the pure bullshit it is. 

Max and I came from the earth and will go back to the earth. Not because we "lived" on earth, but because we are part of the earth -- "we are earth".   

Review of the new book, A City on Mars: Can We Settle Space, Should We Settle Space, and Have We Really Thought This Through? by Kelly and Zach Weinersmith.

Living on Mars, which has no birds or rain, gets less than half the sunlight per area that Earth does, and is often plagued by dust storms that further blot out the sun, could be a soul-deadening experience.

The book spends several chapters covering space law and governance, which, in the Weinersmiths’ hands, is more interesting than it sounds. They explore the philosophical question of “who owns the universe?” and shoot down a common argument “that all law is pointless because if Elon Musk has a Mars settlement, who’s going to stop him?” (“One of your authors has a brother who makes this argument. His name is Marty and he is wrong.”)

In fact, there are already frameworks that could guide space law, and the book covers them, and their alternatives, in detail. They use Earth-bound examples, like the breakup of the former Socialist Federal Republic of Yugoslavia and the governance of Antarctica to explore how various governance scenarios might play out on other planets.

[---]

They also run through a list of “Bad Arguments for Space Settlement,” which include “Space Will Save Humanity from Near-Term Calamity by Providing a New Home,” and “Space Exploration Is a Natural Human Urge.” These detailed examinations of the stark realities regarding space travel and habitation serve as a foil to the breathlessly optimistic accounts that are so ubiquitous in popular media.

Despite often sounding like a couple of Debbie Downers, they somehow succeed at keeping the narrative upbeat and interesting. They do this with humor, frankness, and Zach’s fun sketches. Even as they shoot down a long list of space fantasies, they explore a lot of really interesting research and anecdotes (“Did you know the Colombian constitution asserts a claim to a specific region of space?”), so there’s rarely a dull moment.

The Weinersmiths view themselves not as “barriers on the road to progress” but as “guardrails” who want us to go to Mars as much as anybody. The trouble is that these self-professed science geeks (who watch late-night rocket launches with their kids) “just cannot convince ourselves that the usual arguments for space settlements are good.”

But they also assert, rather earnestly, that “If you hate our conclusions here, we have excellent news: we are not powerful people.”

And listen to the excellent interview with Zach on Russ's EconTalk.

I saw people "troubled" by wearing masks and staying home during Covid, and these same people are "excited" to depart for Mars - prepared to live (and poop) in a 100-pound suit in an underground bunker! 

Please depart and leave us alone. Adios!



Wednesday, August 16, 2023

Oppenheimer

I love Christopher Nolan movies; I even went to see Tenet during the pandemic, and was disappointed. 

Inception is still my all time favorite Nolan movie. 

The Oppenheimer biopic should have limited the Nolan-style jumping back and forth in time every 30 seconds. That will lose most people's interest, since most don't know Oppenheimer's story. 

I did love the movie. 

Hats off to Oppenheimer for managing some of the most brilliant minds of our time in a small space and a short time, and delivering for humanity. No small feat, and as far as I know, no one before or since has pulled off such a feat. 

Oppenheimer's life can teach us a few lessons; nothing new, just age-old wisdom.

1. Watch out for enemies amongst your friends. Pick your friends wisely and don't get disappointed if they disappoint.

2. Choose your spouse and who you have sex with wisely. Nothing in life is more important than this. 

3. Eschew ideology. Never even flirt with one, or you will face the consequences sooner or later. 

4. Understand that people don't have gratitude. Learn to accept that as the daily status quo. 

5. People have short-term memory outside of their "personal" lives (which is one of the reasons for #4).

6. Always do the right thing (the atom bomb was the right thing) and pass it on to the next generation. There are some unique cases where one cannot be responsible for second- or third-order effects.

Maybe one day his work will destroy this planet. Maybe not. Maybe his work saved the planet and might save it again. 

I don't know.

Sunday, April 2, 2023

Bard - Google's Version Of Chat GPT & My Thoughts On LLMs, AI et al.

I guess because I work in the field, Google invited me last week to experiment with Bard.

My thoughts: 

  • Bard is much faster than ChatGPT (GPT-3). That's obvious, since Google has faster servers and more dollars than OpenAI. 
  • Bard is probably trained on much richer data than OpenAI's models, since Google has decades of "our" data. And yet, Bard is more cautious in its answers than ChatGPT. 
  • Training data is everything in the current state of AI. Training data is also almost everything for both humans and non-human animals. The sooner we understand this, the better our chances of solving the problems we face. 
  • I am naturally drawn toward people and models who use the phrase "I don't know". In those terms, ChatGPT and Bard are more humble than most humans, even though these two models hold more knowledge than any single human ever has.
  • They don't have wisdom; wisdom is knowing when to follow and when to break the rules (and much more). To be clear, one should never use the word "wisdom" with AI (let alone LLMs).
  • Both models are highly creative. Creativity has been observed in non-human animals as well, and now these models are exposing our empty pride. With these models around, I feel humbled and better understand that I am organic matter.
  • I don't ask questions to make myself look smart, since I have a sense of what they are good and bad at. 
  • I asked both models about my favorite human, Montaigne. 

Bard's answer: 

Montaigne had a very close relationship with his cat. He wrote about her in his essays, and she was often a source of comfort and companionship for him. Montaigne believed that animals were capable of feeling emotions, and he saw his cat as a friend and a fellow creature. He wrote about how she would follow him around the house, and how she would sleep on his bed at night. He also wrote about how she would play with him, and how she would sometimes get into trouble. Montaigne's relationship with his cat is an example of the deep bond that can exist between humans and animals.

Here is an excerpt from one of Montaigne's essays about his cat:

"I have a cat, who does not much care to be seen, and whom I seldom see myself. She comes and goes as she pleases, and I never trouble her. She lies at my feet when I am at work, and purrs so loudly that she sometimes interrupts my thoughts. But I am not angry with her for that, for I know that it is her way of showing her affection. She is a very clean cat, and takes great care of her fur. She washes herself several times a day, and always seems to be well pleased with herself. She is also a very playful cat, and loves to chase after her tail. She is a very good hunter, and often brings me mice and birds. I am very fond of my cat, and she is very fond of me. We are good friends, and I hope that we will always be together."

ChatGPT's answer: 

What was Montaigne relationship with his cat like?

Michel de Montaigne, the 16th-century French philosopher and essayist, had a very close relationship with his pet cat. In his famous essay "An Apology for Raymond Sebond," Montaigne wrote extensively about his observations of his feline companion and used his observations as a springboard for philosophical inquiry.

Montaigne was known to be a lover of animals, and his cat was a constant presence in his life. He observed the cat closely and was fascinated by its behavior, agility, and independent nature. He saw the cat as a fellow creature with its own unique perspective on the world, and he valued its company and companionship.

Montaigne's relationship with his cat was one of mutual respect and admiration. He marveled at the cat's ability to live in the present moment and to be content with its surroundings. He also recognized the cat's independence and self-sufficiency, which he saw as admirable qualities to emulate.

Overall, Montaigne's relationship with his cat was one of affection, admiration, and intellectual curiosity. His observations of his feline companion led him to reflect deeply on the nature of animal consciousness and the limits of human understanding.

Both answers are good and unique in their own way, and both answers are true. 
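Going back to my point about "I don't know" above: if I wanted to compare two answers mechanically, even something as crude as counting hedging phrases would work as a first pass. This is a toy sketch of my own, not how either model is actually evaluated, and both the phrase list and the truncated answers are placeholders I made up.

# A toy sketch of my own, not how Bard or ChatGPT are actually evaluated:
# score two answers to the same prompt by counting hedging phrases. The
# phrase list and the truncated answers are placeholders for illustration.

HEDGE_PHRASES = ["i don't know", "i'm not sure", "probably", "it is possible"]

def hedging_score(answer: str) -> int:
    """Count occurrences of hedging phrases in a model's answer."""
    text = answer.lower()
    return sum(text.count(phrase) for phrase in HEDGE_PHRASES)

if __name__ == "__main__":
    answers = {
        "Bard": "Montaigne had a very close relationship with his cat. ...",
        "ChatGPT": "Michel de Montaigne, the 16th-century French essayist, ...",
    }
    for model, answer in answers.items():
        print(f"{model}: hedging score = {hedging_score(answer)}")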

For a long time, AI was monopolized by a handful of big companies. We should give OpenAI credit for breaking that trend. 

We are going to benefit immensely from two companies (and more will be joining) competing to train better models. Capitalism has its flaws, but this competition is a gift from capitalism to us. 

Having said that, there will be a huge number of positive externalities coming out of these models, and conversely, there will be some negative ones. We cannot predict either. 

The only risk from the current version of AI will come from bad people using it for malicious goals. For centuries, people have used even mundane tools maliciously to back their religious, political, and other ideologies. 

So understand - AI cannot be compared to giving a nuclear weapon to a sociopath. 

We have so many lessons from industrialization, wars, economics, etc., which we can use to define some common-sense rules and boundaries for AI. I will write more about this soon.

Both models write better English than I do. And yet, this blog is a place for Max and I. I will never use these models to write a blog post. It will be in my own words until my last breath. 

Monday, November 14, 2022

Billionaires Who Never Grew Out Of Marvel Comics (& No Understanding Of Complex Systems)

Mark McCaughrean, senior adviser for science and exploration at the European Space Agency, admits that sometimes he refuses to watch feats of virtuoso spacefaring from the new space barons, lest he get sucked in by their superficial glamour. It is not just sour grapes about the cool things they get to do with their wealth. It’s about the scientific, social, and philosophical implications of what they are doing and how they are doing it.

[---]

Anyone who thinks that Musk’s priorities align neatly with the needs of space science should ask astronomers what they think of his 1,500 or so active Starlink satellites that are now obstructing the view of telescopes with bright streaks and raising concerns about radio-signal interference. Starlink has filed plans to launch up to 42,000 satellites in total—about five times the total number currently orbiting Earth—and competing services like Amazon’s Project Kuiper plan to add thousands more. There are already 1,600 close encounters in space (within 1 kilometer) a week from Musk’s satellites, risking collisions that could strew debris in low Earth orbit.

“There was a time when I was enthusiastic about commercial space because I saw it as a possible way we could conduct more science,” Porco says. She now concludes that this is not the way it will work. “When you put science, and the way science needs to be conducted, up against commercial interests, the two make very bad bedfellows.”

“People get so wrapped up in wish-fulfilment fantasies about living on Mars that they lose context completely, as if you can just fly away and leave all our troubles behind. It doesn’t solve any problems by going to Mars,” McCaughrean adds. For the goal of survival, we would be much smarter using our knowledge and resources to keep Earth habitable in the face of the inadvertent geoengineering we are already conducting here.

[---]

We might plausibly extend that approach to an international, crewed research base on the moon. But we don’t need space tourism and private industry to get it. This doesn’t mean that big commercial ventures should be banned. But we should be more clear-eyed about their motives and priorities and consider how much we want their already ubiquitous presence in our lives to expand into the heavens too, with barely any regulation to constrain them.

Even if you feel in your marrow that our human destiny lies in the stars, you might want to look closely at what the space billionaires have done down here. Then ask yourself whether they are the best people to take us up there.

- More Here

One of my guilty pleasures would be to live long enough to read the obituaries of these self-proclaimed omnipotent sapiens. 

And to state the obvious, I will never leave this beautiful planet where Max and I shared precious time together. My last breath will be in the same place where Max took his last breath. 


Thursday, June 16, 2022

The Book of Minds - How to Understand Ourselves and Other Beings

Conceiving of a universe of possible minds can discourage human hubris, and advises erring on the side of generosity in considering the rights and dignity of other beings. But it also enables a literally broad-minded view of what other minds could exist. Mindedness needn’t be a club with rigorously exclusive entry rules. We might not (and may never) agree about whether plants, fungi or bacteria have any kind of sentience, but they show enough attributes of cognition to warrant a place somewhere in this space. This perspective also promotes a calmer appraisal of artificial intelligence than the popular fevered fantasies about impending apocalypse at the hands of malevolent, soulless machines. There is no reason to suppose that today’s AI has any more sentience or experience than the rocks from which its silicon is extracted. But it, too, shows intelligence of a kind, including the ability to learn and predict.

To suppose that something like artificial consciousness will emerge simply by making computer circuits bigger and faster is, as one AI expert put it to me, like imagining that if we make an aeroplane fly fast enough, eventually it will lay an egg. Computers and AI are taking off in the “intelligence” direction of mind-space while gaining nothing on the “experience” axis: their trajectory is heading not towards us but somewhere else entirely. If we want AI to be more human-like, many experts believe we will need explicitly to build human qualities into it – which in turn requires that we better understand what those are and how they arise.

Likewise, most of our fantasies about advanced alien intelligence suppose it to be like us but with better tech. That’s not just a sci-fi trope; the scientific search for extraterrestrial intelligence typically assumes that ET carves nature at the same joints as we do, recognising the same abstract laws of maths and physics. But the more we know about minds, the more we recognise that they conceptualise the world according to the possibilities they possess for sensing and intervening in it; nothing is inevitable. We need to be more imaginative about what minds can be, and less fixated on ours as the default. As the biologist JBS Haldane once said: “The universe is not only queerer than we suppose, but queerer than we can suppose.” Our only hope of understanding the universe, he said, “is to look at it from as many different points of view as possible.” We may need those other minds.

- Excerpts from The Book of Minds: How to Understand Ourselves and Other Beings, from Animals to Aliens by Philip Ball