Showing posts with label AI.

Thursday, April 30, 2026

The Social Edge of Intelligence

If AI capability depends on the social complexity of human language production—and if AI deployment systematically reduces that complexity through cognitive offloading, homogenization of creative output, and the elimination of interaction-dense work—then the technology is gradually undermining the conditions for its own advancement. Its successes, not its failures, create the spiral: a slow attenuation of the very substrate it feeds on.

This is the Social Edge Paradox, and the intellectual tradition it draws from is older and more interdisciplinary than most AI commentary acknowledges.

Michael Tomasello’s evolutionary research establishes that human cognition diverged from that of other primates not through superior individual processing power but through the capacity for collaborative activity with shared goals and complementary roles. He argues that even private thought is “fundamentally dialogic and social” in structure—an internalization of interaction patterns. Autonomous neural capacity alone cannot account for the abilities of human thought.

Robin Dunbar’s social brain hypothesis quantifies the link: neocortex ratios predict social group size across primates; language evolved as a mechanism for managing relationships at scales too large for grooming. Two-thirds of conversation is social, relational, reputational. Language is often mistaken for an information pipe, but it is really a social coordination technology.

My own position is that collective intent engineering, found in forms as familiar as simple brainstorming, accounts for most frontier cognitive expansion. The intelligent algorithms of today have not been built with this critical function in mind.

[---]

The AI industry is telling a story about the future of work that goes roughly like this: automate what can be automated, augment what remains, and trust that the productivity gains will compound into a wealthier, more efficient world.

The Social Edge Framework tells a different story. It says: the intelligence we are automating was never ours alone. It was forged in conversation, argument, institutional friction, and collaborative struggle. It lives in the spaces between people, and it shows up in AI capabilities only because those spaces were rich enough to leave linguistic traces worth learning from.

Every time a company automates an entry-level role, it saves a salary and loses a learning curve, unless it compensates. Every time a knowledge worker delegates a draft to an AI without engaging critically, the statistical thinning of the organizational record advances by an imperceptible increment. Every time an organization mistakes polished output for strategic progress, it consumes cognitive surplus without generating new knowledge.

None of these individual acts is catastrophic. However, their compound effect may be.

The organizations that will thrive in the next decade are not those with the highest AI utilization rates. They are those that understand something the epoch-chaining thought experiment makes vivid: that AI’s capabilities are an inheritance from the complexity of human social life. And inheritances, if consumed without reinvestment, eventually run out. This is particularly critical as AI becomes heavily customized for our organizational culture.

[---]

The Social Edge is more than a metaphor. It is the literal boundary between what AI can do well and what it will keep struggling with due to fundamental internal contradictions. Furthermore, the framework asks us all to pay attention to how the very investment thesis behind AI also contains the seeds of its own failure. And it reminds leaders that AI’s frontier today is set by the richness of the social world that produced the data it learned from.

- More Here



Tuesday, April 28, 2026

Golden Retriever Lifetime Study - Update From Morris Animal Foundation

Got this poignant email from Morris Animal Foundation today: 

As we approach the 15th year of the Golden Retriever Lifetime Study, we are entering a new, exciting stage every pet owner will appreciate. To date, 386 of our dogs have lived to age 13 or older, including three who have reached the remarkable milestone of 15 years. As a lifelong golden retriever owner, it warms my heart to see these dogs thrive. As a veterinarian and epidemiologist, I am eager to leverage this unique dataset to understand what sets these “super-seniors” apart. After all, that is our ultimate goal: we don’t just want dogs to avoid cancer, we want dogs that remain healthy and vibrant well into their golden years.

To capture the shifting challenges these dogs may face as they age, the Study utilizes supplemental surveys that participants can opt into every six months. These provide vital data on mobility and cognition. This initiative began when most dogs in the Study were approximately 8 years old and is rapidly becoming a robust dataset that will aid researchers for decades. Current research suggests dogs fall into two categories: "cognitive maintainers" and "cognitive decliners." Our data is uniquely positioned to help us identify the specific factors that contribute to prolonged cognitive health.

Because the Golden Retriever Lifetime Study is longitudinal, scientific interest has accelerated alongside the Study’s progress. While we have sadly said goodbye to 1,780 heroes, the information they contributed from puppyhood onward is of historic importance. As I write this, more than 100 studies have leveraged our data to investigate a wide variety of health topics. We recently closed our annual call for canine research proposals, and of the 142 pre-proposals submitted, 21 plan to incorporate Study data.

While the Study’s evolution into aging is exciting, our primary objective — to make progress against canine cancer — remains unchanged. The Foundation recently invested in two cancer studies that showed promising initial results. Both successfully identified genetic regions related to hemangiosarcoma and histiocytic sarcoma, respectively. Researchers are now building on these findings using Study data, which could lead to life-saving genetic tests. These are just two examples of the many promising studies currently underway that have the potential to change the future of canine health.

From all of us at Morris Animal Foundation, thank you for making this work possible and supporting the research that will help dogs run, play and be with us to create more memories well into their golden years.

Please keep up the good work; your team will always have best wishes from Max and me.

I said this when Max had cancer and I am saying it now - a lot of insights will come from this study and the Dog Aging Project that will help Sapiens, although my moronic species refuses to give data.

Researchers need a lot of data from healthy people to understand what not having cancer looks like - fundamental machine learning common sense.
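To make that concrete, here is a minimal sketch (my own toy code, on synthetic made-up data, not anything from the study) of that common sense: a "cancer vs. healthy" classifier cannot even be fit without examples of the healthy class.

```python
# A minimal sketch: a binary classifier needs examples of BOTH classes.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_cancer = rng.normal(1.0, 1.0, size=(100, 5))    # hypothetical patient features
X_healthy = rng.normal(-1.0, 1.0, size=(100, 5))  # hypothetical control features

clf = LogisticRegression()
try:
    clf.fit(X_cancer, np.ones(100))  # training data contains only the "cancer" class
except ValueError as err:
    print(err)  # scikit-learn refuses: it needs at least two classes

# With healthy controls included, the model can learn what "not cancer" looks like.
X = np.vstack([X_cancer, X_healthy])
y = np.concatenate([np.ones(100), np.zeros(100)])
clf.fit(X, y)
print(clf.score(X, y))  # near-perfect on this well-separated toy data
```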


Sunday, March 29, 2026

Grounded In Reality Piece On AI Mania

I don’t say that because I think that AI models are bad or because I think they won’t get better; I think that AI models are very good and will get much better. No. The fault is not with the models, but with us. The world is run by humans, and because it’s run by humans—entities that are smelly, oily, irritable, stubborn, competitive, easily frightened, and above all else inefficient—it is a world of bottlenecks. And as long as we have human bottlenecks, we’ll need humans to deal with them: we will have, in other words, complementarity.

People frequently underrate how inefficient things are in practically any domain, and how frequently these inefficiencies are reducible to bottlenecks caused simply by humans being human. Laws and regulations are obvious bottlenecks. But so are company cultures, and tacit local knowledge, and personal rivalries, and professional norms, and office politics, and national politics, and ossified hierarchies, and bureaucratic rigidities, and the human preference to be with other humans, and the human preference to be with particular humans over others, and the human love of narrative and branding, and the fickle nature of human preferences and tastes, and the severely limited nature of human comprehension. And the biggest bottleneck is simply the human resistance to change: the fact that people don’t like shifting what they’re doing. All of these are immensely powerful. Production processes are governed by their least efficient inputs: the more efficient the most efficient inputs, the more important the least efficient inputs.

In the long run, we should expect the power of technology to overcome these bottlenecks, in the same way that a river erodes a stone over many years and decades—just as how in the early decades of the twentieth century, the sheer power of what electricity could accomplish gradually overcame the bottlenecks of antiquated factory infrastructure, outdated workflows, and the conservatism of hidebound plant managers. This process, however, takes time: it took decades for electricity, among the most powerful of all general-purpose technologies, to start impacting productivity growth. AI will probably be much faster than that, not least because it can be agentic in a way that electricity cannot. But these bottlenecks are real and important and are obvious if you look at any part of the real world. And as long as those bottlenecks exist, no matter the level of AI capabilities, we should expect a real and powerful complementarity between human labor and AI, simply because the “human plus AI” combination will be more productive than AI alone.
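A toy calculation (my illustration, with made-up numbers, not the author's) shows how the least efficient input comes to govern the whole process: in a serial pipeline, once the machine step is fast, further machine speedups barely move the total.

```python
# A toy illustration of the bottleneck claim: in a serial process,
# total time is dominated by the slowest (human) step.
def total_hours(machine_step, human_step):
    return machine_step + human_step

baseline = total_hours(machine_step=10, human_step=10)   # 20 hours
ai_10x = total_hours(machine_step=1, human_step=10)      # 11 hours
ai_100x = total_hours(machine_step=0.1, human_step=10)   # 10.1 hours

print(baseline / ai_10x)   # ~1.8x overall speedup from a 10x faster machine step
print(baseline / ai_100x)  # ~2.0x overall, even from a 100x faster machine step
# Past a point, only shrinking the human bottleneck moves the total.
```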

- More Here


Saturday, February 14, 2026

"Surprisingly Popular" Algorithm - Better Wisdom From Crowds

The new method is simple. For a given question, people are asked two things: what they think the right answer is, and what they think popular opinion will be. The gap between the two aggregate responses indicates the correct answer: the right answer is the one that turns out to be more popular than the crowd predicted.

“In situations where there is enough information in the crowd to determine the correct answer to a question, that answer will be the one [that] most outperforms expectations,” says paper co-author Drazen Prelec, a professor at the MIT Sloan School of Management as well as the Department of Economics and the Department of Brain and Cognitive Sciences.
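Here is a minimal sketch of the rule (my toy code and numbers, not the authors' implementation), using the paper's classic example question of whether Philadelphia is the capital of Pennsylvania (it isn't; Harrisburg is):

```python
# A minimal sketch of the "surprisingly popular" rule, with made-up numbers.
# Each respondent gives an answer plus a prediction of how the crowd will vote.
from collections import Counter

def surprisingly_popular(answers, predictions):
    """answers: one chosen option per respondent.
    predictions: per respondent, a dict mapping option -> predicted vote share."""
    n = len(answers)
    actual = {opt: c / n for opt, c in Counter(answers).items()}
    predicted = {opt: sum(p.get(opt, 0.0) for p in predictions) / n
                 for opt in actual}
    # The answer that most exceeds its predicted popularity wins.
    return max(actual, key=lambda opt: actual[opt] - predicted[opt])

# Most people answer "yes" and nearly everyone predicts "yes" will be
# popular; the informed minority answers "no" but also predicts "yes".
answers = ["yes"] * 65 + ["no"] * 35
predictions = [{"yes": 0.75, "no": 0.25}] * 65 + [{"yes": 0.70, "no": 0.30}] * 35
print(surprisingly_popular(answers, predictions))  # -> "no" (the correct answer)
```

"No" wins because it beats its predicted share (35% actual vs. roughly 27% predicted), even though a simple majority vote would get the question wrong.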

[---]

Across all these areas, the researchers found that the “surprisingly popular” algorithm reduced errors by 21.3 percent compared to simple majority votes, and by 24.2 percent compared to basic confidence-weighted votes (where people express how confident they are in their answers). And it reduced errors by 22.2 percent compared to another kind of confidence-weighted votes, those taking the answers with the highest average confidence levels.

The paper, “A solution to the single-question crowd wisdom problem,” is being published today in Nature. The authors are Prelec; John McCoy, a doctoral student in the MIT Department of Brain and Cognitive Sciences; and H. Sebastian Seung, a professor of neuroscience and computer science at Princeton University and a former MIT faculty member. Prelec and McCoy are also researchers in the MIT Neuroeconomics Laboratory, where Prelec is the principal investigator.

[---]

In this sense, the “surprisingly popular” principle is not simply derived from the wisdom of crowds. Instead, it uses the knowledge of a well-informed subgroup of people within the larger crowd as a diagnostically powerful tool that points to the right answer.

“A lot of crowd wisdom weights people equally,” McCoy explains. “But some people have more specialized knowledge.” And those people — if they have both correct information and a correct sense of public perception — make a big difference.

- More Here



Friday, February 13, 2026

No-Technological-Solution Problem

Bingo! What an insight!

We sapiens fucked things up, are still fucking things up, and promise to continue fucking things up in the future.

Changing their minds and behavior is not in the equation, but my species is planning to innovate the fuck out of technologies to clean up the mess they created while they continue to fuck things up.

Hmm, god bless my species. 

Wonderful, wonderful interview with Dan Brooks about his new book A Darwinian Survival Guide: Hope for the Twenty-First Century:

Well, the primary thing that we have to understand or internalize is that what we’re dealing with is what is called a no-technological-solution problem. In other words, technology is not going to save us, real or imaginary. We have to change our behavior. If we change our behavior, we have sufficient technology to save ourselves. If we don’t change our behavior, we are unlikely to come up with a magical technological fix to compensate for our bad behavior. 

This is why Sal and I have adopted a position that we should not be talking about sustainability, but about survival, in terms of humanity’s future. Sustainability has come to mean, what kind of technological fixes can we come up with that will allow us to continue to do business as usual without paying a penalty for it? As evolutionary biologists, we understand that all actions carry biological consequences. We know that relying on indefinite growth or uncontrolled growth is unsustainable in the long term, but that’s the behavior we’re seeing now.

Stepping back a bit. Darwin told us in 1859 that what we had been doing for the last 10,000 or so years was not going to work. But people didn’t want to hear that message. So along came a sociologist who said, “It’s OK; I can fix Darwinism.” This guy’s name was Herbert Spencer, and he said, “I can fix Darwinism. We’ll just call it natural selection, but instead of survival of what’s-good-enough-to-survive-in-the-future, we’re going to call it survival of the fittest, and it’s whatever is best now.” Herbert Spencer was instrumental in convincing most biologists to change their perspective from “evolution is long-term survival” to “evolution is short-term adaptation.” And that was consistent with the notion of maximizing short term profits economically, maximizing your chances of being reelected, maximizing the collection plate every Sunday in the churches, and people were quite happy with this.

Well, fast-forward and how’s that working out? Not very well. And it turns out that Spencer’s ideas were not, in fact, consistent with Darwin’s ideas. They represented a major change in perspective. What Sal and I suggest is that if we go back to Darwin’s original message, we not only find an explanation for why we’re in this problem, but, interestingly enough, it also gives us some insights into the kinds of behavioral changes we might want to undertake if we want to survive.

To clarify, when we talk about survival in the book, we talk about two different things. One is the survival of our species, Homo sapiens. We actually don’t think that’s in jeopardy. Now, Homo sapiens of some form or another is going to survive no matter what we do, short of blowing up the planet with nuclear weapons. What’s really important is trying to decide what we would need to do if we wanted what we call “technological humanity,” or better said “technologically-dependent humanity,” to survive.

Put it this way: If you take a couple of typical undergraduates from the University of Toronto and you drop them in the middle of Beijing with their cell phones, they’re going to be fine. You take them up to Algonquin Park, a few hours’ drive north of Toronto, and you drop them in the park, and they’re dead within 48 hours. So we have to understand that we’ve produced a lot of human beings on this planet who can’t survive outside of this technologically dependent existence. 

[---]

That’s actually a really good analogy to use, because of course, as you probably know, the temperatures around the Norwegian Seed Bank are so high now that the Seed Bank itself is in some jeopardy of survival. The place where it is was chosen because it was thought that it was going to be cold forever, and everything would be fine, and you could store all these seeds now. And now all the area around it is melting, and this whole thing is in jeopardy. This is a really good example of letting engineers and physicists be in charge of the construction process, rather than biologists. Biologists understand that conditions never stay the same; engineers engineer things as if the way things are now is the way things are always going to be. Physicists are always looking for some sort of general law that holds in perpetuity, and biologists are never under any illusions about this. Biologists understand that things are always going to change.

[---]

One of the things that’s really important for us to focus on is to understand why it is that human beings are so susceptible to adopting behaviors that seem like a good idea, and are not. Sal and I say, here are some things that seem to be common to human misbehavior, with respect to their survival. One is that human beings really like drama. Human beings really like magic. And human beings don’t like to hear bad news, especially if it means that they’re personally responsible for the bad news. And that’s a very gross, very superficial thing, but beneath that is a whole bunch of really sophisticated stuff about how human brains work, and the relationship between human beings’ ability to conceptualize the future, but living and experiencing the present.

There seems to be a mismatch within our brain — this is an ongoing sort of sloppy evolutionary phenomenon. So that’s why we spend so much time in the first half of the book talking about human evolution, and that’s why we adopt a nonjudgmental approach to understanding how human beings have gotten themselves into this situation.


 

Sunday, January 4, 2026

US, China, AI & More - Dan Wang 2025 Letter

Dan Wang's 2025 letter is, as usual, full of insights, plus it's funny when he pokes at the PayPal mafia morons.

Narrowness of mind is something that makes me uneasy about the tech world. Effective altruists, for example, began with sound ideas like concern for animal welfare as well as cost-benefit analyses for charitable giving. But these solid premises have launched some of its members towards intellectual worlds very distant from moral intuitions that most people hold; they’ve also sent a few into jail. The well-rounded type might struggle to stand out relative to people who are exceptionally talented in a technical domain. Hedge fund managers have views about the price of oil, interest rates, a reliably obscure historical episode, and a thousand other things. Tech titans more obsessively pursue a few ideas — as Elon Musk has on electric vehicles and space launches — rather than developing a robust model of the world.

So the 20-year-olds who accompanied Mr. Musk into the Department of Government Efficiency did not, I would say, distinguish themselves with their judiciousness. The Bay Area has all sorts of autistic tendencies. Though Silicon Valley values the ability to move fast, the rest of society has paid more attention to instances in which tech wants to break things. It is not surprising that hardcore contingents on both the left and the right have developed hostility to most everything that emerges from Silicon Valley. 

[---]

One of the things I like about the finance industry is that it might be better at encouraging diverse opinions. Portfolio managers want to be right on average, but everyone is wrong three times a day before breakfast. So they relentlessly seek new information sources; consensus is rare, since there are always contrarians betting against the rest of the market. Tech cares less for dissent. Its movements are more herdlike, in which companies and startups chase one big technology at a time. Startups don’t need dissent; they want workers who can grind until the network effects kick in. VCs don’t like dissent, showing again and again that many have thin skins. That contributes to a culture I think of as Silicon Valley’s soft Leninism. When political winds shift, most people fall in line, most prominently this year as many tech voices embraced the right. 

The two most insular cities I’ve lived in are San Francisco and Beijing. They are places where people are willing to risk apocalypse every day in order to reach utopia. Though Beijing is open only to a narrow slice of newcomers — the young, smart, and Han — its elites must think about the rest of the country and the rest of the world. San Francisco is more open, but when people move there, they stop thinking about the world at large. Tech folks may be the worst-traveled segment of American elites. People stop themselves from leaving in part because they can correctly claim to live in one of the most naturally beautiful corners of the world, in part because they feel they should not tear themselves away from inventing the future. More than any other topic, I’m bewildered by the way that Silicon Valley talks about AI.

[---]

It’s easy for conversations in San Francisco to collapse into AI. At a party, someone told me that we no longer have to worry about the future of manufacturing. Why not? “Because AI will solve it for us.” At another, I heard someone say the same thing about climate change. One of the questions I receive most frequently anywhere is when Beijing intends to seize Taiwan. But only in San Francisco do people insist that Beijing wants Taiwan for its production of AI chips. In vain do I protest that there are historical and geopolitical reasons motivating the desire, that chip fabs cannot be violently seized, and anyway that Beijing has coveted Taiwan for approximately seven decades before people were talking about AI.

[---]

By being the site of production, they have a keen sense of how to make technical improvements all the time. American scientists may be world leaders in dreaming up new ideas. But American manufacturers have been poor at building industries around these ideas. The history books point out that Bell Labs invented the first solar cell in 1954; today, the lab no longer exists while the solar industry moved to Germany and then to China. While Chinese universities have grown more capable at producing new ideas, it’s not clear that the American manufacturing base has grown stronger at commercializing new inventions.

[---]

So here’s a potential way that China succeeds. Beijing’s goal is to make nearly every important product in the world, while everyone else supplies its commodities and services. By making the country mostly self-sufficient, and by vigorously policing the outputs of LLMs and social media, Xi might hope to make China resilient. He is building Fortress China stone by stone in order to outlast the adversary. Beijing doesn’t have to replicate American diplomatic, cultural, and financial superpowerdom. It might hope that its prowess in advanced manufacturing might deter the US. And its success in manufacturing might directly destabilize the US: by delivering the coup de grace to the rustbelt, the US might shed a few million more manufacturing jobs over the next decade. The job losses combined with AI psychosis, social media, and all the problems with phones could make national politics meaningfully worse.

I don’t think this scenario is likely to be successful. Authoritarian systems have always hoped for the implosion of liberal democracies, while it is the liberal democracies that have a better track record of endurance. But I also don’t think that authoritarian countries are obviously wrong to bet that western polarization will get worse. So it’s up to the US and Europe to show that they can hold on to their values while absorbing the technological changes coming their way. 

[---]

I wish that the tech world could learn to present broader cultural appeal. I hope that Silicon Valley could learn some of the humorousness of New York (or at least LA). It’s unfortunate that any show or movie made about Silicon Valley is full of awkward nerds; by contrast, Hollywood reliably finds attractive leads when it makes movies about Wall Street. So long as the tech world is talking about the Machine God and the Antichrist, so long as it declines to read more broadly, so long as it is mostly inward-looking, it will continue to alienate big parts of the world.



Friday, November 21, 2025

AI Will Never Be A Shortcut To Wisdom

After nearly forty years teaching graduate students and advising some of the most inventive companies on the planet, I’ve earned the right to sigh a bit. But this isn’t about “kids these days.” In fact, it’s not about youth at all. The shift I’m seeing — this collapse of intellectual agility — is striking all generations. All cultures. All walks of life.

Studies on cognitive flexibility, coupled with anecdotal observations about the death of long-form journalism and the slow drift of reader attention, suggest something dire: We are growing unable to sit still with ambiguity. We no longer walk through the fog of a complex question — we skip across it, like stones. Our thoughts sprint, but the world is a marathon. And so, we are left with answers to the wrong questions.

What happens when we can no longer think through contradiction, paradox, tension? When climate change, homelessness, political division, and regional conflict are seen as disconnected problems with easy answers — when, in truth, they are tangled systems that resist simplicity?

The answer is only simple if you don’t understand the question.

This is the danger of living in a world where thinking is outsourced. Where cognition becomes project management. Where uncertainty is eliminated, not explored. Where truth is boxed and shelved, not wrestled with. If the world is a box of nails — individual facts, sharp and ready — then our minds become hammers. Tools of force and certainty. Banging out conclusions. Flattening nuance. And who builds a cathedral with a hammer? Who composes a symphony with a hammer?

This is no way to live. Because if you see the world as nails, you’ll mistake noise for knowledge. You’ll assume volume means validity. And when you no longer know how to recognize true expertise — because you yourself have never gained any — you will fall for the confident fool. The YouTube doctor. The Instagram monk. The LinkedIn philosopher.

[---]

If you want to reclaim your mind — not as a hammer, but as a compass, or a loom, or a garden — start here:

  • Ask better questions.
  • Be suspicious of certainty.
  • Practice long-form attention.
  • Sit with something confusing until it teaches you something.

We are not meant to be hammerheads in a world of nails. We are meant to wonder, to wander, to build. The true mind does not pound — it inquires, connects, reshapes. It listens to contradiction without collapsing. It plays. And most of all, it remembers that the world was never simple. It was just, for a while, flattened by search engines. 

- More Here


Sunday, November 9, 2025

Misusing Wisdom From Books via Motivated Misreading (a.k.a. Using It As A How-To Manual)

In a letter to investors earlier this year, he even approvingly quoted Samuel Huntington of “clash of civilisations” fame, highlighting his claim that the rise of the West was not made possible “by the superiority of its ideas or values or religion… but rather by its superiority in applying organised violence”.

- More Here, from a review of the new book The Philosopher in the Valley: Alex Karp, Palantir and the Rise of the Surveillance State by Michael Steinberger

And what does Palantir actually do? (hint: nada on innovation, nor any groundbreaking AI)

What does Palantir actually do? 

It’s a question that comes up time and time again in social media. 

It’s also surprisingly easy to address, despite the company’s occult reputation: Palantir collates disparate sources of data and makes them easy to search. It is Google for chaotic organisations; its software connects various databases and computer systems into a single unified platform.

If the company’s services could be applied to your life, it would look like a team of specialists who arrive at your house and rifle through your desk, updating your to-do lists, contacts and calendars; syncing and sorting the files you have scattered across a half-dozen old phones and hard drives, and generally Making Things Organised. Wouldn’t you pay good money for such a service? Of course you would.

Now, imagine you’re a country and this pandemonium is not personal but institutionalised – encompassing not just a few email inboxes and old USBs, but, say, an entire healthcare system, including payroll, procurement, and insurance, or a medium-sized war. Wouldn’t you then pay a lot of money? Wouldn’t you in fact pay millions and millions and be extremely thankful to whoever sorted this mess on your behalf? Thus: Palantir’s rise.
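To make the metaphor concrete, here is a minimal sketch (my toy code; the data sources, field names, and records are all invented, and the real product is vastly more elaborate) of pulling records about one person out of incompatible systems into a single searchable index:

```python
# A toy sketch of "connect scattered systems into one searchable platform".
# All sources, field names, and records below are invented for illustration.
contacts = [{"name": "Ada Lovelace", "email": "ada@example.com"}]
payroll = [{"employee": "Ada Lovelace", "salary": 52000}]
tickets = [{"opened_by": "Ada Lovelace", "issue": "VPN access request"}]

def unify(sources):
    """Flatten heterogeneous records into a single index keyed by person."""
    index = {}
    for source_name, records, person_field in sources:
        for record in records:
            index.setdefault(record[person_field], []).append((source_name, record))
    return index

index = unify([("contacts", contacts, "name"),
               ("payroll", payroll, "employee"),
               ("tickets", tickets, "opened_by")])

# One query now surfaces everything the organisation holds on a person.
for source, record in index["Ada Lovelace"]:
    print(source, record)
```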

 


Sunday, October 26, 2025

Andrew Interview With Karen Hao - On AI

A long long time ago, Michael Lewis in an interview said something very wise about his first book Liar's Poker. I am paraphrasing here:

"I wrote Liars Poker to expose all the bad things Wall Street is doing but little did I realize, lot of people were using this book as a 'how to guide' ! " 

I have personally lived and worked in these same industries during the dot-com bubble and the real estate crisis, and, as irony would have it, now in AI.

What I am seeing is deja vu with AI - millions are using it as a 'how-to guide' to make a quick buck, although most know this is pure snake oil and it is going to come down sooner or later.

From tulips to AI - human freaking beings never learn... well, actually, they are freaking good at self-deception (hence, I love love Robert Trivers' work).

Brilliant interview (albeit they missed an important technical point - none of this was pioneered by OpenAI) with Karen Hao, author of the new book Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI

Not many people make this connection, but Andrew is not like other people - he puts Thiel, Musk and Altman in the same bucket as troubled creatures without morals.

It's not clear to me that scaling AI models aggressively somehow makes them more dangerous in terms of military applications. Like, to me, the things that are dangerous for military applications are actually extremely simple AI models. Like, the reporting on Lavender that the Israelis were using to identify targets in Palestine, that was, like, an extremely basic machine learning model that was practically just linear algebra. And the other thing that people worry about is autonomous weapons, and you do not use large language models to develop autonomous weapons. You use things like computer vision for identifying a target and then autonomously operating the weapon.

[---]

A lot of the concerns that people have about AI and military is they're actually talking about totally different types of technologies than what these companies are building. But then the companies are using the confusion to their advantage to say, oh, yeah, like keep giving us all the resources to build this completely wholly unrelated AI technology.

[---]

I'm not that interested in boardroom struggles and all that stuff, I have to say. But the one thing that I did get from that is that we forget these people are humans. The very brilliant ones, the ones making a lot of these decisions, Altman, Sutskever, Murati, Musk, Thiel, they're all complicated, flawed human beings. And just because they're working in this industry doesn't mean they have some sort of super intelligence or super morality. They don't. Altman himself, and this brings us to another sort of upsetting thing, which is his sister, which is this other story that comes out later, which also begins to create a general sense of unease about this guy, because she has claimed publicly on many occasions that she was abused sexually by her family, including her brother, for many years. She ended up in a pretty rough state. I mean, she was basically reduced to OnlyFans to keep herself going.

And she's the sister of this person. Of course, anybody commenting on another family's dynamics is on shaky ground - it's obviously something I can't understand and don't want to judge. But nonetheless, the disparity between this poor woman's utter cutoff, her utter isolation and despair, and the massive enterprise that her brother is undertaking is striking.

[---]

Where we need to focus AI development in the future is moving away from large-scale models that are intended to be some kind of general-purpose tool. We should really be focusing on small, task-specific models again, which is what AI actually used to be. And the reason is that it's so much less energy-intensive.

You can train, you know, a cancer detection AI model on something like a powerful computer. You do not actually need cities of iPhones, as you so eloquently put it. And, like, that's, you know, that's very little cost for an extraordinary benefit. We want more cancer-detecting AI.

We also want more AI that can reduce the energy consumption of a building. We want more AI that can help do more accurate weather prediction and climate crisis prediction so that we can evacuate people more accurately when climate disasters strike.

But what Altman might say in return is: but you don't understand, AGI will solve climate change. Of course, which he says all the time. And we'll get nuclear fusion within a few minutes once the Big Brain comes online. What are we going to call this thing?

What the fuck are we going to call this giant bloody thing that we all have to worship or that has the supreme intelligence? But yeah, that is the ultimate win-all argument, which is that, look, what we're developing is so smart, it will solve all the problems it creates.

And I have a facetious answer and a more legitimate answer. My facetious answer is throughout history, there have been people that have promised some kind of thing that will solve all your problems. And they have always been charlatans. Like if someone knocked on your door in the medieval ages and was like, I have this potion that's going to solve all your problems, you just have to give me everything, like your firstborn child and everything. Like you would be like, wait a minute, something's not quite adding up here. And now fast forward to today, that is essentially what these AI companies are saying.

They're like, give us everything and then we will give you a solution to all of your problems. I mean, if you just abstract it to that level, it suddenly becomes blatantly obvious what's actually happening. This is entirely a scam. But the less facetious argument is like, they are telling us, ignore all of the current, real, present-day problems based on the promise of something potentially arriving in the future. They've never actually, you know, we cannot guarantee that this technology is going to deliver all these things that they say they will. So how long are we willing to burn down our planet and run down our resources and gouge out our economy and do all of these things for the speculative payoff? Like, at what point do we decide, wait a minute, why don't we actually just reinvest all this capital in solutions that we know will pay off?

[---]

It needs to be dealt with by people who just live ordinary lives. And it needs to be brought back to the human. And what some of these individuals aspire to - I think of Thiel particularly - is truly, truly beyond irresponsible in my view, and inhuman. You see in their desire to live forever the obvious natural conclusion of where they are going: they want to be gods. And AI, AGI, is really their pathway to becoming gods. And we're not gods. It is insane to pretend otherwise. And we're going to destroy ourselves if we do it.

[---]

And the one possible solace - which would be that the people leading these companies are actually solid, moral, sane people - seems to be lacking. I mean, honestly... You just observe Elon Musk's tweets and you're like, I understand this man is obviously a genius in many ways, right? The evidence of his achievements is overwhelming.

But he's out of his fucking mind. And the things he's saying are just so loony. The story you tell of Sam Altman is of a deeply disturbed person. A really fucked up person. I'm sorry. I don't know where he's coming from. I feel... I feel kind of proud that a young gay man, openly gay man, has done this. But we gays, we often spend a lot of time in childhood alone looking at computers and things. I mean, it's not an accident that we're overrepresented in many ways at the top of many companies.

But at the same time, boy, are they not well. And they don't have... values, structures, morals that most of us would understand as solid. I mean, Peter Thiel says he's a Christian because he's read René Girard, but I'm sorry, no, I don't see it that way at all.


Saturday, October 25, 2025

Well-Defined Problems vs. Poorly-Defined Problems

I hate compliments. This is not fake humility - I really do hate compliments, and to make it worse, my red flags light up about the person who compliments me. In other words, I don't trust humans who compliment me.

A few times in my life I received a compliment I liked, since I work hard for it.

That word is - wisdom. A few times in my life, I heard someone utter the phrase - you are wise. 

And I gladly took that compliment as a commitment to work harder.

Work hard for what? To be not bad at poorly defined problems, a.k.a. trying to be a little less stupid tomorrow than I am today.

This is such a wonderful article on the same theme - Why aren't smart people happier?

I think all of our various tests of intelligence aren’t as different as they seem. They’re all full of problems that have a few important things in common:

  • There are stable relationships between the variables.
  • There’s no disagreement about whether the problems are problems, or whether they’ve been solved.
  • They have clear boundaries; there is a finite amount of relevant information and possible actions.
  • The problems are repeatable. Although the details may change, the process for solving the problems does not.

I think a good name for problems like these is well-defined. Well-defined problems can be very difficult, but they aren’t mystical. You can write down instructions for solving them. And you can put them on a test. In fact, standardized test items must be well-defined problems, because they require indisputable answers. Matching a word to its synonym, finding the area of a trapezoid, putting pictures in the correct order—all common tasks on IQ tests—are well-defined problems.

Spearman was right that people differ in their ability to solve well-defined problems. But he was wrong that well-defined problems are the only kind of problems. “Why can’t I find someone to spend my life with?” “Should I be a dentist or a dancer?” and “How do I get my child to stop crying?” are all important but poorly defined problems. “How can we all get along?” is not a multiple-choice question. Neither is “What do I do when my parents get old?” And getting better at rotating shapes or remembering state capitals is not going to help you solve them.

We all share some blame with Spearman, of course, because everybody talks about smarts as if they’re one thing. Google “smartest people in the world” and most of the results will be physicists, mathematicians, computer scientists, and chess masters. These are all difficult problems, but they are well-defined, and that makes it easy to rank people. The best chess player in the world is the one who can beat everybody else. The best mathematician is the one who can solve the problems that nobody else could solve. That makes it seem like the best chess players and mathematicians are not just the smartest in their fields, but the smartest in the whole world.

THE POORLY DEFINED PROBLEM OF BEING ALIVE

There is, unfortunately, no good word for “skill at solving poorly defined problems.” Insight, creativity, agency, self-knowledge—they’re all part of it, but not all of it. Wisdom comes the closest, but it suggests a certain fustiness and grandeur, and poorly defined problems aren’t just dramatic questions like “how do you live a good life”; they’re also everyday questions like “how do you host a good party” and “how do you figure out what to do today.”

One way to spot people who are good at solving poorly defined problems is to look for people who feel good about their lives; “how do I live a life I like” is a humdinger of a poorly defined problem. The rules aren’t stable: what makes you happy may make me miserable. The boundaries aren’t clear: literally anything I do could make me more happy or less happy. The problems are not repeatable: what made me happy when I was 21 may not make me happy when I’m 31. Nobody else can be completely sure whether I’m happy or not, and sometimes I’m not even sure. In fact, some people might claim that I’m not really happy, no matter what I say, unless I accept Jesus into my heart or reach nirvana or fall in love—if I think I’m happy before all that, I’m simply mistaken about what happiness is!

This is why the people who score well on intelligence tests and win lots of chess games are no happier than the people who flunk the tests and lose at chess: well-defined and poorly defined problems require completely different problem-solving skills. Life ain’t chess! Nobody agrees on the rules, the pieces do whatever they want, and the board covers the whole globe, as well as the inside of your head and possibly several metaphysical planes as well.

[---]

So if you’re really looking for a transformative change in your happiness, you might be better off reading something ancient. The great thinkers of the distant past seemed obsessed with figuring out how to live good lives: Socrates, Plato, Aristotle, Epicurus, Buddha, Confucius, Jesus, Marcus Aurelius, St. Augustine, even up through Thoreau and Vivekananda. But at some point, this kind of stuff apparently fell out of fashion.

And hey, maybe that’s because there’s just no more progress to make on the poorly defined problem of “how do we live.” But most well-defined problems were once defined poorly. For example, “how do we land on the moon” was a hopelessly poorly defined problem for most of human history. It only makes sense if you know that the moon is a big rock you can land on and not, say, a god floating in the sky. We slowly put some definitions around that problem, and then one day we sent an actual dude to the moon and he walked around and was like “I’m on the moon now.” If we can do that, maybe we can also figure out how to live good lives. It certainly seems worth it to keep trying.


 

Monday, October 20, 2025

Most Important Sentences... To Stop Intellectual Bullshit

The idea of AI sentience remains trapped in the misguided paradigm of evaluating non-human intelligence by its resemblance to human behavior. 
It is sad that our society is so generous in considering the sentience of machines, yet so skeptical of other creatures. 
We sympathize with software that prints “I don’t want to die,” without bothering to learn the languages others use to make the same plea.

[---]

All life has value. Even if they aren’t sentient, the endangered wildflower and the ancient coastal redwood should not be cut. However, it is logical and noble to extend special protections to animals, whom we know can suffer pain. It is natural to be partial to our fellow humans and to feel an indescribable connection to our favorite animals. But we must acknowledge that there is no objective basis to these preferences. It is equally valid to appreciate and value dogs as it is cats, or for that matter pigs, chickens, anchovies, or oysters. Founding the case for animal rights upon the universal value of all life imparts a more robust epistemology that does not undermine itself by ranking the value of species against one another.

We all know how it feels to be hurt, perhaps even in a way that no one else seems to understand. In these moments, we wish for nothing more than someone to acknowledge our pain. Sentience imparts us visceral, universal signals which we innately recognize in others, but have been conditioned to disbelieve. Other life forms cannot describe their pain to us, yet we can still listen. If there is a line of moral worth to be drawn across our tree of life, it should be below, through the common roots from which we all grow. Our world is so much more complex and wondrous than the myth of human supremacy would have us believe.

- More Here

In other words, morons are talking about "pain" in AI while feeding on the dead bodies of beautiful and sentient animals.


Sunday, September 28, 2025

Lessons from a Chimp: AI ‘Scheming’ and the Quest for Ape Language

I am in the field. 

I find it nauseating to observe people who know zilch about how many benefits machine learning has brought and could bring, while these same people go gaga over LLMs. It's a sad state for a field with so much potential.

Secondly, this paper hits the nail on the head. For decades and still today, non-human animal intelligence has been dismissed because of anthropomorphism (while the current AI/LLM Ponzi scheme is built, again, on this misguided anthropomorphism, and god knows how much financial damage it will cause when the market crashes).

Here's my take: 

  • Humans and AI face opposite "qualia" problems. Human qualia problems are easy for AI to bullshit and explain (and sometimes to explain well, without bullshit); conversely, AI qualia problems are almost always extremely easy for humans.
  • Stop fucking talking about AGI. There hasn't been even a dent in cyberattacks based on AI. If AGI were coming, trust me - everything in our digital lives, from our bank accounts on down, would be in jeopardy. We would not just know it; it would hit hard, like a tsunami, in every aspect of our lives.

Read the whole thing here

The UK AI Security Institute published a new paper: “Lessons from a Chimp: AI ‘Scheming’ and the Quest for Ape Language.” It criticizes the “recent research that asks whether AI systems may be developing a capacity for scheming.”

“Scheming” means strategically pursuing misaligned goals. These “deceptive alignment” studies examine, for example, strategic deception, alignment faking, and power seeking.

The team, which consists of a dozen AI safety researchers, warns that recent AI ‘scheming’ claims are based on flawed evidence.

The paper identifies four methodological flaws in studies conducted by Anthropic, METR, Apollo Research, and others:

1. Overreliance on striking anecdotes.

2. Lack of hypotheses or control conditions.

3. Insufficient or shifting theoretical definitions.

4. Invoking mentalistic language that is unsupported by data.

Accordingly, these are AISI’s conclusions:

“We call on researchers studying AI ‘scheming’ to minimise their reliance on anecdotes, design research with appropriate control conditions, articulate theories more clearly, and avoid unwarranted mentalistic language.”

The AISI researchers drew a historical parallel to previous excitement about “the linguistic ability of non-human species.” “The story of the ape language research of the 1960s and 1970s is a salutary tale of how science can go awry.”

“There are lessons to be learned from that historical research endeavour, which was characterised by an overattribution of human traits to other agents, an excessive reliance on anecdote and descriptive analysis, and a failure to articulate a strong theoretical framework for the research.”

“Many of the same problems plague research into AI ‘scheming’ today,” stated Christopher Summerfield, AISI Research Director, when he posted the article (on July 9, 2025).

Broader lesson: Non-human intelligence (biological or artificial) requires extra-strong evidence, not extra-lax standards.

[---]

“Most AI safety researchers are motivated by genuine concern about the impact of powerful AI on society. Humans often show confirmation biases or motivated reasoning, and so concerned researchers may be naturally prone to over-interpret in favour of ‘rogue’ AI behaviours. The papers making these claims are mostly (but not exclusively) written by a small set of overlapping authors who are all part of a tight-knit community who have argued that artificial general intelligence (AGI) and artificial superintelligence (ASI) are a near-term possibility. Thus, there is an ever-present risk of researcher bias and ‘groupthink’ when discussing this issue.”


Tuesday, September 2, 2025

The Devil Admits - We Are In An AI Bubble

I am in the field and I have been tired of this bullshit for 2-3 years now. I mean unbelievable bullshit, with everyone who doesn't even know the formula for the area of a circle using the term AGI.

Now the guy who spread this bullshit admits, well, it's bullshit (and he continues to make money - pure PayPal mafia strategy):

First, he says AGI is not the right term:

OpenAI CEO Sam Altman said artificial general intelligence, or “AGI,” is losing its relevance as a term as rapid advances in the space make it harder to define the concept.

AGI refers to the concept of a form of artificial intelligence that can perform any intellectual task that a human can. For years, OpenAI has been working to research and develop AGI that is safe and benefits all humanity.

“I think it’s not a super useful term,” Altman told CNBC’s “Squawk Box” last week, when asked whether the company’s latest GPT-5 model moves the world any closer to achieving AGI. The AI entrepreneur has previously said he thinks AGI could be developed in the “reasonably close-ish future.”

And he spread this bullshit just last year:

OpenAI CEO Sam Altman says concerns that artificial intelligence will one day become so powerful that it will dramatically reshape and disrupt the world are overblown.

“It will change the world much less than we all think and it will change jobs much less than we all think,” Altman said at a conversation organized by Bloomberg at the World Economic Forum in Davos, Switzerland.

Altman was specifically referencing artificial general intelligence, or AGI, a term used to refer to a form of AI that can complete tasks to the same level, or a step above, humans.

He said AGI could be developed in the “reasonably close-ish future.”

Plus, now he is saying this is a bubble (even a guy like me knew this for years, and he played everyone for so long):

As economists speculate whether the stock market is in an AI bubble that could soon burst, OpenAI CEO Sam Altman has just admitted to believing we’re in one. “Are we in a phase where investors as a whole are overexcited about AI?” Altman said during a lengthy interview with The Verge and other reporters last night. “My opinion is yes.”

In the far-ranging interview, Altman compared the market’s reaction to AI to the dot-com bubble in the ’90s, when the value of internet startups soared before crashing down in 2000. “When bubbles happen, smart people get overexcited about a kernel of truth,” Altman said. “If you look at most of the bubbles in history, like the tech bubble, there was a real thing. Tech was really important. The internet was a really big deal. People got overexcited.”

He added that he thinks it’s “insane” that some AI startups with “three people and an idea” are receiving funding at such high valuations. “That’s not rational behavior,” Altman said. “Someone’s gonna get burned there, I think.”

People got overexcited? Such a snake oil salesman, this guy, just like his PayPal peers.

Yes, AI is extremely useful. Machine learning, deep learning, and other algorithms have brought so many benefits for more than a decade. But promoting LLMs (a useful tool) as a panacea was done by a handful of folks like him in the industry.

The point is they knew it was bullshit and yet they spread it.


Thursday, August 28, 2025

How to Be a Good Intelligence Analyst

Such a wonderful piece! Highly recommended.

Please read the whole thing here

Because learning institutionally is hard?

Learning institutionally is hard. Not only is it hard to do, but it's also hard to measure and to affect. But, if nothing else, practitioners became more thoughtful about the profession of intelligence. To me, that was really important. The CIA is well represented by lots of fiction, from Archer to Jason Bourne. It's always good for the brand. Even if we look nefarious, it scares our adversaries. But it's super far removed from reality. Reality in intelligence looks about as dull as reality in general. Being a really good financial or business analyst, any of those kinds of tasks, they're all working a certain part of your brain that you can either train and improve, or ignore and just hope for the best.

[---]

What do American intelligence analysts do if not the fun stuff from the Bourne movies?

They read, they think, they write. They write some more, they edit, they get told their writing sucks. They go back, they start over again. Some manager looks at it and says, "Is this the best you can write?" And they say, “No.” And they hand it back to them, and off they go to write it again. It’s as much of a grind as any other analytic gig. You're reading, thinking, following trends, looking for key variables.

Analysts who are good on their account generally have picked up very specific tips and tricks that they may not even be able to articulate. The best performers in the agency had a very difficult time explaining how it was they went about their analysis, and articulating their expertise. That's not unusual. Experts really aren't very good at articulating why or how they're experts, but we do find that after 10,000-ish cases, they get better, because they're learning what to look for and what not to.

That comes with some penalties. The more hyper-focused you are on topic X, the less likely you are to think that topic Y is going to affect it. And often it's topic Y that comes in orthogonally and makes chaos. “How do you create expert-novice teams?” was a question that we struggled with: finding the right balance between old and new hands, because you wanted the depth of expertise along with the breadth of being a novice. Novices would try anything because nobody told them they couldn't. That's a very valuable thing to learn from. If you're an analyst or an analytic manager, the challenge is how to balance that structure.

[---]

That old model seems more James Bond-y. The character goes more places for the movie at the cost of effectiveness.

A consistent problem is that the effectiveness measures are poorly articulated and poorly understood by both the consumers and the customers. The best consumer of intelligence that I have ever interacted with was Colin Powell. He had a very simple truism: "Tell me what you know, tell me what you don't know, then tell me what you think, so that I can parse out what you're saying and make sense of it.” He was a remarkably savvy consumer of intelligence.

Not all consumers are that savvy. Many of them would benefit from spending a little time learning more about the community, understanding the relationship with their briefers and analysts. The more engaged the policymakers are in learning about intelligence, the more savvy they'll get as consumers. Until then, you're throwing something over the transom and hoping for the best. It's not a great way to operate if you have consumers who want your product.

Who were some relatively poor consumers of intelligence information?

There are so many. Dick Cheney was not a poor consumer of intelligence. He just had an agenda, and he understood the discipline well enough to exercise that agenda. [Donald] Rumsfeld was not good. And [Paul] Wolfowitz was much worse at it than he thought. There were some others in that administration, and I don't mean to pick on them. There were plenty of lousy consumers under Obama and under Clinton. Not a lot of them take enough time to really think about what they're getting.

The biggest problem that I have found with ambassadors, generals, or other consumers is they'll go out into the world, shake hands with their counterpart, and decide based on that interaction that they understand their counterpart better than anybody else does. "I went to lunch with so-and-so, I should know." The problem is that so-and-so is not going to tell you the truth. If so-and-so is going to do something, going to lunch with him probably isn't going to be very revealing. He's probably going to tell you what you want to hear. You'd be surprised how many consumers don't even think about that possibility. It boggles my mind.

It is funny you mention Donald Rumsfeld as a poor consumer of information, because one of his famous truisms was that he wanted you to explain your “known knowns” and your “unknown unknowns.” My first impression would be that he’d be a good consumer.

The problem with the Rumsfelds and the Kissingers is that maybe they are the smartest person in the room, but maybe they should stop believing that for a while. That gets in their way. They just assume from the jump that they're smarter than everybody. Not just everybody individually, but everybody collectively. There's a certain amount of ego that goes along with all of this. When the ego gets sufficiently inflated, you reject information that is contrary to your own values, mental model, and thought processes. You assign outlier status to anything that doesn't conform with the way you think about a problem. That's expertise run amok.

That's where people like Rumsfeld or Kissinger come off the rails. They just assume, "Well, I'm smarter than everybody, so I'll figure it out. You just give me raw data." I have not seen a terribly successful model of that. It's better to walk into a room and assume that you're not remotely the smartest person there. You're doing yourself a cognitive disservice if you think you're cleverer than everybody else. It's a rookie mistake, but you see it over and over, and if it works for you and you keep getting promoted, eventually you start to believe it.

It doesn't seem like a rookie mistake to me. It seems like the mistake of a seasoned professional.

You're right. It is a longevity error.


Sunday, June 15, 2025

Shadow-Boxing with AI Safety

Notice the moral ambiguity in this problem, and how much more difficult that makes it to work with. Some people, including many who gravitate to technical fields like AI research, would prefer to stick to engineering problems, where there’s a clear right answer. To some extent that’s OK, but then they have to admit that they don’t have any say in how their work will affect the world. They’re essentially a pawn in the hands of whoever decides what topic they work on, and that’s normally whoever supplies the money.

AI safety, as it currently stands, allows the AI world to feel that they do have control over what they’re creating. Of course, they say AI safety is a hard problem and requires more work, but they feel they have basically pinned down what they need to do to avoid things turning out badly, and that it is at this point an engineering problem.

For those AI researchers who are not comfortable with being a pawn in the game, the right place to begin is with the high-level questions of what we want AI to be and what we don’t want it to be, and clearly the ethics of building killing machines is a big part of these questions. This means spending the time to understand parts of the world outside your familiar culture, broaching topics that make people uncomfortable, and tackling morally ambiguous questions that can’t be solved as cleanly as technical ones can.

AI safety blocks people from doing these things, because it gives the illusion that the matter is already being dealt with. That’s the whole purpose of this sort of shadow-boxing: to allow people the comforting but false belief that they’re wrestling with the big issues. Oh, you’re concerned about how AI is shaping the world? Great, join the AI safety team, we’ve already identified the key areas for you to work on. We even have metrics and benchmarks and datasets, so just engineer a way to make one of these scores higher and you’re doing your bit to make AI safe.

[---]

AI is a subject that I came to out of a quasi-spiritual impulse to understand the nature of the mind and the self, and it’s now getting roped into the most powerful and destructive systems on the planet, including the military-industrial complex, and, potentially, the outbreak of the next major global conflicts. Trying to navigate this central and rapidly changing position brings a host of new questions–questions that are unfamiliar, controversial and ambiguous–and so far the AI world has barely found the courage even to ask them.

- More Here


Saturday, March 22, 2025

Kevin Kelly's Words Of Wisdom On AI, Simulation et al.

Thinking (intelligence) is only part of science; maybe even a small part. As one example, we don’t have enough proper data to come close to solving the death problem. In the case of working with living organisms, most of these experiments take calendar time. The slow metabolism of a cell cannot be sped up. They take years, or months, or at least days, to get results. If we want to know what happens to subatomic particles, we can’t just think about them. We have to build very large, very complex, very tricky physical structures to find out. Even if the smartest physicists were 1,000 times smarter than they are now, without a Collider, they will know nothing new.

[---]

There is no doubt that a super AI can accelerate the process of science. We can make computer simulations of atoms or cells, and we can keep speeding them up by many factors, but two issues limit the usefulness of simulations in obtaining instant progress. First, simulations and models can only be faster than their subjects because they leave something out. That is the nature of a model or simulation. Second, the testing, vetting, and proving of those models also has to take place in calendar time to match the rate of their subjects. The testing of ground truth can’t be sped up.
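Kelly’s first point, that a model can only run faster than reality because it leaves something out, is easy to make concrete. Here is a toy sketch of my own (not from Kelly; the decay model and step sizes are arbitrary) in which a coarser simulation finishes in far fewer steps but drifts further from the exact answer:

import math

def simulate_decay(y0, rate, t_end, dt):
    # Integrate dy/dt = -rate * y with forward Euler steps of size dt.
    y, t, steps = y0, 0.0, 0
    while t < t_end:
        y += -rate * y * dt
        t += dt
        steps += 1
    return y, steps

exact = math.exp(-10.0)  # closed-form result for y0=1, rate=1, t_end=10

for dt in (0.001, 0.1):  # a fine model vs. a coarse model of the same process
    y, steps = simulate_decay(1.0, 1.0, 10.0, dt)
    print(f"dt={dt}: {steps} steps, result={y:.6f}, error={abs(y - exact):.6f}")

The coarse run is a hundred times cheaper precisely because it skips detail, and its answer sits correspondingly further from the truth; deciding whether that trade-off is acceptable is the vetting-against-reality step that, as Kelly says, cannot be sped up.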

- More Here


Wednesday, February 14, 2024

Is AI Anti-Animal?

I am in the field and, stupid me, I missed this.

The data used to train the models are created by humans, and humans look down on animals. This is not good for the future of animals. We need a way to minimize this bias in the data.

In general, AI should go beyond its training data, but we are not there yet. So we are stuck with "cleaning" the data to remove bias (and good luck with that).
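To see why "good luck with that" is warranted, consider a deliberately naive sketch of my own (the keyword patterns are made up) of the kind of filtering pass a cleaning effort might start from:

# Flag training sentences that contain a known speciesist phrase.
SPECIESIST_PATTERNS = ["just an animal", "dumb beast", "only a pig"]

def flag_speciesist(sentence: str) -> bool:
    lowered = sentence.lower()
    return any(pattern in lowered for pattern in SPECIESIST_PATTERNS)

corpus = [
    "A cow is just an animal, so its suffering matters less.",   # caught
    "Livestock are production units to be optimized.",           # missed: no keyword
    "The phrase 'just an animal' is a prejudice worth naming.",  # wrongly flagged
]

for sentence in corpus:
    print(flag_speciesist(sentence), "|", sentence)

A keyword list misses subtle framing and flags innocent mentions, so real debiasing would need something far more sophisticated, which is exactly why cleaning the data is not a solved problem.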

It should come as no surprise, then, that AI is also a replicator, perpetuator, and normalizer of speciesism.  

We need to work with big tech to eradicate this bias before this anti-animal virus, which has already infected human brains, spreads into the future.

Speciesism – or “the belief that a mere difference in species justifies us in giving more weight to the interests of members of one species (usually our own . . . ) than the similar interests of members of other species” – is “a prejudice, similar to sexism and racism” that underlies all human exploitation of other-than-human animals, including inside laboratories.

Unlike other forms of human-on-human discrimination like sexism and racism, however, speciesism is not “widely accepted” to be “wrong”, and its “biased views and actions [] are shared, accepted, and performed by a large majority of society”. As a result, its elimination from AI is not a “high priority” (if it’s even on the list at all…):

“Massive efforts are made to reduce biases in both data and algorithms to render AI applications fair. These efforts are propelled by various high-profile cases where biased algorithmic decision-making caused harm to women, people of color, minorities, etc. However, the AI fairness field still succumbs to a blind spot, namely its insensitivity to discrimination against animals.”

[---]

To spark change ourselves, all we have to do is change the world (something we in the animal rights movement were already planning to do anyway, right?!).

At present, “none of the major AI companies [] have any mention of animals in their ethical guidelines, and they’re not instructing data workers to consider how responses affect animals”. This means that speciesism will continue to be “hardwired into algorithms running our lives”.

And this means that those of us in the animal rights movement must remain vigilant in our opposition to oppression in all of its forms – for changing our machines’ reflections of our world requires changing our world itself; and changing our world itself requires each and every one of us taking action.

Monday, October 23, 2023

The Ends of Knowledge - Outcomes and Endpoints Across the Arts and Sciences

The greatest error of all is the mistaking or misplacing of the last or furthest end of knowledge. Its ‘true ends’ were not professional reputation, financial gain, or even love of learning but rather ‘the uses and benefits of life, to improve and conduct it in charity’.

- Francis Bacon, The Advancement of Learning (1605)

I think this ends-of-knowledge piece is a good follow-up to Hanno Sauer's paper End Of Philosophical Historiography.

This is a much-needed call to stop spinning the wheels on endless abstractions and start working towards consilience, E.O. Wilson's famous call for the unity of knowledge. For people in love with "abstract" philosophy, this is a call not just to read those works but to incorporate them into other disciplines, so that we can act on their insights in everyday life.

I mean, what makes me a little less dumb every day is trying, in whatever way possible, to bring together what little knowledge I have. Say, how can the moral philosophy of the Buddha, the Stoics, and Montaigne help AI become a little more ethically aware in the future? Or something even simpler, such as learning to experience a little discomfort by getting rid of plastics.

In this way, the Enlightenment offers a model of how the end of one view of knowledge production can be a launchpad for new ideas, methods and paradigms. The fracturing and decline of Aristotelian scholasticism during the Renaissance gave rise to a host of philosophies devised to replace it. The conflicts of the Thomists and Scotists, the inadequacies of revived Hellenistic doctrines, the discomforting mysticism of Rosicrucianism and Kabbalah, and even the failed promise of Platonism to provide a modern, comprehensive alternative to Aristotle led thinkers like Bacon to seek answers in other fields.

Bacon’s terms – exitus, finis, terminus – suggest a focus on endpoints as well as outcomes. Knowledge, in his philosophy, had ends (ie, purposes) as well as an end (a point at which the project would be complete). The new science, he believed, would lead to ‘the proper end and termination of infinite error’ and was worth undertaking precisely because an end was possible: ‘For it is better to make a beginning of a thing which has a chance of an end, than to get caught up in things which have no end, in perpetual struggle and exertion.’ Bacon believed scientists could achieve their ends.

[---]

The first two definitions relate most directly to the work of a discipline or an individual scholar: what is the knowledge project being undertaken, and what would it mean for it to be complete? Most scholars are relatively comfortable asking the former question – even if they do not have clear answers to it – but have either never considered the latter or would consider the process of knowledge production to be always infinite, because answering one question necessarily leads to new ones. We argue that even if this were true, and a particular project could never be completed within an individual’s lifetime, there is value in having an identifiable endpoint. The third meaning – termination – refers to the institutional pressures that many disciplines are facing: the closure of centres, departments and even whole schools, alongside political pressure and public hostility.

Over all this looms the fourth meaning, primarily in the context of the approaching climate apocalypse, which puts the first three ends into perspective: what is the point of all this in the face of wildfires, superstorms and megadrought? For us, this is not a rhetorical question. What is the point of literary studies, physics, history, the liberal arts, activism, biology, AI and, of course, environmental studies in the present moment? The answers even for the latter field are not obvious: as Myanna Lahsen shows in her contribution to our volume, although the scientific case is closed as far as proving humans’ effect on the climate, governments have nevertheless not taken the action needed to avoid climate catastrophe. Should scientists then throw up their hands at their inability to influence political trends – indeed, some have called for a moratorium on further research – or must they instead engage with social scientists to pursue research on social and political solutions? What role do disciplinary norms separating the sciences, social sciences and humanities play in maintaining the apocalyptic status quo?

To some extent, then, particular ends are less important than the possibility of discovering a shared sense of purpose. Ultimately, we hope to show what the benefits would be of knowledge projects starting with their end(s) in mind. How can we get anywhere if we cannot even say where we want to go? And even if we think we have goals, are we actually working toward them? Ideally, a firm sense of both purpose and outcome could help scholars demonstrate how they are advancing knowledge rather than continuing to spin their wheels.

[---]

At the same time, these ends are necessarily interconnected, and individual research projects would likely fit into several at once. As Hong Qu argues in his contribution to our book, for example, individual researchers and teams working towards autonomously learning AI systems, or artificial general intelligence (AGI), will need more deliberate exposure to moral philosophy, political science and sociology to ensure that ethical concerns and unintended consequences are not addressed on an ad hoc basis or after the fact but are anticipated and made integral to the technology’s development. Educators, activists and policymakers will concordantly need more practical knowledge about how AI works and what it can or cannot do. Achieving the immediate end of AGI entails the pursuit of a new and more abstract end greater than the sum of its disciplinary parts: ‘a governance framework delineating rules and expectations for configuring artificial intelligence with moral reasoning in alignment with universal human rights and international laws as well as local customs, ideologies, and social norms.’ Qu explores potential dystopian scenarios as he argues that, if the end of creating ethical AGI is not achieved, humanity may face a technological end. In this way, current disciplinary divides are driving a society-wide sense of potential doom.


Sunday, July 23, 2023

Mission Impossible - DR Part 1

Watched the movie last night in the theatre, and it was brilliant!

A non-superhero, non-Marvel-comic-bullshit movie. I miss this kind of traditional Hollywood movie.

Kudos to Tom Cruise - spectacular action scenes; a fun 2.5 hours! 

I watched the first MI in Madras back in 1996, before moving to the US; Tom Cruise still looks fit, and he did that bike jump himself.

An active-learning AI is the villain, and that was funny.


If you haven't watched the behind-the-scenes video of that classic bike jump, check it out: