
Thursday, May 7, 2026

Culture - The Word That Fucked Up Our Species

I have written so many times about how almost all atrocities committed against our fellow animal family members are not considered immoral, since people take refuge behind the fucked up excuse of “culture”.

People use culture as macro-level bullshit to avoid focusing on micro-level morality, which is precious for life on earth.

Alex Nowrasteh’s wonderful piece looks at this monster from a different angle. Different angle, but same monster.

The cleanest test is the divided-country natural experiment. North Korea and South Korea share a language, ethnicity, history, and culture up to 1945. One is among the richest countries on earth, the other among the poorest. East and West Germany diverged dramatically under different institutions and converged after reunification. Mainland China stagnated under Mao while Taiwan, Singapore, and Hong Kong prospered, all four sharing Chinese culture. In every case, the culture was identical on both sides of the border. The incentives, shaped by the institutions, are what changed. The outcome followed the institution, not the culture. Untangling causality is difficult, sometimes impossible, but that’s no reason to embrace a false explanation like “the culture made them do it.”

At its root, the culture discourse is anti-intellectual. Culture is a faux explanation for social behavior and outcomes that have real explanations. Think harder. Use AI to search the literature if you have to, because other researchers have probably already written about the issue you claim is just caused by culture. The cultural explanation is the one you reach for when you’ve decided the search isn’t worth your time. Better to remain quiet if culture is the only explanation you’ve got. Here are some examples.

[---]

If a country is poor because of its culture, nobody has to examine the bad incentives facing members of that society. Intellectual laziness explains the rest. Finding the price, the constraint, the institutional mechanism that creates an incentive is hard, but invoking culture as if it’s a magical exogenous decider lets you stop searching. Cultural explanations are cheap to produce, requiring only anecdotes rather than data, prices, or evidence. It feels like an answer because it has the grammatical structure of one. “Japanese people ride trains because of their culture” masquerades as an explanation, but it’s just a tautology.

Culture is endogenous to everything. Claiming culture causes an outcome without first ruling out that the outcome’s causes also produced the culture is circular reasoning. Every cultural explanation must first survive a price, incentive, and institutional audit. Few of them do, but those that do are extraordinary findings, which is perhaps another reason so many claim them. Nobody would let economists get away with explaining a recession or high unemployment with the explanation, “It’s the economy.” We shouldn’t let others get away with the equally lazy non-explanation of “it’s the culture.”

 

Tuesday, May 5, 2026

Derek Parfit - What Is The Impact Of Thousands Of Small Environmental Or Personal Abuses Over Time?

One particular example I’ve always liked (especially since as a kid I had similar thoughts) provides a vivid illustration of the psychology underlying the dismissal of global warming. It shows that the consequences of our decisions need not occur in the distant future for us to discount them. They can occur out of sight or after so many steps as to seem distant. The example (embroidered a bit here) appears in Derek Parfit’s book “Reasons and Persons,” where he discusses the case of a man strapped to a hospital bed, say by a psychopath, in some indeterminate place with electrodes attached to his heart. Rotation of a dial on the other side of the world minutely and imperceptibly increases the current in the electrodes and the stress on the man’s heart.

Perhaps a free piece of candy, a pleasant buzz, and a snapshot with the dial are on offer from a mysterious donor as an incentive to anyone in the distant location who twists the dial. Assuming it takes 10,000 people, each rotating the dial once, to electrocute the victim, what degree of guilt, if any, do we assign to each individual dial-twister? After all, none of the dial-twisters know the poor man in question, nor have they ever been in his part of the world. They might well doubt there is such a man if the situation isn’t clearly communicated to them or if it is ridiculed by a few influential people. Whatever their excuses, however, they are likely to be at least vaguely aware of rumors about the situation. How then do we deposit all these tiny bits of personal guilt into some moral bank account to save the victim? Or do we just shrug and dismiss the significant probability of ordinary indifferent people killing the distant stranger?

The real question of course is, What is the impact of thousands of small environmental or personal abuses over time? In the context of this rather morbid tale of a psychopath, most environmentalists would probably opt to stop rotating the dial or at least to rotate it very infrequently. 

- More Here


Sunday, May 3, 2026

Curiosity Is No Solo Act

The Foucauldian assumption that networks of information precondition ways of thinking, doing, and being has an ancient, rich, and still robust precedent in Indigenous philosophy. Rooted in the wisdom that everything that exists is connected to everything else, Indigenous philosophy foregrounds the vast and complex system of relational networks. While Western philosophy, especially post-Enlightenment, has typically emphasized the individual nodes of knowers and knowns, Indigenous philosophy has consistently contributed to a thinking on the edge, or edgework. (It is not insignificant that the English language is 70 percent nouns, while Potawatomi is 70 percent verbs. Or that Western settlers conceptualize land as private property and commodity capital, while Indigenous peoples understand it as a connective tissue in a larger gift economy.) The difference in ethos between piecemeal and of a piece with could not be more pronounced.

In an Indigenous onto-epistemology, one is always coming to know in intimate relationship with other knowers, including not only community members, but also all the components of the earth itself. In “Braiding Sweetgrass,” Potawatomi botanist Robin Wall Kimmerer tells the story of her own Indigenous curiosity. Growing up surrounded by “shoeboxes of seeds and piles of pressed leaves,” she knew the plants had chosen her. Declaring a botany major in college, she soon learned to stockpile taxonomic names and functional facts, all while letting her capacities to attend to energetic relationships fall into disuse. It was not until rekindling her connections with Indigenous communities — and specifically Indigenous scientists — that she remembered how “intimacy gives us a different way of seeing.” Her scholarship and outreach are now focused on honoring this ray of scientific and social wisdom.

What is perhaps most distinctive about Indigenous philosophy is its imbrication of a relational cosmology with a relational epistemology. At the heart of this worldview is “the eternal convergence of the world within any one thing,” writes Carl Mika, such that “one thing is never alone and all things actively construct and compose it.” From this perspective of deep holism, talk of knowing any one thing is “minimally useful.” As such, knowledge is not properly propositional but instead procedural; it is less concerned with knowing what than with knowing how. And its wisdom lies in “sharing” more than “stating.”

- More Here

Thursday, April 30, 2026

The Social Edge of Intelligence

If AI capability depends on the social complexity of human language production—and if AI deployment systematically reduces that complexity through cognitive offloading, homogenization of creative output, and the elimination of interaction-dense work—then the technology is gradually undermining the conditions for its own advancement. Its successes, rather than failures, create a spiral: a slow attenuation of the very substrate it feeds on, spelling doom.

This is the Social Edge Paradox, and the intellectual tradition it draws from is older and more interdisciplinary than most AI commentary acknowledges.

Michael Tomasello’s evolutionary research establishes that human cognition diverged from that of other primates through something other than superior individual processing power. The real impetus came from the capacity for collaborative activity with shared goals and complementary roles. He argues that even private thought is “fundamentally dialogic and social” in structure—an internalization of interaction patterns. Autonomous neural capacity is far from enough to account for the abilities of human thought.

Robin Dunbar’s social brain hypothesis quantifies the link: neocortex ratios predict social group size across primates; language evolved as a mechanism for managing relationships at scales too large for grooming. Two-thirds of conversation is social, relational, reputational. Language is often mistaken for an information pipe, but it is really a social coordination technology.

My own position is that collective intent engineering, found in forms as familiar as simple brainstorming, accounts for most frontier cognitive expansion. The intelligent algorithms of today have not been built with this critical function in mind.

[---]

The AI industry is telling a story about the future of work that goes roughly like this: automate what can be automated, augment what remains, and trust that the productivity gains will compound into a wealthier, more efficient world.

The Social Edge Framework tells a different story. It says: the intelligence we are automating was never ours alone. It was forged in conversation, argument, institutional friction, and collaborative struggle. It lives in the spaces between people, and it shows up in AI capabilities only because those spaces were rich enough to leave linguistic traces worth learning from.

Every time a company automates an entry-level role, it saves a salary and loses a learning curve, unless it compensates. Every time a knowledge worker delegates a draft to an AI without engaging critically, the statistical thinning of the organizational record advances by an imperceptible increment. Every time an organization mistakes polished output for strategic progress, it consumes cognitive surplus without generating new knowledge.

None of these individual acts is catastrophic. However, their compound effect may be.

The organizations that will thrive in the next decade are not those with the highest AI utilization rates. They are those that understand something the epoch-chaining thought experiment makes vivid: that AI’s capabilities are an inheritance from the complexity of human social life. And inheritances, if consumed without reinvestment, eventually run out. This is particularly critical as AI becomes heavily customized for our organizational culture.

[---]

The Social Edge is more than a metaphor. It is the literal boundary between what AI can do well and what it will keep struggling with due to fundamental internal contradictions. Furthermore, the framework asks us all to pay attention to how the very investment thesis behind AI also contains the seeds of its own failure. And it reminds leaders that AI’s frontier today is set by the richness of the social world that produced the data it learned from.

- More Here



Wednesday, April 29, 2026

The Rise And Fall Of ‘Petty Tyrants’

Petty tyrants are more focused on personal victories than on national priorities. The good news is that they carry within them the seeds of their own destruction. Once we understand their common flaws, it becomes apparent why they eventually fall rapidly from power and leave few lasting changes to government. Understanding this pattern can help us recognize a critical feature that distinguishes leaders who damage their nations from those who create lasting good: their relationship to truth.

[---]

One of the worst mistakes the opposition can make is extending contempt for the tyrant into contempt for the tyrant’s supporters. Most of these supporters sincerely believed that the tyrant would be more likely to solve their problems — often real grievances that the opposition had failed to address. Blaming the supporters denies the reality of the failures and reinforces their support for the tyrant. 

As Napoleon consolidated his power, his critics described the farmers who supported him as “a sack of potatoes” and Parisian workers as having “their minds crammed with vain theories and visionary hopes.” This attitude of condescension made it easier for Napoleon to position his opposition as arrogant elites and himself as the champion of ordinary people.

When the opposition makes it socially acceptable to show contempt for anyone who disagrees, they cooperate with the tyrant in creating a cycle of divisiveness that distracts from reality. That cycle sustains the tyrant’s hold on power. 

[---]

Once they had disabled democracy, these tyrants managed to hold onto power long after their popularity faded. Even removing the tyrant was not a guarantee of short-term success. In the Philippines, democracy has still not fully recovered.

It is much easier to stop the rise of a tyrant than to accelerate their fall. It would have been far better for each nation if the leaders of the opposition had learned from their failures, postponed their short-term ambitions and concentrated on preserving the democracy.

[---]

The legacies of these truth-based leaders have long outlived the leaders themselves, and they continue to benefit us in the 21st century. Bismarck’s social safety nets are still thriving in Germany, and they have been widely copied. Singapore is now a prosperous nation, and a Singaporean passport will get you visa-free entry into more countries than any other. Roosevelt’s Social Security is so successful that politicians on both sides of the aisle now compete to take credit for protecting it.

Look at what endures from these six stories: not the propaganda, the posters and parades, but the institutions that continue to serve their nations decade after decade. The children who are healthy and literate. The elderly and disabled who live in security and dignity. The deposits, safe in the bank. The honest civil services that provide real protections and solve real problems. These are the legacies that matter.

- More Here


Sunday, April 26, 2026

Ideas Of Slavery

Now a new book, John Samuel Harpham’s The Intellectual Origins of American Slavery, asks us to reconsider that standard account of events. Harpham does not discount economic or imperial explanations for the rise of New World slavery; what he suggests, instead, is that those explanations can make sense only within a culture where “slavery was available as an option.” His goal, as he puts it, is to discover “the reasons for which slavery was understood to be a status about which narrow-minded men could make calculations.”

The result is ironic and tragic in the way of the best history. Initially, Harpham claims, the English hesitated to embrace African slavery. Then, when they did, their decision was not based on any perceived racial difference or inferiority. It was based, instead, on something even more troubling: Harpham believes that English people enslaved Africans not because they were seen as different but because they seemed so very similar.

[—]

Harpham’s history reconsiders Jordan’s account of that “unthinking decision.” If the keynote of Jordan’s book was that early English observers saw Africans as different, the keynote of Harpham’s is that English people had a lot of different ideas: about Africa, about Africans, about skin color and about slavery. Nowhere was there broad agreement, he claims, except perhaps about the essence of slavery. But early English ideas about slavery were also different from what we might expect.

Throughout the period when colonial slavery was taking shape, Harpham explains, English writers still relied heavily on a conception of slavery that they inherited from ancient Rome. In contrast to the ancient Greek idea that some people could be “natural slaves,” a view most commonly associated with Aristotle, Roman law defined slavery as the product of convention. Individuals were naturally free, in this view, but could be reduced to slavery if they committed a crime or, more commonly, were captured in war. “In short,” Harpham writes, “slavery arose in Roman law as the result of history rather than nature, as a fact of modern life rather than a timeless feature of the universe.”

Accordingly, the central question for English writers in the late sixteenth and seventeenth centuries was not what qualities made a person a natural slave—a question that might lead to a racial answer—but instead what circumstances allowed for enslavement. The English showed a special interest in this question, Harpham suggests, because they were simultaneously forging a national self-identity based on “the conviction that theirs was a nation dedicated to freedom.” This conviction grew out of internal developments, such as the decline of villeinage (a kind of serfdom), but it also took shape in direct contrast to England’s chief international rivals, the Spanish and the Portuguese.

- More Here


Saturday, April 25, 2026

Rebel - Refuses To Consent To Falseness, Injustice, Or Mediocrity

Rebellion is not merely reactive but creative. It doesn’t only tear down — it seeks to reimagine. Albert Camus understood this when he wrote that “I rebel — therefore we exist.” For Camus, rebellion was the refusal to accept absurdity passively. It was the insistence that life and justice still matter even in a godless world. To rebel, then, is to affirm the possibility of meaning precisely where meaning seems most threatened. It is to insist that one’s freedom and integrity are worth defending, even when doing so brings discomfort or risk.

Rebellion typically begins in solitude but inevitably reaches toward solidarity. The solitary rebel says no to hypocrisy, cruelty, or exploitation; yet the truest form of that no is said on behalf of all. 

[—]

To live rebelliously in this deeper sense requires courage of a particular kind — the courage to trust one’s perception of what is wrong and to act in accordance with one’s conscience. Many people lose meaning because they no longer believe their own perceptions. They feel what is off — at work, in politics, in relationships — but they suppress that intuition in order to get by. Over time, this suppression breeds cynicism and fatigue.

Rebellion restores vitality by reuniting perception with action. It says: “I see what I see, I know what I know, and I will live in truth.” That alignment itself is deeply meaningful.

The pathway of rebellion does not exclude tenderness or humility. The most enduring rebels — figures like Rosa Parks, Mahatma Gandhi, or the many artists and thinkers who defied oppressive norms — rebelled not out of hatred but out of love: love for justice, for humanity, for the sanctity of truth. Rebellion, rightly understood, is a form of devotion. It refuses to let meaning be trampled by fear or conformity. It honors life enough to resist what diminishes it.

For the individual seeking reenchantment, rebellion may take quieter, more personal forms. It might mean refusing to keep up a façade of perpetual busyness or success. It might mean declining to participate in conversations that are mean-spirited or false. It might mean leaving a career that pays well but deadens the heart. In each case, rebellion functions as a reclamation of self. By saying “no” to what is meaningless, one makes room for what is real to appear. The act of refusal becomes the act of awakening.

This pathway, however, carries hazards. A rebel without an anchoring vision and a sense of humanity can become a cynic or destroyer, mistaking constant opposition for depth. To avoid this, it would be wise to tether rebellion to love, to beauty, to some image of the world as it could be. The purpose of rebellion is not to stay angry forever but to clear space for creation, renewal, and joy. Rebellion that remains open-hearted is not corrosive but cleansing; it removes what is false so that truth can breathe again.

In this way, rebellion restores the pulse of meaning through the experience of agency. The disenchantment of modern life often stems from powerlessness — feeling that one’s choices make no difference, that the world is too vast or corrupted to be changed. To rebel, even in a small and symbolic way, is to reclaim a measure of agency. It reignites the sense that one’s voice, one’s actions, one’s very stance toward the world still matter. That sense of mattering is one of the foundations of meaning itself.

Finally, rebellion reenchants because it reconnects us to the moral dimension of existence. It reminds us that life is not neutral or arbitrary but charged with value. Each act of rebellion is, at its core, an assertion of value: this matters; I matter; truth matters. That moral clarity dispels the fog of meaninglessness more effectively than any abstract philosophy. It returns us to the felt conviction that life is worth the trouble, that the struggle itself is vital.

- More Here


Monday, April 20, 2026

The Ideology, Economics, & Psychology Behind The Modern World's Draining Of Color From Homes, Cars, & Everyday Objects.

If you go to slums from Bombay to Brazil to Mexico to Kenya, you will notice a riot of colors. Yes, there is crime there, but most people who live paycheck to paycheck are content and happy.

Color helps psychologically! It’s the “biophilia” of living in a rainforest - it’s one of the least studied simple psychological boosters.

Max’s home is a riot of colors - the living room is yellow, the basement is pink, the bedrooms are other colors - neutral colors are a no-no. I learned this a long time ago, and even my clothing has a variety of colors.

I noticed something weird maybe a year or two ago, 5-plus years after Max passed away - a lot of my new t-shirts and such were greyish… so I cleaned up my wardrobe and brought back color to my life. I had been subconsciously depressed without Max.

Color is the simplest and easiest confidence and psychological booster we have, but alas, we sapiens tend to ignore it.

A very good history of why this transformation from color to grey happened in the US and spread across the globe:

From Hawaii to Maine, from Alaska to Florida, the most popular shade for your home’s exterior is some variation of gray, off-white, beige, or greige — a hue so existentially undecided that it can’t commit to being either gray or beige, and so ends up neither, and both.

But how can this be? America is anything but monochrome. It contains multitudes of cultures, climates, and landscapes, and people who disagree, loudly and publicly, about nearly everything. So why, when Americans need a tin of house paint, do they so often reach for the neutral shelf? Why does the average house in this great and varied nation look like it’s been dipped in a vat of Resigned Indifference®?

The answer is a phenomenon dubbed “the grayening”: a gradual but relentless draining of pigment, not just from exteriors but also from interiors and from the stuff of everyday life, like cars and phones. In 2020, researchers at the Science Museum Group in London found evidence of the trend’s longevity. Feeding roughly 7,000 photographs of everyday objects — kettles, lamps, cameras — from the late 1800s to 2020 into an algorithm, they then asked it to track color distribution over time.

The result: a striking shift toward achromatic — that is, neutral — colors in material culture.
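
Out of curiosity, here is one way a trend like that could be measured. This is a minimal sketch (my illustration, not the Science Museum Group’s actual pipeline), assuming the photographs are already grouped by decade under hypothetical file paths, and using Pillow and NumPy:

```python
# Average HSV saturation per decade as a crude "achromatic-ness" index.
# Lower saturation means closer to gray, white, or black.
from PIL import Image
import numpy as np

def mean_saturation(path):
    """Average saturation of one image, scaled to [0, 1]."""
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float)
    return hsv[..., 1].mean() / 255.0

def saturation_by_decade(images_by_decade):
    """images_by_decade: {1900: ["kettle_1903.jpg", ...], ...} -> per-decade means."""
    return {
        decade: sum(mean_saturation(p) for p in paths) / len(paths)
        for decade, paths in sorted(images_by_decade.items())
    }
```

A steady decline in those per-decade means would be one simple signature of the grayening.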

[---]

In his 1908 essay “Ornament and Crime,” Austrian architect Adolf Loos argued that ornamentation was not merely unnecessary, but a sign of arrested moral development. Truly evolved people, he suggested, would gravitate toward clean lines and plain surfaces. Applied ornament, including the use of color as decoration, didn’t enhance; it cluttered and distracted.

Loos’s polemical target was Art Nouveau, then in full frothy bloom. His arguments were influential on the Bauhaus school of art, which canonized restraint and straight lines. It, in turn, informed the International Style that swept global architecture from the 1930s onward, a style that favored glass, steel, and concrete. All gray: not just by default, but as a statement of seriousness.

Le Corbusier, pioneer of what we now simply call modern architecture, made the point with characteristic charm, declaring that color “is suited to simple races, peasants and savages.” Ouch.

The desaturation didn’t stop at buildings. Car colors have been meticulously catalogued since the dawn of the automotive age, making them a useful proxy for the broader culture’s chromatic pulse. Black had its first heyday as a car color about a century ago, when Henry Ford famously quipped that his Model T was available “in any color the customer wants, as long as it’s black.”

Sunday, April 19, 2026

How Not To Save The Planet

Wendell Berry, one of the few remaining writers in the older topophilic tradition, understands this better than anyone. In 1991, he wrote an essay for the Atlantic—a magazine for which Thoreau had written—in response to the then-common slogan “Think globally, act locally”:

Properly speaking, global thinking is not possible. Those who have “thought globally” (and among them the most successful have been imperial governments and multinational corporations) have done so by means of simplifications too extreme and oppressive to merit the name of thought. Global thinkers have been, and will be, dangerous people.

Global thinking is, for Berry, intrinsically and necessarily destructive of actual places:

Unless one is willing to be destructive on a very large scale, one cannot do something except locally, in a small place…. If we want to put local life in proper relation to the globe, we must do so by imagination, charity, and forbearance, and by making local life as independent and self-sufficient as we can—not by the presumptuous abstractions of “global thought.”

I would add to this that when global thought is not actively destructive it nevertheless tends to encourage depression in those who attempt it—which accounts, I think, for the gloomy and finger-wagging tone to which we have become accustomed.

[---]

This, I think, is an object lesson for those who wish to save the planet. If you would save the planet, forget The Planet; if you would sustain and repair nature, forget Nature. Remember the example of Gilbert White. Think only of the sensual properties of one dear place. If you learn to love a pond or a creek or a valley, then what you love others will love—and will perhaps also come to find some element of their own local environment dear to them, dear enough to conserve and protect. Our obligations arise from our deepest affections. You just have to show them how.

- More Here


Sunday, April 12, 2026

Aristotle & His “Not Even Wrong” Ideas

Unbelievable bullshit - people like Aristotle made shit up without any epistemic humility, but the real issue is that these folks are still respected. Meanwhile, neither the names nor the works of people like Norman Borlaug and Robert Trivers are known to anyone. Well, god bless my species.

In the 4th century BCE, the philosopher Aristotle had two theories about this. He postulated that they hibernated during the winter as other animals did. Swallows, for example, encased themselves in little balls of clay and sank out of sight to the bottom of swamps. His other idea was that the missing species transformed themselves into the birds that did stick around for the winter, and changed back when summer came.

The little old man in de Bergerac’s tale was an imagined Spanish soldier called Domingo Gonsales, and he was the hero of another story. In 1638, just a couple of decades before Cyrano’s “A Voyage to the Moon” became available, the English cleric Francis Godwin published “The Man in the Moone,” a fictional account of Gonsales’ lunar adventure. In the book, Gonsales trained 25 swans to pull an ‘engine’ he had made. One day, he took a jaunt in his swan carriage which happened to coincide with the time birds were accustomed to disappear, as it seemed, from Earth.

Gonsales was about to find out the answer to the mystery. To his surprise, the swans flew upwards, until they reached what we would think of as orbit and became weightless. French scientist Blaise Pascal’s experiments demonstrating the lack of atmosphere in space had not yet filtered through to Godwin, as both birds and man breathed as usual. In 12 days they reached the Moon, where he found other migrating terrestrial birds, such as swallows, nightingales, and woodcocks. When the swans started to show signs of agitation, he divined that they were ready to return to Earth; and so he harnessed them again and sailed home in nine days, gravitational pull on his side.

This was a ripping yarn for sure, but some thought it was a plausible alternative to Aristotle’s theories, especially as there was a Biblical passage that seemed to allude to it. In the King James translation, it goes:

Yea, the stork in the heaven knoweth her appointed times; and the turtle and the crane and the swallow observe the time of their coming (Jeremiah 8:7).


Friday, April 10, 2026

On Steve Jobs

“Having been in Silicon Valley for 50 years, I’m an expert in assholes, okay?” says Guy Kawasaki, Apple’s early developer evangelist. “And 99.9 percent of assholes are egocentric assholes. But Steve is one of the very rare mission-driven assholes. He was driven by a mission to make the greatest computer by the greatest company. And if you got in the way of that, he would run you over. He would run you over, back up, and run you over again.”

[——]

No executive, before or since, has incorporated comedy so memorably into product presentations. When, in 2002, Jobs wanted to cajole an auditorium full of software companies to rewrite their programs for Apple’s new Mac OS X operating system, he staged a full onstage funeral for the outgoing Mac OS 9, complete with a live organist, a eulogy he read himself, and a casket occupied by a four-foot-tall Mac OS 9 box.

[—]

If you encountered Jobs in only one context, you were like one of the blind men in the parable of the elephant. You’d have to have known him for years to see the whole man, and even then you might get a picture that felt fractured or incomplete.

“He was a man of contradictions,” Hertzfeld says. “Almost any adjective you could think of could apply to him at different times.”

- More Here


Wednesday, April 8, 2026

The Irony Of American Righteousness - Reinhold Niebuhr

Reinhold Niebuhr was born in 1892 in Wright City, Missouri. After studying at Yale Divinity School, he began his pastoral work in Detroit in 1915, where he spent thirteen years witnessing the harsh realities of industrial capitalism. Beneath the shadow of Henry Ford’s factories, Niebuhr saw workers exploited and discarded. These experiences shaped his entire theological outlook and dispelled the optimistic Social Gospel theology in which he had been trained.

[---]

At the core of Niebuhr’s ideas is a paradox: human beings can strive for justice but are also prone to injustice. In his key 1944 work The Children of Light and the Children of Darkness, Niebuhr provided what might be the most insightful one-sentence defense of democracy ever written: “Man’s capacity for justice makes democracy possible; but man’s inclination to injustice makes democracy necessary.”

His 1932 book Moral Man and Immoral Society made a key distinction: individuals can sometimes go beyond self-interest through love and reason, but groups almost never do. Collectives like nations, corporations, or movements tend to combine individual selfishness into a “collective egoism” that is far more resistant to moral constraints than any person’s conscience. This idea became his main theme: the danger of self-righteousness. “Ultimately evil is done not so much by evil people,” he warned, “but by good people who do not know themselves and who do not probe deeply.”

[---]

Later, Niebuhr used his theological ideas to analyze American identity. He argued that the United States had developed an “innocent self-image” that made it blind to its own moral faults. America thought it was immune to the corruptions affecting other great powers.

The irony of American history, Niebuhr argued, is that the nation’s virtues turn into its vices. The work ethic that built prosperity becomes worship of money. The faith that held communities together turns into theocratic pretension. The confidence that led to victories in war gives rise to imperial hubris. “No laughter from heaven,” he wrote, “could possibly penetrate through the liturgy of moral self-appreciation.” When political rallies resemble worship services and when a partisan victory is declared to be divine approval, we have entered territory that Niebuhr mapped decades ago.

[---]

Niebuhr famously defined democracy as “a method of finding proximate solutions for insoluble problems.” This straightforward formulation offers both warning and hope. The warning: human problems are never permanently resolved. The hope: even without final solutions, we can develop workable arrangements that balance competing interests and limit concentrated power.

What would Niebuhr advise for our current times? First, humility: truly recognizing that we are limited, flawed, and self-deceived. Second, engaging without self-righteousness: making difficult choices among imperfect options while acknowledging that choosing involves us in the complexities of power. Third, a revival of irony: not cynical detachment, but the ability to see tragedy in victory and grace in defeat. Finally, forgiveness: “the recognition that our actions and attitudes are inevitably seen in a different light by friends and foes than we see them.”

- More Here


Monday, April 6, 2026

The Many Roots Of Our Suffering - Reflections On Robert Trivers (1943–2026)

In March 2026, three prominent thinkers died within a day of each other. Lavish obituaries immediately marked the deaths of the always-wrong environmentalist Paul Ehrlich and the often-obscure political philosopher Jürgen Habermas. But two weeks after the death of Robert Trivers, one of the greatest evolutionary biologists since Charles Darwin, not a single major news source has noticed his passing. This despite Trivers’s singular accomplishment of showing how the endlessly fascinating complexities of human relations are grounded in the wellsprings of complex life. And despite the fact that the man’s life was itself an object of fascination. Trivers was no ordinary academic. He was privileged in upbringing but louche in lifestyle, personally endearing but at times obstreperous and irresponsible, otherworldly brilliant but forehead-slappingly foolish. 

Trivers’s contributions belong in the special category of ideas that are obvious once they are explained, yet eluded great minds for ages; simple enough to be stated in a few words, yet with implications that have busied scientists for decades. In an astonishing creative burst from 1971 to 1975, Trivers wrote five seminal essays that invoked patterns of genetic overlap to explain each of the major human relationships: male with female, parent with child, sibling with sibling, partner with partner, and a person with himself or herself. 

The fallout for science was vast. The fields of sociobiology, evolutionary psychology, behavioural ecology, and Darwinian social science are largely projects that test Trivers’s hypotheses. The ideas took pride of place in E. O. Wilson’s Sociobiology in 1975, Richard Dawkins’s The Selfish Gene in 1976, and many other bestsellers in the next three decades such as Robert Wright’s The Moral Animal (1994) and my own How the Mind Works (1997) and The Blank Slate (2002). In 2007 the ideas earned Trivers the Crafoord Prize, the equivalent of a Nobel for fields not recognised by Nobels.

[—]

In another landmark, Trivers turned to relations among people who are not bound by blood. No one doubts that humans, more than any other species, make sacrifices for nonrelatives. But Trivers recoiled from the romantic notion that people are by nature indiscriminately communal and generous. It’s not true to life, nor is it expected: in evolution as in baseball, nice guys finish last. Instead, he noted, nature provides opportunities for a more discerning form of altruism in the positive-sum exchange of benefits. One animal can help another by grooming, feeding, protecting, or backing him, and is helped in turn when the needs reverse. Everybody wins. 

Trivers called it reciprocal altruism, and noted that it can evolve only in a narrow envelope of circumstances. That is because it is vulnerable to cheaters who accept favours without returning them. The altruistic parties must recognise each other, interact repeatedly, be in a position to confer a large benefit on others at a small cost to themselves, keep a memory for favours offered or denied, and be impelled to reciprocate accordingly. Reciprocal altruism can evolve because cooperators do better than hermits or misanthropes. They enjoy the gains of trading surpluses of food, pulling ticks out of one another’s hair, saving each other from drowning or starvation, and babysitting each other’s children. Reciprocators can also do better over the long run than the cheaters who take favours without returning them, because the reciprocators will come to recognise the cheaters and shun or punish them. 
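
The logic in that paragraph is the same one later formalized in iterated prisoner’s dilemma tournaments. A minimal simulation (my sketch, not Trivers’s model or Pinker’s) of a reciprocator paired with another reciprocator versus paired with an unconditional cheater:

```python
# Iterated prisoner's dilemma: "C" = cooperate (do the favour), "D" = defect.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opponent_history):
    """Extend the first favour, then mirror the partner's last move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    """The cheater: accept favours, never return them."""
    return "D"

def play(strat_a, strat_b, rounds=200):
    score_a = score_b = 0
    moves_a, moves_b = [], []  # each list records that player's own moves
    for _ in range(rounds):
        a, b = strat_a(moves_b), strat_b(moves_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600): reciprocators prosper
print(play(tit_for_tat, always_defect))  # (199, 204): cheating pays once, then dries up
```

A pair of reciprocators ends up far richer than any pairing involving the cheater, which is the sense in which cooperators outcompete misanthropes once favours are remembered and withheld.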

All this was quickly snapped up by game theorists, economists, and political scientists. But in a less-noticed passage, Trivers pointed out its implications for psychology. Reciprocal altruists must be equipped with cognitive faculties to recognise and remember individuals and what they have done. That helps explain why the most social species is also the smartest one; human intelligence evolved to deal with people, not just predators and tools. They also must be equipped with moral emotions that implement the tit-for-tat strategy necessary to stabilise cooperation. Sympathy and trust prompt people to extend the first favour. Gratitude and loyalty prompt them to repay favours. Guilt and shame deter them from hurting or failing to repay others. Anger and contempt prompt them to avoid or punish cheaters. 

And in a passage that even fewer readers noticed, Trivers anticipated a major phenomenon later studied in the guise of “partner choice.” Though it pays both sides in a reciprocal partnership to trade favours as long as each one gains more than he loses, people differ in how much advantage they’ll try to squeeze out of an exchange while leaving it just profitable enough for the partner that he won’t walk away. That’s why not everyone evolves into a rapacious scalper: potential partners can shun them, preferring to deal with someone who offers more generous terms.

[—]

And since humans are language users—indeed, reciprocity may be a big reason language evolved—any tendency of an individual to reciprocate or cheat, lavish or stint, does not have to be witnessed firsthand but can be passed through the grapevine. This leads to an interest in the reputation of others, and a concern with one’s own reputation. 

[—]

But Trivers rapidly spotted what everyone else missed, and still misses, together with the less biologically obvious concept of self-deception, so there must be another piece to the puzzle. During his junior year at Harvard, Trivers suffered two weeks of mania and then a breakdown that hospitalised him for two months. Bipolar disorder afflicted him throughout his life. I can’t help but wonder whether Trivers’s fecund period was driven by episodes of hypomania, when ideas surge and insights suddenly emerge through clouds of bafflement. Gamers sometimes “overclock” their computers, running the CPU at a higher speed than the rated limit, which boosts performance but risks instability and crashes. Did Trivers experience bursts of overclocking in the early 1970s? It would explain another fact about the man that was obvious to anyone who met him later: Trivers reeked of marijuana. His heavy use may have had a source other than his Jamaicaphilia. One wonders whether Trivers was self-medicating, with long-term costs to his clock speed. 

- Steven Pinker


Sunday, April 5, 2026

Frank Lloyd Wright As A Mirror Of The American Condition

The fixation on Wright’s paradoxes obscures a deeper contradiction embedded in the culture that produced him. Namely, that the United States has always been ambivalent about the individual: we valorise self-reliance but distrust those who stand too far apart; we celebrate democratic ideals but are uneasy with idiosyncrasy; we admire originality while punishing the disorder it brings. Wright lived squarely inside that tension. He took seriously the idea that one could make a life and a world from first principles – an act of courage in the best light. Hubris in the worst.

Seen through that lens, Wright becomes less an outlier than a mirror. His contradictions, less personal failings than reflections of the American condition. Our yearning for freedom is matched by our fear of its consequences; our desire for order by our suspicion of conformity; our reverence for the natural world by our relentless reshaping of it. Wright’s work endures because it speaks to these tensions with a force that resists resolution. If we judge him only by his wounds or only by his wonders, we see only half the man – and half the nation that shaped him. The truth, harder and more interesting, is that both are inseparable. His greatness is entangled with his flaws, his vision inseparable from his unruly humanity. To reduce him to saint or sinner is to miss what is most alive in his work: a belief that the individual, in all their contradictions, is still worth building for.

- More Here


Sunday, March 29, 2026

Grounded In Reality Piece On AI Mania

I don’t say that because I think that AI models are bad or because I think they won’t get better; I think that AI models are very good and will get much better. No. The fault is not with the models, but with us. The world is run by humans, and because it’s run by humans—entities that are smelly, oily, irritable, stubborn, competitive, easily frightened, and above all else inefficient—it is a world of bottlenecks. And as long as we have human bottlenecks, we’ll need humans to deal with them: we will have, in other words, complementarity.

People frequently underrate how inefficient things are in practically any domain, and how frequently these inefficiencies are reducible to bottlenecks caused simply by humans being human. Laws and regulations are obvious bottlenecks. But so are company cultures, and tacit local knowledge, and personal rivalries, and professional norms, and office politics, and national politics, and ossified hierarchies, and bureaucratic rigidities, and the human preference to be with other humans, and the human preference to be with particular humans over others, and the human love of narrative and branding, and the fickle nature of human preferences and tastes, and the severely limited nature of human comprehension. And the biggest bottleneck is simply the human resistance to change: the fact that people don’t like shifting what they’re doing. All of these are immensely powerful. Production processes are governed by their least efficient inputs: the more efficient the most efficient inputs, the more important the least efficient inputs.
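
The arithmetic behind that last sentence is easy to see in a toy example (mine, not the author’s, with made-up numbers): a two-step task where AI accelerates one step while the other remains a human bottleneck.

```python
# Total time for a two-step process when only the first step gets faster.
def total_time(ai_step_hours, human_step_hours, ai_speedup):
    return ai_step_hours / ai_speedup + human_step_hours

# Say drafting took 8 hours and human review takes 2. No amount of
# speedup on drafting pushes the task below the 2-hour bottleneck:
for speedup in (1, 10, 100, 1000):
    print(speedup, total_time(8, 2, speedup))  # 10.0, 2.8, 2.08, 2.008
```

Past a 10x speedup, nearly all the remaining time is the human step: the more efficient the efficient input becomes, the more the inefficient one dominates.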

In the long run, we should expect the power of technology to overcome these bottlenecks, in the same way that a river erodes a stone over many years and decades—just as how in the early decades of the twentieth century, the sheer power of what electricity could accomplish gradually overcame the bottlenecks of antiquated factory infrastructure, outdated workflows, and the conservatism of hidebound plant managers. This process, however, takes time: it took decades for electricity, among the most powerful of all general-purpose technologies, to start impacting productivity growth. AI will probably be much faster than that, not least because it can be agentic in a way that electricity cannot. But these bottlenecks are real and important and are obvious if you look at any part of the real world. And as long as those bottlenecks exist, no matter the level of AI capabilities, we should expect a real and powerful complementarity between human labor and AI, simply because the “human plus AI” combination will be more productive than AI alone.

- More Here


Friday, March 27, 2026

Humans Had Dogs Before They Had Farming

By roughly 14,000 years ago, hunter-gatherer societies across Europe had discovered dogs, scientists reported in two new papers, which were published Wednesday in the journal Nature. The studies provide the first definitive genetic evidence that dogs existed during the Paleolithic period, before humans developed agriculture.

The researchers, who used several approaches to analyze DNA extracted from ancient canine specimens, identified Paleolithic dogs at five different archaeological sites in Europe and Western Asia. The oldest of these dogs lived about 15,800 years ago, pushing back the oldest known genetic evidence of dogs by nearly 5,000 years.

These early dogs came from sites that extend from Britain to Turkey, and were associated with several very different hunter-gatherer populations. But the dogs themselves were closely related. Across the five sites, the dogs were more genetically similar than the humans were, the researchers found.

“The people are so different, but the dogs are very much the same,” said Greger Larson, a paleogeneticist at the University of Oxford and one of the authors on both new studies, which were conducted by large, international scientific teams.

The finding suggests that these early human societies were exchanging dogs or acquiring them from one another.

“It is kind of the equivalent of a new blade or a new point or a new kind of material culture or art form or something, where everybody’s getting really excited about having this fun new thing around,” Dr. Larson said. “And it’s useful and it’s interesting and it’s probably cute.”

The research provides new insight into the early history of dogs, as well as the genetic legacy and the interspecies relationship that extends to today.

“It’s really a major step forward in advancing our knowledge of humans and dogs,” said Elaine Ostrander, a canine genomics expert at the National Human Genome Research Institute who was not involved in the research.


- More Here


Sunday, February 15, 2026

There Is No Such Thing As Grand Strategy - The Continued Influence Of A Bad Genre

So this all raises the question: if not grand strategy, then what? If we discard the idea that states possess a coherent, elevated ideological and philosophical design integrating all instruments of power across time, what replaces it? I would simply say that discarding it would provide a far clearer view of what strategy actually is. If we return to Gaddis’s original definition, “the alignment of potentially unlimited aspirations with necessarily limited capabilities,” strategy appears not as a grand design, but as a continual exercise in discipline, prioritization, and adjustment.

[---]

A more realistic approach, then, is to focus on decision points rather than designs. Instead of asking whether a state has a grand strategy, we should ask how it resolves specific tradeoffs at specific moments. Where does it allocate marginal resources? Which risks does it accept, and which does it avoid? Which commitments does it reinforce, and which does it quietly allow to erode? These choices, taken together, tell us far more about strategy than any post hoc narrative of alignment ever could. This reframing also forces greater intellectual honesty about failure. When strategy is imagined as a grand design, failure is attributed to incompetence or moral weakness. When strategy is understood as constraint management, failure is often tragic but explicable. States misjudge adversaries, overestimate capacities, underestimate costs, and act on incomplete information. These are not deviations from strategy; they are the conditions under which strategy exists.

Finally, abandoning the grand strategy genre clarifies what strategic skill actually looks like. It is not the ability to synthesize everything into a single vision, but the capacity to say no, to sequence objectives, and to recognize when ambition has outrun means. It is judgment exercised under uncertainty, not mastery imposed from above. This kind of strategic thinking is less glamorous and far harder to narrate, which is precisely why it is so often displaced by grander abstractions.

There is no higher plane of statecraft waiting to be discovered beyond politics, budgets, institutions, and tradeoffs. What exists instead is the ordinary, difficult work of governance under constraint—choosing among competing priorities, allocating scarce resources, managing risk, and accepting imperfection. Abandoning the language of grand strategy does not mean abandoning strategic thought. It means stripping away a genre that flatters elites and replacing it with analysis that takes politics seriously. Strategy need not be grand to be real. It needs only to be honest.

- More Here


Thursday, February 12, 2026

Culture Is The Mass-Synchronization Of Framings!

This can be good and bad. Hence, I have an aversion to that word - "culture".

The genesis of almost all savagery, ruthlessness, and immorality against animals is so-called culture.

This is an insightful piece on the same topic: 

A mental model is a simulation of "how things might unfold", and we all build and rebuild hundreds of mental models every day. A framing, on the other hand, is "what things exist in the first place", and it is much more stable and subtle. Every mental model is based on some framing, but we tend to be oblivious to which framing we're using most of the time (I've explained all this better in A Framing and Model About Framings and Models).

Framings are the basis of how we think and what we are even able to perceive, and they're the most consequential thing that spreads through a population in what we call "culture".

[---]

Each culture is made of shared framings—ontologies of things that are taken to exist and play a role in mental models—that arose in those same arbitrary but self-reinforcing ways. Anthropologist Joseph Henrich, in The Secret of Our Success, brings up several studies demonstrating the cultural differences in framings.

He mentions studies that estimated the average IQ of Americans in the early 1800's to have been around 70—not because they were dumber, but because their culture at the time was much poorer in sophisticated concepts. Their framings had fewer and less-defined moving parts, which translated into poorer mental models. Other studies found that children in Western countries are brought up with very general and abstract categories for animals, like "fish" and "bird", while children in small-scale societies tend to think in terms of more specific categories, such as "robin" and "jaguar", leading to different ways to understand and interface with the world.

But framings affect more than understanding. They influence how we take in the information from the world around us. Explaining this paper, Henrich writes:

People from different societies vary in their ability to accurately perceive objects and individuals both in and out of context. Unlike most other populations, educated Westerners have an inclination for, and are good at, focusing on and isolating objects or individuals and abstracting properties for these while ignoring background activity or context. Alternatively, expressing this in reverse: Westerners tend not to see objects or individuals in context, attend to relationships and their effects, or automatically consider context. Most other peoples are good at this.

How many connections and interrelations you consider when thinking is in the realm of framings. If your mental ontology treats most things as largely independent and self-sufficient, your mental models will tend to be, for better or worse, more reductionist and less holistic.

[---]

The basic force behind all culture formation is imitation. This ability is innate in all humans, regardless of culture: we are extraordinarily good imitators. Indeed, we are overimitators, sometimes with unfortunate consequences.

Overimitation ... may be distinctively human. For example, although chimpanzees imitate the way conspecifics instrumentally manipulate their environment to achieve a goal, they will copy the behavior only selectively, skipping steps which they recognize as unnecessary [unlike humans, who tend to keep even the unnecessary steps]. ... Once chimpanzees and orangutans have figured out how to solve a problem, they are conservative, sticking to whatever solution they learn first. Humans, in contrast, will often switch to a new solution that is demonstrated by peers, sometimes even switching to less effective strategies under peer influence.

— The Psychology of Normative Cognition, Stanford Encyclopedia of Philosophy, emphasis theirs.

We have a built-in need to do what the people around us do, even when we know of better or less wasteful ways. This means that we can't even explain culture as something that, while starting from chance events, naturally progresses towards better and better behaviors. That's what science is for.

Once the synchronized behaviors are in our systems, when we are habituated to certain shared ways of doing things, these behaviors feed back into our most basic mindsets, which guide our future behaviors, which further affect each other's mindset, and so on, congealing into the shared framings we call culture, i.e.: whatever happens to give the least friction in whatever happens to be the current shared behavioral landscape.

This is why, often, formal rules and laws do indeed take root in a culture: not because they're rules, but because the way they are enforced creates enough friction—or following them creates enough mutual benefits—that, like in the corridor lanes, crowds will settle into following them. This is also why, perhaps even more often, groups will settle into the easy "unruly" patterns.
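
The corridor-lane dynamic is simple enough to simulate. A minimal sketch (my illustration, not the essay’s) in which agents start from a random split between two equally good options and then repeatedly imitate a sample of their peers:

```python
import random

def simulate(n_agents=100, n_rounds=50, sample_size=5):
    """Fraction choosing "left" after rounds of majority imitation."""
    choices = [random.choice(["left", "right"]) for _ in range(n_agents)]
    for _ in range(n_rounds):
        new_choices = []
        for _ in range(n_agents):
            peers = random.sample(choices, sample_size)
            # copy whatever most of the sampled peers did last round
            new_choices.append(max(set(peers), key=peers.count))
        choices = new_choices
    return choices.count("left") / n_agents

# Run it a few times: the population tends to settle on one option or the
# other. Which one wins is chance, but the winner is then self-reinforcing.
print([simulate() for _ in range(5)])
```

Nothing makes "left" better than "right"; the convergence comes entirely from imitation, which is the arbitrary-but-self-reinforcing character of shared framings.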


 

Thursday, February 5, 2026

Akrasia!

Sometimes a simple word explains so much about humanity. 

Akrasia is a Greek word; from its wiki page:

Akrasia refers to the phenomenon of acting against one's better judgment—the state in which one intentionally performs an action while simultaneously believing that a different course of action would be better. Sometimes translated as "weakness of will" or "incontinence," akrasia describes the paradoxical human experience of knowingly choosing what one judges to be the inferior option.

Where do I even start unpacking this :-) ? There are so many people who are akratic in some of the fundamental elements of their lives. I mean the core of existence.

Paul's piece is about the stupidity of free-soloing, his accident, and finally how he learned from his akratic traits and is now - the best part - rebuilding his life with a cat named Koshka.

For the record, I skipped the akratic segment and went straight to Max :-); man, what a decision that was! Thank god, for once my prefrontal cortex helped me.

Precisely because free soloing is selfish and stupid, it is a controversial topic even amongst climbers. The vast majority of free climbers do not free solo. Some of my closest climbing partners would commit to doing very serious traditional climbing routes, and yet firmly draw the line at soloing. (And trad climbing definitely is serious, as proved by the cripple voice dictating these words.) They told me bluntly that I should never do it, and they didn’t like hearing about it when I had done it. So why did I do it?

There is an ancient Greek term, akrasia. It is sometimes translated as “weakness of will” – although I don’t like that translation, because it already narrows and contorts the field in ways that distort reflection. Nonetheless, akrasia refers to situations in which a person apparently acts against their own professed best judgement. For example, the student who knows that the best thing to do is stay home and prepare for tomorrow’s exam (the outcome of which is crucial to her final grade), and yet who nonetheless goes to the party and gets drunk. She knows and agrees and affirms that the best thing for her to do is to stay home and revise. But she not only does something else, she does it when she herself knows and agrees and affirms that it is a worse thing for her to do. She is akratic. We all are, sometimes.

But the stakes of akrasia are not always the same.

[---]

On the way down, I texted my friend and told him what I had just done. He told me that I was a fucking idiot. I didn’t care. Sometimes you just have to go to the party, even when you know you shouldn’t. And whether you ultimately regret going will depend on more than just the fact that you went. Akrasia is a bird of many feathers.

[---]

But then I try to watch my anger, notice it – and let it slip away. Fair doesn’t come into it. It never did, and it never will. Such anger leads to nothing worth keeping. This week I adopted a cat. I’ve named her Koshka. You rebuild a life, one brick at a time.