Showing posts with label Psychology. Show all posts

Thursday, May 7, 2026

Culture - The Word That Fucked Up Our Species

I have written so many times about how almost all atrocities committed against our fellow animal family members are not considered immoral, because people hide behind the fucked-up excuse of “culture”. 

People use culture as macro bullshit to avoid focusing on micro morality, which is precious for life on earth. 

Alex Nowrasteh’s wonderful piece looks at this monster from a different angle. Different angle, but same monster. 

The cleanest test is the divided-country natural experiment. North Korea and South Korea share a language, ethnicity, history, and culture up to 1945. One is among the richest countries on earth, the other among the poorest. East and West Germany diverged dramatically under different institutions and converged after reunification. Mainland China stagnated under Mao while Taiwan, Singapore, and Hong Kong prospered, all four sharing Chinese culture. In every case, the culture was identical on both sides of the border. The incentives, shaped by the institutions, are what changed. The outcome followed the institution, not the culture. Untangling causality is difficult, sometimes impossible, but that’s no reason to embrace a false explanation like “the culture made them do it.”

At its root, the culture discourse is anti-intellectual. Culture is a faux explanation for social behavior and outcomes that have real explanations. Think harder. Use AI to search the literature if you have to because other researchers have probably already written about the issue you claim is just caused by culture. The cultural explanation is the one you reach for when you’ve decided the search isn’t worth your time. Better to remain quiet if culture is the only explanation you’ve got. Here are some examples.

[---]

If a country is poor because of its culture, nobody has to examine the bad incentives facing members of that society. Intellectual laziness explains the rest. Finding the price, the constraint, the institutional mechanism that creates an incentive is hard, but invoking culture as if it’s a magical exogenous decider lets you stop searching. Cultural explanations are cheap to produce, requiring only anecdotes rather than data, prices, or evidence. It feels like an answer because it has the grammatical structure of one. “Japanese people ride trains because of their culture” masquerades as an explanation, but it’s just a tautology.

Culture is endogenous to everything. Claiming culture causes an outcome without first ruling out that the outcome’s causes also produced the culture is circular reasoning. Every cultural explanation must first survive a price, incentive, and institutional audit. Few of them do, but those that do are extraordinary findings, which is perhaps another explanation for why so many claim it. Nobody would let economists get away with explaining a recession with high unemployment by saying, “It’s the economy.” We shouldn’t let others get away with the equally lazy non-explanation of “it’s the culture.”

 

Thursday, April 30, 2026

The Social Edge of Intelligence

If AI capability depends on the social complexity of human language production—and if AI deployment systematically reduces that complexity through cognitive offloading, homogenization of creative output, and the elimination of interaction-dense work—then the technology is gradually undermining the conditions for its own advancement. Its successes, rather than failures, create a spiral: a slow attenuation of the very substrate it feeds on, spelling doom.

This is the Social Edge Paradox, and the intellectual tradition it draws from is older and more interdisciplinary than most AI commentary acknowledges.

Michael Tomasello’s evolutionary research establishes that human cognition diverged from other primates by a process other than superior individual processing power. The real impetus came through the capacity for collaborative activity with shared goals and complementary roles. He argues that even private thought is “fundamentally dialogic and social” in structure—an internalization of interaction patterns. Autonomous neural capacity is far from enough to account for the abilities of human thought.

Robin Dunbar’s social brain hypothesis quantifies the link: neocortex ratios predict social group size across primates; language evolved as a mechanism for managing relationships at scales too large for grooming. Two-thirds of conversation is social, relational, reputational. Language is often mistaken as an information pipe, but it is really a social coordination technology.

My own position is that collective intent engineering, found in forms as familiar as simple brainstorming, accounts for most frontier cognitive expansion. The intelligent algorithms of today have not been built with this critical function in mind.

[---]

The AI industry is telling a story about the future of work that goes roughly like this: automate what can be automated, augment what remains, and trust that the productivity gains will compound into a wealthier, more efficient world.

The Social Edge Framework tells a different story. It says: the intelligence we are automating was never ours alone. It was forged in conversation, argument, institutional friction, and collaborative struggle. It lives in the spaces between people, and it shows up in AI capabilities only because those spaces were rich enough to leave linguistic traces worth learning from.

Every time a company automates an entry-level role, it saves a salary and loses a learning curve, unless it compensates. Every time a knowledge worker delegates a draft to an AI without engaging critically, the statistical thinning of the organizational record advances by an imperceptible increment. Every time an organization mistakes polished output for strategic progress, it consumes cognitive surplus without generating new knowledge.

None of these individual acts is catastrophic. However, their compound effect may be.

The organizations that will thrive in the next decade are not those with the highest AI utilization rates. They are those that understand something the epoch-chaining thought experiment makes vivid: that AI’s capabilities are an inheritance from the complexity of human social life. And inheritances, if consumed without reinvestment, eventually run out. This is particularly critical as AI becomes heavily customized for our organizational culture.

[---]

The Social Edge is more than a metaphor. It is the literal boundary between what AI can do well and what it will keep struggling with due to fundamental internal contradictions. Furthermore, the framework asks us all to pay attention to how the very investment thesis behind AI also contains the seeds of its own failure. And it reminds leaders that AI’s frontier today is set by the richness of the social world that produced the data it learned from.

- More Here



Wednesday, April 29, 2026

The Rise And Fall Of ‘Petty Tyrants’

Petty tyrants are more focused on personal victories than on national priorities. The good news is that they carry within them the seeds of their own destruction. Once we understand their common flaws, it becomes apparent why they eventually fall rapidly from power and leave few changes to government that last. Understanding this pattern can help us recognize a critical feature that distinguishes leaders who damage their nations from those who create lasting good: their relationship to truth.

[---]

One of the worst mistakes the opposition can make is extending contempt for the tyrant into contempt for the tyrant’s supporters. Most of these supporters sincerely believed that the tyrant would be more likely to solve their problems — often real grievances that the opposition had failed to address. Blaming the supporters denies the reality of the failures and reinforces their support for the tyrant. 

As Napoleon consolidated his power, his critics described the farmers who supported him as “a sack of potatoes” and Parisian workers as having “their minds crammed with vain theories and visionary hopes.” This attitude of condescension made it easier for Napoleon to position his opposition as arrogant elites and himself as the champion of ordinary people.

When the opposition makes it socially acceptable to show contempt for anyone who disagrees, they cooperate with the tyrant in creating a cycle of divisiveness that distracts from reality. That cycle sustains the tyrant’s hold on power. 

[---]

Once they had disabled democracy, these tyrants managed to hold onto power long after their popularity faded. Even removing the tyrant was not a guarantee of short-term success. In the Philippines, democracy has still not fully recovered.

It is much easier to stop the rise of a tyrant than to accelerate their fall. It would have been far better for each nation if the leaders of the opposition had learned from their failures, postponed their short-term ambitions and concentrated on preserving the democracy.

[---]

The legacies of these truth-based leaders have long outlived the leaders themselves, and they continue to benefit us in the 21st century. Bismarck’s social safety nets are still thriving in Germany, and they have been widely copied. Singapore is now a prosperous nation, and a Singaporean passport will get you visa-free entry into more countries than any other. Roosevelt’s Social Security is so successful that politicians on both sides of the aisle now compete to take credit for protecting it.

Look at what endures from these six stories: not the propaganda, the posters and parades, but the institutions that continue to serve their nations decade after decade. The children who are healthy and literate. The elderly and disabled who live in security and dignity. The deposits, safe in the bank. The honest civil services that provide real protections and solve real problems. These are the legacies that matter.

- More Here


Sunday, April 26, 2026

Ideas Of Slavery

Now a new book, John Samuel Harpham’s The Intellectual Origins of American Slavery, asks us to reconsider that standard account of events. Harpham does not discount economic or imperial explanations for the rise of New World slavery; what he suggests, instead, is that those explanations can make sense only within a culture where “slavery was available as an option.” His goal, as he puts it, is to discover “the reasons for which slavery was understood to be a status about which narrow-minded men could make calculations.”

The result is ironic and tragic in the way of the best history. Initially, Harpham claims, the English hesitated to embrace African slavery. Then, when they did, their decision was not based on any perceived racial difference or inferiority. It was based, instead, on something even more troubling: Harpham believes that English people enslaved Africans not because they were seen as different but because they seemed so very similar.

[—]

Harpham’s history reconsiders Jordan’s account of that “unthinking decision.” If the keynote of Jordan’s book was that early English observers saw Africans as different, the keynote of Harpham’s is that English people had a lot of different ideas: about Africa, about Africans, about skin color and about slavery. Nowhere was there broad agreement, he claims, except perhaps about the essence of slavery. But early English ideas about slavery were also different from what we might expect.

Throughout the period when colonial slavery was taking shape, Harpham explains, English writers still relied heavily on a conception of slavery that they inherited from ancient Rome. In contrast to the ancient Greek idea that some people could be “natural slaves,” a view most commonly associated with Aristotle, Roman law defined slavery as the product of convention. Individuals were naturally free, in this view, but could be reduced to slavery if they committed a crime or, more commonly, were captured in war. “In short,” Harpham writes, “slavery arose in Roman law as the result of history rather than nature, as a fact of modern life rather than a timeless feature of the universe.”

Accordingly, the central question for English writers in the late sixteenth and seventeenth centuries was not what qualities made a person a natural slave—a question that might lead to a racial answer—but instead what circumstances allowed for enslavement. The English showed a special interest in this question, Harpham suggests, because they were simultaneously forging a national self-identity based on “the conviction that theirs was a nation dedicated to freedom.” This conviction grew out of internal developments, such as the decline of villeinage (a kind of serfdom), but it also took shape in direct contrast to England’s chief international rivals, the Spanish and the Portuguese.

- More Here


Saturday, April 25, 2026

Rebel - Refuses To Consent To Falseness, Injustice, Or Mediocrity

Rebellion is not merely reactive but creative. It doesn’t only tear down — it seeks to reimagine. Albert Camus understood this when he wrote that “I rebel — therefore we exist.” For Camus, rebellion was the refusal to accept absurdity passively. It was the insistence that life and justice still matter even in a godless world. To rebel, then, is to affirm the possibility of meaning precisely where meaning seems most threatened. It is to insist that one’s freedom and integrity are worth defending, even when doing so brings discomfort or risk.

Rebellion typically begins in solitude but inevitably reaches toward solidarity. The solitary rebel says no to hypocrisy, cruelty, or exploitation; yet the truest form of that no is said on behalf of all. 

[—]

To live rebelliously in this deeper sense requires courage of a particular kind — the courage to trust one’s perception of what is wrong and to act in accordance with one’s conscience. Many people lose meaning because they no longer believe their own perceptions. They feel what is off — at work, in politics, in relationships — but they suppress that intuition in order to get by. Over time, this suppression breeds cynicism and fatigue.

Rebellion restores vitality by reuniting perception with action. It says: “I see what I see, I know what I know, and I will live in truth.” That alignment itself is deeply meaningful.

The pathway of rebellion does not exclude tenderness or humility. The most enduring rebels — figures like Rosa Parks, Mahatma Gandhi, or the many artists and thinkers who defied oppressive norms — rebelled not out of hatred but out of love: love for justice, for humanity, for the sanctity of truth. Rebellion, rightly understood, is a form of devotion. It refuses to let meaning be trampled by fear or conformity. It honors life enough to resist what diminishes it.

For the individual seeking reenchantment, rebellion may take quieter, more personal forms. It might mean refusing to keep up a façade of perpetual busyness or success. It might mean declining to participate in conversations that are mean-spirited or false. It might mean leaving a career that pays well but deadens the heart. In each case, rebellion functions as a reclamation of self. By saying “no” to what is meaningless, one makes room for what is real to appear. The act of refusal becomes the act of awakening.

This pathway, however, carries hazards. A rebel without an anchoring vision and a sense of humanity can become a cynic or destroyer, mistaking constant opposition for depth. To avoid this, it would be wise to tether rebellion to love, to beauty, to some image of the world as it could be. The purpose of rebellion is not to stay angry forever but to clear space for creation, renewal, and joy. Rebellion that remains open-hearted is not corrosive but cleansing; it removes what is false so that truth can breathe again.

In this way, rebellion restores the pulse of meaning through the experience of agency. The disenchantment of modern life often stems from powerlessness — feeling that one’s choices make no difference, that the world is too vast or corrupted to be changed. To rebel, even in a small and symbolic way, is to reclaim a measure of agency. It reignites the sense that one’s voice, one’s actions, one’s very stance toward the world still matter. That sense of mattering is one of the foundations of meaning itself.

Finally, rebellion reenchants because it reconnects us to the moral dimension of existence. It reminds us that life is not neutral or arbitrary but charged with value. Each act of rebellion is, at its core, an assertion of value: this matters; I matter; truth matters. That moral clarity dispels the fog of meaninglessness more effectively than any abstract philosophy. It returns us to the felt conviction that life is worth the trouble, that the struggle itself is vital.

- More Here


Monday, April 20, 2026

The Ideology, Economics, & Psychology Behind The Modern World's Draining Of Color From Homes, Cars, & Everyday Objects

If you go to slums from Bombay to Brazil to Mexico to Kenya, you will notice a riot of colors. Yes, there is crime there, but most people who live paycheck to paycheck are content and happy. 

Color helps psychologically! It’s the “biophilia” of living in a rainforest - it’s one of the least studied simple psychological boosters. 

Max’s home is a riot of colors - the living room is yellow, the basement is pink, the bedrooms are other colors - a firm no to neutral colors. I learned this a long time ago, and even my clothes come in a variety of colors. 

I noticed something weird maybe a year or two ago, five-plus years after Max passed away - a lot of my new t-shirts and such were greyish… so I cleaned out my wardrobe and brought color back into my life. I was subconsciously depressed without Max. 

Color is the simplest and easiest confidence and psychological booster we have, but alas, we sapiens tend to ignore it. 

A very good history of why this transformation from color to grey happened in the US and spread across the globe: 

From Hawaii to Maine, from Alaska to Florida, the most popular shade for your home’s exterior is some variation of gray, off-white, beige, or greige — a hue so existentially undecided that it can’t commit to being either gray or beige, and so ends up neither, and both.

But how can this be? America is anything but monochrome. It contains multitudes of cultures, climates, and landscapes, and people who disagree, loudly and publicly, about nearly everything. So why, when Americans need a tin of house paint, do they so often reach for the neutral shelf? Why does the average house in this great and varied nation look like it’s been dipped in a vat of Resigned Indifference®?

The answer is a phenomenon dubbed “the grayening”: a gradual but relentless draining of pigment, not just from exteriors but also from interiors and from the stuff of everyday life, like cars and phones. In 2020, researchers at the Science Museum Group in London found evidence of the trend’s longevity. Feeding roughly 7,000 photographs of everyday objects — kettles, lamps, cameras — from the late 1800s to 2020 into an algorithm, they then asked it to track color distribution over time.

The result: a striking shift toward achromatic — that is, neutral — colors in material culture.
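Out of curiosity, here is a minimal sketch of how an analysis like the one described above could work, in plain Python. The `saturation` helper, the 0.15 threshold, and the toy “kettle” data are my own illustrative assumptions, not the Science Museum Group’s actual method:

```python
def saturation(r, g, b):
    """HSV-style saturation of one RGB pixel, in [0, 1]."""
    hi, lo = max(r, g, b), min(r, g, b)
    return 0.0 if hi == 0 else (hi - lo) / hi

def achromatic_fraction(pixels, threshold=0.15):
    """Share of pixels whose saturation falls below `threshold`."""
    flags = [saturation(*p) < threshold for p in pixels]
    return sum(flags) / len(flags)

def trend_by_year(images):
    """images: list of (year, pixels). Returns {year: mean achromatic share}."""
    totals = {}
    for year, pixels in images:
        totals.setdefault(year, []).append(achromatic_fraction(pixels))
    return {y: sum(v) / len(v) for y, v in sorted(totals.items())}

# Example: a saturated red object vs. a grey one.
red_kettle = [(200, 30, 30)] * 10
grey_kettle = [(120, 120, 125)] * 10
print(trend_by_year([(1950, red_kettle), (2020, grey_kettle)]))
# {1950: 0.0, 2020: 1.0}
```

Run over a large dated photo corpus, a rising curve from this kind of metric is what “a striking shift toward achromatic colors” would look like in the data.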

[---]

In his 1908 essay “Ornament and Crime,” Austrian architect Adolf Loos argued that ornamentation was not merely unnecessary, but a sign of arrested moral development. Truly evolved people, he suggested, would gravitate toward clean lines and plain surfaces. Applied ornament, including the use of color as decoration, didn’t enhance; it cluttered and distracted.

Loos’s polemical target was Art Nouveau, then in full frothy bloom. His arguments were influential on the Bauhaus school of art, which canonized restraint and straight lines. It, in turn, informed the International Style that swept global architecture from the 1930s onward, a style that favored glass, steel, and concrete. All gray: not just by default, but as a statement of seriousness.

Le Corbusier, pioneer of what we now simply call modern architecture, made the point with characteristic charm, declaring that color “is suited to simple races, peasants and savages.” Ouch.

The desaturation didn’t stop at buildings. Car colors have been meticulously catalogued since the dawn of the automotive age, making them a useful proxy for the broader culture’s chromatic pulse. Black had its first heyday as a car color about a century ago, when Henry Ford famously quipped that his Model T was available “in any color the customer wants, as long as it’s black.”

Sunday, April 5, 2026

Frank Lloyd Wright As A Mirror Of The American Condition

The fixation on Wright’s paradoxes obscures a deeper contradiction embedded in the culture that produced him. Namely, that the United States has always been ambivalent about the individual: we valorise self-reliance but distrust those who stand too far apart; we celebrate democratic ideals but are uneasy with idiosyncrasy; we admire originality while punishing the disorder it brings. Wright lived squarely inside that tension. He took seriously the idea that one could make a life and a world from first principles – an act of courage in the best light. Hubris in the worst.

Seen through that lens, Wright becomes less an outlier than a mirror. His contradictions, less personal failings than reflections of the American condition. Our yearning for freedom is matched by our fear of its consequences; our desire for order by our suspicion of conformity; our reverence for the natural world by our relentless reshaping of it. Wright’s work endures because it speaks to these tensions with a force that resists resolution. If we judge him only by his wounds or only by his wonders, we see only half the man – and half the nation that shaped him. The truth, harder and more interesting, is that both are inseparable. His greatness is entangled with his flaws, his vision inseparable from his unruly humanity. To reduce him to saint or sinner is to miss what is most alive in his work: a belief that the individual, in all their contradictions, is still worth building for.

- More Here


Sunday, March 29, 2026

Grounded In Reality Piece On AI Mania

I don’t say that because I think that AI models are bad or because I think they won’t get better; I think that AI models are very good and will get much better. No. The fault is not with the models, but with us. The world is run by humans, and because it’s run by humans—entities that are smelly, oily, irritable, stubborn, competitive, easily frightened, and above all else inefficient—it is a world of bottlenecks. And as long as we have human bottlenecks, we’ll need humans to deal with them: we will have, in other words, complementarity.

People frequently underrate how inefficient things are in practically any domain, and how frequently these inefficiencies are reducible to bottlenecks caused simply by humans being human. Laws and regulations are obvious bottlenecks. But so are company cultures, and tacit local knowledge, and personal rivalries, and professional norms, and office politics, and national politics, and ossified hierarchies, and bureaucratic rigidities, and the human preference to be with other humans, and the human preference to be with particular humans over others, and the human love of narrative and branding, and the fickle nature of human preferences and tastes, and the severely limited nature of human comprehension. And the biggest bottleneck is simply the human resistance to change: the fact that people don’t like shifting what they’re doing. All of these are immensely powerful. Production processes are governed by their least efficient inputs: the more efficient the most efficient inputs, the more important the least efficient inputs.

In the long run, we should expect the power of technology to overcome these bottlenecks, in the same way that a river erodes a stone over many years and decades—just as how in the early decades of the twentieth century, the sheer power of what electricity could accomplish gradually overcame the bottlenecks of antiquated factory infrastructure, outdated workflows, and the conservatism of hidebound plant managers. This process, however, takes time: it took decades for electricity, among the most powerful of all general-purpose technologies, to start impacting productivity growth. AI will probably be much faster than that, not least because it can be agentic in a way that electricity cannot. But these bottlenecks are real and important and are obvious if you look at any part of the real world. And as long as those bottlenecks exist, no matter the level of AI capabilities, we should expect a real and powerful complementarity between human labor and AI, simply because the “human plus AI” combination will be more productive than AI alone.

- More Here


Saturday, March 28, 2026

The Fascinating Insights Of Robert Trivers

Trivers was one of the most—perhaps the most—influential evolutionary biologists of the 20th century. His work should be much more widely known in social and behavioural sciences, in particular in economics, as Trivers’ intellectual approach is very much in line with a game theoretic understanding of social interactions.

It is hard to overstate the importance of his work. Einstein famously published four groundbreaking papers in 1905, a year often referred to as his “Annus mirabilis”, during which he revolutionised physics. Trivers might be said to have had a “Quinquennium Mirabile” for the five years between 1971 and 1976, during which he produced a series of ideas that revolutionised evolutionary biology.

Reciprocal altruism - 1971:

The human altruistic system is a sensitive, unstable one. Often it will pay to cheat: namely, when the partner will not find out, when he will not discontinue his altruism even if he does find out, or when he is unlikely to survive long enough to reciprocate adequately. And the perception of subtle cheating may be very difficult. Given this unstable character of the system, where a degree of cheating is adaptive, natural selection will rapidly favor a complex psychological system in each individual regulating both his own altruistic and cheating tendencies and his responses to these tendencies in others. As selection favors subtler forms of cheating, it will favor more acute abilities to detect cheating.
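For readers who want to see the game-theoretic logic Trivers is gesturing at, here is a toy iterated prisoner’s dilemma in Python. This is the standard textbook setup, not Trivers’ own formalism; the payoff values and strategy names are my assumptions:

```python
PAYOFF = {  # (my move, their move) -> my payoff; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(history):
    """Cooperate first, then copy the partner's last move."""
    return "C" if not history else history[-1][1]

def always_defect(history):
    return "D"

def play(strat_a, strat_b, rounds):
    """Total payoffs for two strategies over repeated interactions."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append((a, b))
        hist_b.append((b, a))
    return score_a, score_b

print(play(tit_for_tat, always_defect, 1))   # (0, 5) - one-shot: cheating pays
print(play(tit_for_tat, tit_for_tat, 10))    # (30, 30) - mutual cooperation
print(play(tit_for_tat, always_defect, 10))  # (9, 14) - reciprocator cuts losses
```

In a single encounter the cheater comes out ahead, but over repeated interactions reciprocators prosper together while cheaters get cut off, which is exactly the unstable balance between altruism and cheating Trivers describes.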

Parental investment - 1972:

Since the female already invests more than the male, breeding failure for lack of an additional investment selects more strongly against her than against the male. In that sense, her initial very great investment commits her to additional investment more than the male’s initial slight investment commits him.

[—]

Critics of evolutionary theory sometimes argue that it does not make any predictions that can be tested and that it only rationalises what has already been observed. Trivers’ work is one of the best examples disproving this accusation. In his paper on parental investment, Trivers argues that the differences in behaviour between males and females should reflect the degree of asymmetry in their parental investment. As a result, animals with more parental investment asymmetry should show greater behavioural asymmetry than those with less, and if we ever find animals with role reversals, we should also observe reversals in strategies. And indeed, we observe that in animals with less asymmetry in parental investment, like swans, the differences between males and females are less noticeable. In the rare cases where male investments are larger, like in seahorses, where the females literally place their eggs in the belly of the male who incubates them, we observe a role reversal, with females courting males and competing for access to them.

Parent Offspring Conflict - 1974:

The offspring can cry not only when it is famished but also when it merely wants more food than the parent is selected to give. Likewise, it can begin to withhold its smile until it has gotten its way. Selection will then of course favor parental ability to discriminate the two uses of the signals, but still subtler mimicry and deception by the offspring are always possible.

[---]

Obviously, overall parents tend to love their children and children tend to love their parents, but Trivers showed—with a theory now largely supported by empirical research— that the whole picture is more complex, because there are always also elements of conflict in parent-offspring relations.

Self-deception - 1976:

In the preface to Dawkins’ The Selfish Gene, Robert Trivers proposed a solution to this problem: our tendency to self-deceive, to think we are better than we are, may serve as a mechanism that enables us to deceive others more effectively. He wrote:

If … deceit is fundamental to animal communication, then there must be strong selection to spot deception and this ought, in turn, to select for a degree of self-deception, rendering some facts and motives unconscious so as not to betray – by the subtle signs of self-knowledge – the deception being practiced. —Trivers (1976)

Commenting on this assertion, psychologist Steven Pinker remarked, “This sentence... might have the highest ratio of profundity to words in the history of the social sciences.”

[---]

In a 2011 paper with Bill von Hippel, Trivers developed this idea further, listing how self-deception can help. When trying to deceive, people may face cognitive load (the cognitive work required to make sure a web of lies does not have glaring contradictions). Given that lying is a betrayal of trust and is sanctioned when it is found out, it is risky, and people can get nervous about being found out, possibly showing signs of nervousness. Finally, people might try to mask signs of nervousness, thereby also behaving in a way that indirectly suggests lying. Self-deception, by inducing people to believe in their own lies, so to speak, can eliminate these possible clues while leading others to believe the preferred story of the person self-deceiving.

Trivers’ theory of self-deception has been supported by empirical research (including research I have contributed to). It explains what seems to be one of the most irrational patterns of human behaviour as emerging from strategic incentives.

Trivers has been one of the most influential evolutionary biologists, and his papers are still worth reading today. His insights, published more than 50 years ago, are fascinating. They often align very well with economic theories of behaviour, and it is therefore regrettable that his ideas are not more well-known in economics, and in particular in behavioural economics.

A key feature of Trivers’ take across these contributions was to see that beneath the world of social interactions we observe, there are deep structures in terms of incentives that shape the game we play. Understanding these games and their structures helps us make sense of the seemingly endless complexity of human psychology and social dynamics. In several key contributions, Trivers helped lift the veil on the underlying logic of human behaviour.

- More Here


Sunday, February 15, 2026

There Is No Such Thing As Grand Strategy - The Continued Influence Of A Bad Genre

So this all begs the question, if not grand strategy, then what? If we discard the idea that states possess a coherent, elevated ideological and philosophical design integrating all instruments of power across time, what replaces it? I would simply say that doing so would provide a far clearer view of what strategy actually is. If we return to Gaddis’s original definition, “the alignment of potentially unlimited aspirations with necessarily limited capabilities,” strategy appears not as a grand design, but as a continual exercise in discipline, prioritization, and adjustment.

[---]

A more realistic approach, then, is to focus on decision points rather than designs. Instead of asking whether a state has a grand strategy, we should ask how it resolves specific tradeoffs at specific moments. Where does it allocate marginal resources? Which risks does it accept, and which does it avoid? Which commitments does it reinforce, and which does it quietly allow to erode? These choices, taken together, tell us far more about strategy than any post hoc narrative of alignment ever could. This reframing also forces greater intellectual honesty about failure. When strategy is imagined as a grand design, failure is attributed to incompetence or moral weakness. When strategy is understood as constraint management, failure is often tragic but explicable. States misjudge adversaries, overestimate capacities, underestimate costs, and act on incomplete information. These are not deviations from strategy; they are the conditions under which strategy exists.

Finally, abandoning the grand strategy genre clarifies what strategic skill actually looks like. It is not the ability to synthesize everything into a single vision, but the capacity to say no, to sequence objectives, and to recognize when ambition has outrun means. It is judgment exercised under uncertainty, not mastery imposed from above. This kind of strategic thinking is less glamorous and far harder to narrate, which is precisely why it is so often displaced by grander abstractions.

There is no higher plane of statecraft waiting to be discovered beyond politics, budgets, institutions, and tradeoffs. What exists instead is the ordinary, difficult work of governance under constraint—choosing among competing priorities, allocating scarce resources, managing risk, and accepting imperfection. Abandoning the language of grand strategy does not mean abandoning strategic thought. It means stripping away a genre that flatters elites and replacing it with analysis that takes politics seriously. Strategy need not be grand to be real. It needs only to be honest.

- More Here


Friday, February 13, 2026

No-Technological-Solution Problem

Bingo! What an insight!

We sapiens fucked things up, are still fucking things up, and promise to continue fucking things up in the future.

Changing our minds and behavior is not in the equation, but my species is planning to innovate the fuck out of technologies to clean up the mess it created while it continues to fuck things up.

Hmm, god bless my species. 

Wonderful, wonderful interview with Dan Brooks about his new book A Darwinian Survival Guide: Hope for the Twenty-First Century:

Well, the primary thing that we have to understand or internalize is that what we’re dealing with is what is called a no-technological-solution problem. In other words, technology is not going to save us, real or imaginary. We have to change our behavior. If we change our behavior, we have sufficient technology to save ourselves. If we don’t change our behavior, we are unlikely to come up with a magical technological fix to compensate for our bad behavior. 

This is why Sal and I have adopted a position that we should not be talking about sustainability, but about survival, in terms of humanity’s future. Sustainability has come to mean, what kind of technological fixes can we come up with that will allow us to continue to do business as usual without paying a penalty for it? As evolutionary biologists, we understand that all actions carry biological consequences. We know that relying on indefinite growth or uncontrolled growth is unsustainable in the long term, but that’s the behavior we’re seeing now.

Stepping back a bit. Darwin told us in 1859 that what we had been doing for the last 10,000 or so years was not going to work. But people didn’t want to hear that message. So along came a sociologist who said, “It’s OK; I can fix Darwinism.” This guy’s name was Herbert Spencer, and he said, “I can fix Darwinism. We’ll just call it natural selection, but instead of survival of what’s-good-enough-to-survive-in-the-future, we’re going to call it survival of the fittest, and it’s whatever is best now.” Herbert Spencer was instrumental in convincing most biologists to change their perspective from “evolution is long-term survival” to “evolution is short-term adaptation.” And that was consistent with the notion of maximizing short term profits economically, maximizing your chances of being reelected, maximizing the collection plate every Sunday in the churches, and people were quite happy with this.

Well, fast-forward and how’s that working out? Not very well. And it turns out that Spencer’s ideas were not, in fact, consistent with Darwin’s ideas. They represented a major change in perspective. What Sal and I suggest is that if we go back to Darwin’s original message, we not only find an explanation for why we’re in this problem, but, interestingly enough, it also gives us some insights into the kinds of behavioral changes we might want to undertake if we want to survive.

To clarify, when we talk about survival in the book, we talk about two different things. One is the survival of our species, Homo sapiens. We actually don’t think that’s in jeopardy. Now, Homo sapiens of some form or another is going to survive no matter what we do, short of blowing up the planet with nuclear weapons. What’s really important is trying to decide what we would need to do if we wanted what we call “technological humanity,” or better said “technologically-dependent humanity,” to survive.

Put it this way: If you take a couple of typical undergraduates from the University of Toronto and you drop them in the middle of Beijing with their cell phones, they’re going to be fine. You take them up to Algonquin Park, a few hours’ drive north of Toronto, and you drop them in the park, and they’re dead within 48 hours. So we have to understand that we’ve produced a lot of human beings on this planet who can’t survive outside of this technologically dependent existence. 

[---]

That’s actually a really good analogy to use, because of course, as you probably know, the temperatures around the Norwegian Seed Bank are so high now that the Seed Bank itself is in some jeopardy of survival. The place where it is was chosen because it was thought that it was going to be cold forever, and everything would be fine, and you could store all these seeds now. And now all the area around it is melting, and this whole thing is in jeopardy. This is a really good example of letting engineers and physicists be in charge of the construction process, rather than biologists. Biologists understand that conditions never stay the same; engineers engineer things for, this is the way things are, this is the way things are always going to be. Physicists are always looking for some sort of general law of in perpetuity, and biologists are never under any illusions about this. Biologists understand that things are always going to change.

[---]

One of the things that’s really important for us to focus on is to understand why it is that human beings are so susceptible to adopting behaviors that seem like a good idea, and are not. Sal and I say, here are some things that seem to be common to human misbehavior, with respect to their survival. One is that human beings really like drama. Human beings really like magic. And human beings don’t like to hear bad news, especially if it means that they’re personally responsible for the bad news. And that’s a very gross, very superficial thing, but beneath that is a whole bunch of really sophisticated stuff about how human brains work, and the relationship between human beings’ ability to conceptualize the future, but living and experiencing the present.

There seems to be a mismatch within our brain — this is an ongoing sort of sloppy evolutionary phenomenon. So that’s why we spend so much time in the first half of the book talking about human evolution, and that’s why we adopt a nonjudgmental approach to understanding how human beings have gotten themselves into this situation.



Thursday, February 12, 2026

Culture Is The Mass-Synchronization Of Framings!

This can be good and bad. Hence, I have an aversion to that word: "culture".

The genesis of almost all savagery, ruthlessness, and immorality against animals lies in so-called culture.

This is an insightful piece on the same topic: 

A mental model is a simulation of "how things might unfold", and we all build and rebuild hundreds of mental models every day. A framing, on the other hand, is "what things exist in the first place", and it is much more stable and subtle. Every mental model is based on some framing, but we tend to be oblivious to which framing we're using most of the time (I've explained all this better in A Framing and Model About Framings and Models).

Framings are the basis of how we think and what we are even able to perceive, and they're the most consequential thing that spreads through a population in what we call "culture".

[---]

Each culture is made of shared framings—ontologies of things that are taken to exist and play a role in mental models—that arose in those same arbitrary but self-reinforcing ways. Anthropologist Joseph Henrich, in The Secret of Our Success, brings up several studies demonstrating the cultural differences in framings.

He mentions studies that estimated the average IQ of Americans in the early 1800's to have been around 70—not because they were dumber, but because their culture at the time was much poorer in sophisticated concepts. Their framings had fewer and less-defined moving parts, which translated into poorer mental models. Other studies found that children in Western countries are brought up with very general and abstract categories for animals, like "fish" and "bird", while children in small-scale societies tend to think in terms of more specific categories, such as "robin" and "jaguar", leading to different ways to understand and interface with the world.

But framings affect more than understanding. They influence how we take in the information from the world around us. Explaining this paper, Henrich writes:

People from different societies vary in their ability to accurately perceive objects and individuals both in and out of context. Unlike most other populations, educated Westerners have an inclination for, and are good at, focusing on and isolating objects or individuals and abstracting properties for these while ignoring background activity or context. Alternatively, expressing this in reverse: Westerners tend not to see objects or individuals in context, attend to relationships and their effects, or automatically consider context. Most other peoples are good at this.

How many connections and interrelations you consider when thinking is in the realm of framings. If your mental ontology treats most things as largely independent and self-sufficient, your mental models will tend to be, for better or worse, more reductionist and less holistic.

[---]

The basic force behind all culture formation is imitation. This ability is innate in all humans, regardless of culture: we are extraordinarily good imitators. Indeed, we are overimitators, sometimes with unfortunate consequences.

Overimitation ... may be distinctively human. For example, although chimpanzees imitate the way conspecifics instrumentally manipulate their environment to achieve a goal, they will copy the behavior only selectively, skipping steps which they recognize as unnecessary [unlike humans, who tend to keep even the unnecessary steps]. ... Once chimpanzees and orangutans have figured out how to solve a problem, they are conservative, sticking to whatever solution they learn first. Humans, in contrast, will often switch to a new solution that is demonstrated by peers, sometimes even switching to less effective strategies under peer influence.

— The Psychology of Normative Cognition, Stanford Encyclopedia of Philosophy, emphasis theirs.

We have a built-in need to do what the people around us do, even when we know of better or less wasteful ways. This means that we can't even explain culture as something that, while starting from chance events, naturally progresses towards better and better behaviors. That's what science is for.

Once the synchronized behaviors are in our systems, when we are habituated to certain shared ways of doing things, these behaviors feed back into our most basic mindsets, which guide our future behaviors, which further affect each other's mindset, and so on, congealing into the shared framings we call culture, i.e.: whatever happens to give the least friction in whatever happens to be the current shared behavioral landscape.

This is why, often, formal rules and laws do indeed take root in a culture: not because they're rules, but because the way they are enforced creates enough friction—or following them creates enough mutual benefits—that, like in the corridor lanes, crowds will settle into following them. This is also why, perhaps even more often, groups will settle into the easy "unruly" patterns.



Wednesday, February 11, 2026

Deep Congruence

Congruence is a quality discussed by many psychologists—Carl Rogers popularized the word, saying that, among other things, it is a necessary trait in therapists. He defined it (roughly) as a state of unity between your experience, your self-concept, and your outward behavior. Which is to say: you aren’t pretending. I think this is a solid definition, but it’s likely to be misread. It can sound like living up to a scorecard—I said I would be an academic, now I’m tenure track. If that were the only requirement, congruence would be fairly common, when in fact highly congruent people are uncommon.

Deep congruence requires accepting all of the stuff of your life, every particle of feeling. If you are highly congruent, you disown none of your experience. None of it. You agree with what you’re doing with your time. You accept the stubborn approach of death, the arbitrariness of your fortune, your unimportance on the cosmic timescale, your potential importance for the local environment, the emotions of you and the people around you, the resources you’ve squandered. What stops congruence from occurring are layers of denial that are unpleasant to pass through. Although congruence is a source of endless happiness, the path there can be devastating. To paraphrase a cliche, you may have to finally give up on experiencing a better past.

But must we define it? We know it when we see the genuine article in abundance. We can spot people who live in non-naive contentment, or unhurried action. Running into them is comforting if we seek integrity ourselves. Speaking to my teacher feels like drinking water from a lucky well, filled with life-restoring minerals. On the other hand, if we’re interested in maintaining some variety of denial, the company of highly congruent people is disturbing. The falsehoods we’re trying to maintain immediately ring false before them. They appear as highly but particularly resonant chambers, in which integrity echoes and bullshit dies immediately.

[---]

Congruent people compel us because they have little to prove; they have converged on an inner authority. Thus, when you encounter them, you don’t feel like you’re being enlisted in their ongoing arguments with themselves. You’re not recruited to shore up their self-image, or resolve their dilemmas. You’re liberated to be as you are—talking to them feels like entering open space. Their love isn’t grabby and manipulative, and they can say hard truths from a place of simple observation. They can deeply understand you without needing to suck up your essence, or merge with it. Being listened to in this way, by a person capable of it, is psychoactive; you hear yourself anew.

[---]

Seeking congruence can sound selfish. However, in practice, it rarely is. Given that our environments consist of others in pain, facing the totality of your experience and remaining self-serving requires being a real asshole. Most of us are less cruel than that, and capable of gradually moving towards increasingly skillful love for others. The highly congruent people I know tend to support everyone around them, in ways both obvious and not.

One reliable test to see whether you’re in a place of congruence is the existence of boredom. When you are in a state of congruence, at rest you don’t feel bored. Instead you feel peace. What needs to be done has been done or will be done, there is no need to flail against the silence.

I’ve heard from multiple sources that deathbed enlightenment is a real phenomenon. Which is to say: approaching death, many disintegrated and suffering people suddenly find acceptance. Congruence is coming after you; you can almost outrun it, if you try.

- More Here


Thursday, February 5, 2026

Akrasia !

Sometimes a simple word explains so much about humanity. 

Akrasia is a Greek word; from its wiki page:

Akrasia refers to the phenomenon of acting against one's better judgment—the state in which one intentionally performs an action while simultaneously believing that a different course of action would be better. 
Sometimes translated as "weakness of will" or "incontinence," akrasia describes the paradoxical human experience of knowingly choosing what one judges to be the inferior option.

Where do I even start unpacking this :-)? There are so many people who are akratic in some of the fundamental elements of their lives. I mean the core of existence.

Paul's piece is about the stupidity of free soloing, his accident, how he learned from his akratic traits, and now, the best part, rebuilding his life with a cat named Koshka.

For the record, I skipped the akratic segment and went straight to Max :-); man, what a decision that was! Thank god, for once my prefrontal cortex helped me.

Precisely because free soloing is selfish and stupid, it is a controversial topic even amongst climbers. The vast majority of free climbers do not free solo. Some of my closest climbing partners would commit to doing very serious traditional climbing routes, and yet firmly draw the line at soloing. (And trad climbing definitely is serious, as proved by the cripple voice dictating these words.) They told me bluntly that I should never do it, and they didn’t like hearing about it when I had done it. So why did I do it?

There is an ancient Greek term, akrasia. It is sometimes translated as “weakness of will” – although I don’t like that translation, because it already narrows and contorts the field in ways that distort reflection. Nonetheless, akrasia refers to situations in which a person apparently acts against their own professed best judgement. For example, the student who knows that the best thing to do is stay home and prepare for tomorrow’s exam (the outcome of which is crucial to her final grade), and yet who nonetheless goes to the party and gets drunk. She knows and agrees and affirms that the best thing for her to do is to stay home and revise. But she not only does something else, she does it when she herself knows and agrees and affirms that it is a worse thing for her to do. She is akratic. We all are, sometimes.

But the stakes of akrasia are not always the same.

[---]

On the way down, I texted my friend and told him what I had just done. He told me that I was a fucking idiot. I didn’t care. Sometimes you just have to go to the party, even when you know you shouldn’t. And whether you ultimately regret going will depend on more than just the fact that you went. Akrasia is a bird of many feathers.

[---]

But then I try to watch my anger, notice it – and let it slip away. Fair doesn’t come into it. It never did, and it never will. Such anger leads to nothing worth keeping. This week I adopted a cat. I’ve named her Koshka. You rebuild a life, one brick at a time.


Monday, February 2, 2026

The Original Position Fallacy!

This is so god-damn important to understand that it should be taught in schools for all grades!

Mathew McAtter has a beautiful explanation:

This is a simplified example of the Original Position Fallacy in action. A person supports some kind of policy, action, or revolution because they assume they’re either A) in the group that will benefit from it, or B) not in the group that will suffer from it. When used as a literary device, this often compounds a character’s suffering with the knowledge that they supported the measure when they thought someone else would be hurt. Indeed, you can think of the Original Position Fallacy as the opposite of the Golden Rule.

You’ve probably seen this fallacy in action among certain communists, neo-reactionaries, and a few libertarians. Many of these often support a massive upheaval to the social order, believing of course that they would inevitably survive (or even thrive) afterwards. Many modern communists forget that in many revolutions, large groups of supporters suddenly found only too late that the revolutionaries considered them in the class of the “bougie” instead of the true “proletariat”.

I’ve personally met many libertarians that believed that if only the government got out of their way, they could finally thrive. Of course, few give thought to any possible negative outcomes of reduced regulation (like Pan-Am, which was famously doomed when the airline industry was deregulated) or possibilities of being crushed by far more ruthless competitors. Many also seem to forget even recent times in their lives that they’ve had to rely on some kind of safety net, and don’t consider what might happen if that net were no longer there.

The Neo-reactionaries are an interesting bunch that desire a return to monarchies and autocracies, away from democracy. Few of them consider that they might end up outside a given autocrat’s favored inner circle, or that technology has not stopped modern monarch’s courts from being snake pits.

[---]

The point I’m trying to make is that even if you only have your own self-interests at heart deep down, you should at least acknowledge that the future is far too uncertain for you to be mentally throwing anyone under the bus. After all, your guarantees that you won’t be under the bus with them are getting shakier by the day.

Well, Pastor Martin Niemöller's poem goes well with the Original Position Fallacy, and I am literally living to see this happen now. Alas, human nature doesn't change that easily:

First they came for the Communists

And I did not speak out

Because I was not a Communist

Then they came for the Socialists

And I did not speak out

Because I was not a Socialist

Then they came for the trade unionists

And I did not speak out

Because I was not a trade unionist

Then they came for the Jews

And I did not speak out

Because I was not a Jew

Then they came for me

And there was no one left

To speak out for me

And of course, one of my favorite quotes of all time:

Barbarism is never finally defeated; given propitious circumstances, men and women who seem quite orderly will commit every conceivable atrocity. The danger does not come merely from habitual hooligans; we are all potential recruits for anarchy. Unremitting effort is needed to keep men living together at peace; there is only a margin of error left over for experiment however beneficent. Once the prisons of the mind have been opened, the orgy is on. … The work of preserving society is sometimes onerous, sometimes almost effortless. The more elaborate the society, the more vulnerable it is to attack, and the more complete its collapse in case of defeat. At a time like the present it is notably precarious. If it falls, we shall see not merely the dissolution of a few joint-stock corporations, but of the spiritual and material achievements of our history.

- Robbery Under Law, Evelyn Waugh



Sunday, January 25, 2026

3 Antidotes To Your Suffering

So simple, yet such profound wisdom from George Saunders.

I hardly meet anyone who lives by just one of these, let alone all three.

  • You’re not permanent. 

  • You’re not the most important thing. 

  • You’re not separate.

And why is this simple wisdom not omnipresent?

In the beginning, there’s a blank mind. Then that mind gets an idea in it, and the trouble begins, because the mind mistakes the idea for the world. Mistaking the idea for the world, the mind formulates a theory and, having formulated a theory, feels inclined to act… Because the idea is always only an approximation of the world, whether that action will be catastrophic or beneficial depends on the distance between the idea and the world. Mass media’s job is to provide this simulacra of the world, upon which we build our ideas. There’s another name for this simulacra-building: storytelling.


Saturday, January 24, 2026

What Is The Question?

Finding the question can be fun, as in thinking of a cartoon caption. But it can also be extremely difficult psychologically. Scientists are often expected by the public to know it all, and yet, “feeling stupid” is a common mode of operation for us. Science is the art of dealing with things we do not know enough about. As Wernher von Braun, the father of German and US rocket programs, phrased it: “Research is what I’m doing when I don’t know what I’m doing.” Science is humbling in this way. For young scientists, it is often very difficult to understand that it is perfectly normal to not know the answer—or even the question. Learning to embrace this uncertainty is part of our maturation as scientists.

Uri Alon has an intuitive image to describe the process of re-finding our questions. Given what we know about a given topic “A,” a researcher predicts that it should be possible to arrive at point “B,” a scientific destination that seems interesting—a hypothesis. However, the plot inevitably thickens over the course of the research project, and new hurdles force the scientist into a meandering path. Soon, the researcher is lost, having lost sight of the start point (which suddenly seems shaky) and end point (which appears unreachable). Uri calls this “being in the cloud”—you have lost your original question, but the reason why this has occurred is strange and thus potentially exciting and itself worthy of study. From inside the cloud, the situation may seem desperate, but Uri sees the cloud as the hallmark of science: if you are in the cloud, then you might have stumbled upon something non-obvious and interesting. “I’m very confused” a student would tell Uri, to which he would reply, “Oh good - So you’re in the cloud!” Eventually, a new question that arose inside the cloud may lead the way to an unexpected destination “C.”

Embracing uncertainty

The scientific method is often perceived as a simple sequence that leads from a problem to an answer, possibly through long iterations of modified hypotheses. But our reality is much less structured: it often starts with a topic and some observations, leading to the finding of patterns and questions about those patterns, possibly long before we have any explicit hypothesis or any direct tests. And even if a project starts out with a very specific hypothesis, in our experiences, it still generally arrives at a very different point than expected.

In some way, then, night science may be most productive when it has no agenda, when there are no particular questions it is trying to reshape or resolve. When the scientist does not have a hypothesis, she is free to explore, to make connections. In some sense, any kind of expectation on how things are to behave—a hypothesis—is a liability that could obstruct a new idea that awaits our discovery. Once night science elucidates and reframes this question, the researcher can use the full power of day science to solve it. In this sense, a major discovery is typically both the solution and the problem.

Much of basic, curiosity-driven science is exploration, and night science is a fundamental part of that; yet funding bodies often demand that research must be hypothesis-driven. But while some part of night science can be done with the help of an armchair and some good coffee, other parts require the exploration of large and complicated data sets. If no funding is provided for such endeavors, the generation of new questions may be stifled, hindering scientific progress: in science, the problem that is eventually solved is often not the one that was initially sought out.

- More Here


Tuesday, January 20, 2026

Why Would You Go From Micro To Macro?

Jeez, this is pure bullshit.

For starters, using the phrase "pursuit of happiness" was the big mistake Jefferson made. Happiness had a different meaning at that time, so I cannot blame him. I wish he had used a word like Gratitude.

For all its prowess and simplicity, the English language has its limitations: it lacks the subtle richness innate to older languages.

The geniuses here want to consolidate whatever good micro words English has into one bucket called Happiness. 

God bless my species.