Showing posts with label Being Human. Show all posts

Thursday, May 7, 2026

Culture - The Word That Fucked Up Our Species

I have written so many times about how almost all atrocities committed against our fellow animal family members are not considered immoral, since people hide behind the fucked-up excuse of “culture”.

People use culture as macro-level bullshit and avoid focusing on micro-level morality, which is precious for life on earth.

Alex Nowrasteh’s wonderful piece looks at this monster from a different angle. Different angle, same monster.

The cleanest test is the divided-country natural experiment. North Korea and South Korea share a language, ethnicity, history, and culture up to 1945. One is among the richest countries on earth, the other among the poorest. East and West Germany diverged dramatically under different institutions and converged after reunification. Mainland China stagnated under Mao while Taiwan, Singapore, and Hong Kong prospered, all four sharing Chinese culture. In every case, the culture was identical on both sides of the border. The incentives, shaped by the institutions, are what changed. The outcome followed the institution, not the culture. Untangling causality is difficult, sometimes impossible, but that’s no reason to embrace a false explanation like “the culture made them do it.”

At its root, the culture discourse is anti-intellectual. Culture is a faux explanation for social behavior and outcomes that have real explanations. Think harder. Use AI to search the literature if you have to because other researchers have probably already written about the issue you claim is just caused by culture. The cultural explanation is the one you reach for when you’ve decided the search isn’t worth your time. Better to remain quiet if culture is the only explanation you’ve got. Here are some examples.

[---]

If a country is poor because of its culture, nobody has to examine the bad incentives facing members of that society. Intellectual laziness explains the rest. Finding the price, the constraint, the institutional mechanism that creates an incentive is hard, but invoking culture as if it’s a magical exogenous decider lets you stop searching. Cultural explanations are cheap to produce, requiring only anecdotes rather than data, prices, or evidence. It feels like an answer because it has the grammatical structure of one. “Japanese people ride trains because of their culture” masquerades as an explanation, but it’s just a tautology.

Culture is endogenous to everything. Claiming culture causes an outcome without first ruling out that the outcome’s causes also produced the culture is circular reasoning. Every cultural explanation must first survive a price, incentive, and institutional audit. Few of them do, but those that do are extraordinary findings, which is perhaps another reason so many claim it. Nobody would let economists get away with explaining a recession or high unemployment with the explanation, “It’s the economy.” We shouldn’t let others get away with the equally lazy non-explanation of “it’s the culture.”

 

Tuesday, May 5, 2026

Derek Parfit - What Is The Impact Of Thousands Of Small Environmental Or Personal Abuses Over Time?

One particular example I’ve always liked (especially since as a kid I had similar thoughts) provides a vivid illustration of the psychology underlying the dismissal of global warming. It shows that the consequences of our decisions need not occur in the distant future for us to discount them. They can occur out of sight or after so many steps as to seem distant. The example (embroidered a bit here) appears in Derek Parfit’s book “Reasons and Persons,” where he discusses the case of a man strapped to a hospital bed, say by a psychopath, in some indeterminate place with electrodes attached to his heart. Rotation of a dial on the other side of the world minusculely and imperceptibly increases the current in the electrodes and the stress on the man’s heart.

Perhaps a free piece of candy, a pleasant buzz, and a snapshot with the dial are on offer from a mysterious donor as an incentive to anyone in the distant location who twists the dial. Assuming it takes 10,000 people, each rotating the dial once to electrocute the victim, what degree of guilt, if any, do we assign to each individual dial-twister? After all, none of the dial-twisters know the poor man in question nor have they ever been in his part of the world. They might well doubt there is such a man if the situation isn’t clearly communicated to them or if it is ridiculed by a few influential people. Whatever their excuses, however, they are likely to be at least vaguely aware of rumors about the situation. How then do we deposit all these tiny bits of personal guilt into some moral bank account to save the victim? Or do we just shrug and dismiss the significant probability of ordinary indifferent people killing the distant stranger?
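The arithmetic behind the thought experiment can be made concrete. A toy calculation (every number here is invented for illustration, not taken from Parfit) shows how increments too small to perceive individually still sum past a dangerous threshold:

```python
# Toy numbers, invented for illustration: each of 10,000 dial turns adds an
# imperceptibly small increment of current, yet the increments sum past a
# rough order-of-magnitude threshold for a dangerous current.
turns = 10_000
increment_ma = 0.01            # current added per turn, in milliamps
lethal_threshold_ma = 50       # ballpark figure, for illustration only

total_ma = turns * increment_ma
print(f"total current: {total_ma} mA, dangerous: {total_ma >= lethal_threshold_ma}")
```

No single turn is perceptible, yet the total is double the threshold — which is exactly the moral bookkeeping problem the passage raises.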

The real question of course is, What is the impact of thousands of small environmental or personal abuses over time? In the context of this rather morbid tale of a psychopath, most environmentalists would probably opt to stop rotating the dial or at least to rotate it very infrequently. 

- More Here


Sunday, May 3, 2026

Curiosity Is No Solo Act

The Foucauldian assumption that networks of information precondition ways of thinking, doing, and being has an ancient, rich, and still robust precedent in Indigenous philosophy. Rooted in the wisdom that everything that exists is connected to everything else, Indigenous philosophy foregrounds the vast and complex system of relational networks. While Western philosophy, especially post-Enlightenment, has typically emphasized the individual nodes of knowers and knowns, Indigenous philosophy has consistently contributed to a thinking on the edge, or edgework. (It is not insignificant that the English language is 70 percent nouns, while Potawatomi is 70 percent verbs. Or that Western settlers conceptualize land as private property and commodity capital, while Indigenous peoples understand it as a connective tissue in a larger gift economy.) The difference in ethos between piecemeal and of a piece with could not be more pronounced.

In an Indigenous onto-epistemology, one is always coming to know in intimate relationship with other knowers, including not only community members, but also all the components of the earth itself. In “Braiding Sweetgrass,” Potawatomi botanist Robin Wall Kimmerer tells the story of her own Indigenous curiosity. Growing up surrounded by “shoeboxes of seeds and piles of pressed leaves,” she knew the plants had chosen her. Declaring a botany major in college, she soon learned to stockpile taxonomic names and functional facts, all while letting her capacities to attend to energetic relationships fall into disuse. It was not until rekindling her connections with Indigenous communities — and specifically Indigenous scientists — that she remembered how “intimacy gives us a different way of seeing.” Her scholarship and outreach are now focused on honoring this ray of scientific and social wisdom.

What is perhaps most distinctive about Indigenous philosophy is its imbrication of a relational cosmology with a relational epistemology. At the heart of this worldview is “the eternal convergence of the world within any one thing,” writes Carl Mika, such that “one thing is never alone and all things actively construct and compose it.” From this perspective of deep holism, talk of knowing any one thing is “minimally useful.” As such, knowledge is not properly propositional but instead procedural; it is less concerned with knowing what than with knowing how. And its wisdom lies in “sharing” more than “stating.”

- More Here

Thursday, April 30, 2026

The Social Edge of Intelligence

If AI capability depends on the social complexity of human language production—and if AI deployment systematically reduces that complexity through cognitive offloading, homogenization of creative output, and the elimination of interaction-dense work—then the technology is gradually undermining the conditions for its own advancement. Its successes, rather than failures, create a spiral: a slow attenuation of the very substrate it feeds on, spelling doom.

This is the Social Edge Paradox, and the intellectual tradition it draws from is older and more interdisciplinary than most AI commentary acknowledges.

Michael Tomasello’s evolutionary research establishes that human cognition diverged from other primates not through superior individual processing power but through the capacity for collaborative activity with shared goals and complementary roles. He argues that even private thought is “fundamentally dialogic and social” in structure—an internalization of interaction patterns. Autonomous neural capacity is far from enough to account for the abilities of human thought.

Robin Dunbar’s social brain hypothesis quantifies the link: neocortex ratios predict social group size across primates; language evolved as a mechanism for managing relationships at scales too large for grooming. Two-thirds of conversation is social, relational, reputational. Language is often mistaken for an information pipe, but it is really a social coordination technology.

My own position is that collective intent engineering, found in forms as familiar as simple brainstorming, accounts for most frontier cognitive expansion. The intelligent algorithms of today have not been built with this critical function in mind.

[---]

The AI industry is telling a story about the future of work that goes roughly like this: automate what can be automated, augment what remains, and trust that the productivity gains will compound into a wealthier, more efficient world.

The Social Edge Framework tells a different story. It says: the intelligence we are automating was never ours alone. It was forged in conversation, argument, institutional friction, and collaborative struggle. It lives in the spaces between people, and it shows up in AI capabilities only because those spaces were rich enough to leave linguistic traces worth learning from.

Every time a company automates an entry-level role, it saves a salary and loses a learning curve, unless it compensates. Every time a knowledge worker delegates a draft to an AI without engaging critically, the statistical thinning of the organizational record advances by an imperceptible increment. Every time an organization mistakes polished output for strategic progress, it consumes cognitive surplus without generating new knowledge.

None of these individual acts is catastrophic. However, their compound effect may be.

The organizations that will thrive in the next decade are not those with the highest AI utilization rates. They are those that understand something the epoch-chaining thought experiment makes vivid: that AI’s capabilities are an inheritance from the complexity of human social life. And inheritances, if consumed without reinvestment, eventually run out. This is particularly critical as AI becomes heavily customized for our organizational culture.

[---]

The Social Edge is more than a metaphor. It is the literal boundary between what AI can do well and what it will keep struggling with due to fundamental internal contradictions. Furthermore, the framework asks us all to pay attention to how the very investment thesis behind AI also contains the seeds of its own failure. And it reminds leaders that AI’s frontier today is set by the richness of the social world that produced the data it learned from.

- More Here



Wednesday, April 29, 2026

The Rise And Fall Of ‘Petty Tyrants’

Petty tyrants are more focused on personal victories than on national priorities. The good news is that they carry within them the seeds of their own destruction. Once we understand their common flaws, it becomes apparent why they eventually fall rapidly from power and leave few lasting changes to government. Understanding this pattern can help us recognize a critical feature that distinguishes leaders who damage their nations from those who create lasting good: their relationship to truth.

[---]

One of the worst mistakes the opposition can make is extending contempt for the tyrant into contempt for the tyrant’s supporters. Most of these supporters sincerely believed that the tyrant would be more likely to solve their problems — often real grievances that the opposition had failed to address. Blaming the supporters denies the reality of the failures and reinforces their support for the tyrant. 

As Napoleon consolidated his power, his critics described the farmers who supported him as “a sack of potatoes” and Parisian workers as having “their minds crammed with vain theories and visionary hopes.” This attitude of condescension made it easier for Napoleon to position his opposition as arrogant elites and himself as the champion of ordinary people.

When the opposition makes it socially acceptable to show contempt for anyone who disagrees, they cooperate with the tyrant in creating a cycle of divisiveness that distracts from reality. That cycle sustains the tyrant’s hold on power. 

[---]

Once they had disabled democracy, these tyrants managed to hold onto power long after their popularity faded. Even removing the tyrant was not a guarantee of short-term success. In the Philippines, democracy has still not fully recovered.

It is much easier to stop the rise of a tyrant than to accelerate their fall. It would have been far better for each nation if the leaders of the opposition had learned from their failures, postponed their short-term ambitions and concentrated on preserving the democracy.

[---]

The legacies of these truth-based leaders have long outlived the leaders themselves, and they continue to benefit us in the 21st century. Bismarck’s social safety nets are still thriving in Germany, and they have been widely copied. Singapore is now a prosperous nation, and a Singaporean passport will get you visa-free entry into more countries than any other. Roosevelt’s Social Security is so successful that politicians on both sides of the aisle now compete to take credit for protecting it.

Look at what endures from these six stories: not the propaganda, the posters and parades, but the institutions that continue to serve their nations decade after decade. The children who are healthy and literate. The elderly and disabled who live in security and dignity. The deposits, safe in the bank. The honest civil services that provide real protections and solve real problems. These are the legacies that matter.

- More Here


Tuesday, April 28, 2026

Golden Retriever Lifetime Study - Update From Morris Animal Foundation

Got this poignant email from Morris Animal Foundation today: 

As we approach the 15th year of the Golden Retriever Lifetime Study, we are entering a new, exciting stage every pet owner will appreciate. To date, 386 of our dogs have lived to age 13 or older, including three who have reached the remarkable milestone of 15 years. As a lifelong golden retriever owner, it warms my heart to see these dogs thrive. As a veterinarian and epidemiologist, I am eager to leverage this unique dataset to understand what sets these “super-seniors” apart. After all, that is our ultimate goal: we don’t just want dogs to avoid cancer, we want dogs that remain healthy and vibrant well into their golden years.

To capture the shifting challenges these dogs may face as they age, the Study utilizes supplemental surveys that participants can opt into every six months. These provide vital data on mobility and cognition. This initiative began when most dogs in the Study were approximately 8 years old and is rapidly becoming a robust dataset that will aid researchers for decades. Current research suggests dogs fall into two categories: "cognitive maintainers" and "cognitive decliners." Our data is uniquely positioned to help us identify the specific factors that contribute to prolonged cognitive health.

Because the Golden Retriever Lifetime Study is longitudinal, scientific interest has accelerated alongside the Study’s progress. While we have sadly said goodbye to 1,780 heroes, the information they contributed from puppyhood onward is of historic importance. As I write this, more than 100 studies have leveraged our data to investigate a wide variety of health topics. We recently closed our annual call for canine research proposals, and of the 142 pre-proposals submitted, 21 plan to incorporate Study data.

While the Study’s evolution into aging is exciting, our primary objective — to make progress against canine cancer — remains unchanged. The Foundation recently invested in two cancer studies that showed promising initial results. Both successfully identified genetic regions related to hemangiosarcoma and histiocytic sarcoma, respectively. Researchers are now building on these findings using Study data, which could lead to life-saving genetic tests. These are just two examples of the many promising studies currently underway that have the potential to change the future of canine health.

From all of us at Morris Animal Foundation, thank you for making this work possible and supporting the research that will help dogs run, play and be with us to create more memories well into their golden years.

Please keep up the good work; your team will always have best wishes from Max and me.

I said this when Max had cancer and I am saying it now: a lot of insights will come from this study and the Dog Aging Project that will help Sapiens, although my moronic species refuses to give data.

Researchers need a lot of data from healthy people to understand what not having cancer looks like - fundamental machine learning common sense.
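The point about healthy controls can be illustrated with a minimal, hypothetical sketch (the feature vectors below are made-up numbers, not real biomarkers): a nearest-centroid classifier can only separate “cancer” from “healthy” because it has seen examples of both classes. Remove the healthy samples and there is no contrast left to learn.

```python
# Minimal sketch with invented numbers: a classifier needs examples of BOTH
# classes. Without healthy controls there is no contrast to learn from.

def centroid(samples):
    """Component-wise mean of equal-length feature vectors."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def predict(x, centroids):
    """Label whose class centroid is closest to x (squared Euclidean distance)."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: d2(x, centroids[label]))

# Toy two-feature "biomarker" vectors, invented for illustration.
cancer_samples = [[9.0, 8.5], [8.7, 9.2], [9.5, 8.8]]
healthy_samples = [[2.0, 1.5], [1.8, 2.2], [2.5, 1.9]]

centroids = {
    "cancer": centroid(cancer_samples),
    "healthy": centroid(healthy_samples),
}
print(predict([2.1, 2.0], centroids))  # prints "healthy"
```

Delete `healthy_samples` from the sketch and every new patient can only ever be labeled “cancer” — which is the author’s point about why data from healthy people matters.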


Saturday, April 25, 2026

Rebel - Refuses To Consent To Falseness, Injustice, Or Mediocrity

Rebellion is not merely reactive but creative. It doesn’t only tear down — it seeks to reimagine. Albert Camus understood this when he wrote that “I rebel — therefore we exist.” For Camus, rebellion was the refusal to accept absurdity passively. It was the insistence that life and justice still matter even in a godless world. To rebel, then, is to affirm the possibility of meaning precisely where meaning seems most threatened. It is to insist that one’s freedom and integrity are worth defending, even when doing so brings discomfort or risk.

Rebellion typically begins in solitude but inevitably reaches toward solidarity. The solitary rebel says no to hypocrisy, cruelty, or exploitation; yet the truest form of that no is said on behalf of all. 

[—]

To live rebelliously in this deeper sense requires courage of a particular kind — the courage to trust one’s perception of what is wrong and to act in accordance with one’s conscience. Many people lose meaning because they no longer believe their own perceptions. They feel what is off — at work, in politics, in relationships — but they suppress that intuition in order to get by. Over time, this suppression breeds cynicism and fatigue.

Rebellion restores vitality by reuniting perception with action. It says: “I see what I see, I know what I know, and I will live in truth.” That alignment itself is deeply meaningful.

The pathway of rebellion does not exclude tenderness or humility. The most enduring rebels — figures like Rosa Parks, Mahatma Gandhi, or the many artists and thinkers who defied oppressive norms — rebelled not out of hatred but out of love: love for justice, for humanity, for the sanctity of truth. Rebellion, rightly understood, is a form of devotion. It refuses to let meaning be trampled by fear or conformity. It honors life enough to resist what diminishes it.

For the individual seeking reenchantment, rebellion may take quieter, more personal forms. It might mean refusing to keep up a façade of perpetual busyness or success. It might mean declining to participate in conversations that are mean-spirited or false. It might mean leaving a career that pays well but deadens the heart. In each case, rebellion functions as a reclamation of self. By saying “no” to what is meaningless, one makes room for what is real to appear. The act of refusal becomes the act of awakening.

This pathway, however, carries hazards. A rebel without an anchoring vision and a sense of humanity can become a cynic or destroyer, mistaking constant opposition for depth. To avoid this, it would be wise to tether rebellion to love, to beauty, to some image of the world as it could be. The purpose of rebellion is not to stay angry forever but to clear space for creation, renewal, and joy. Rebellion that remains open-hearted is not corrosive but cleansing; it removes what is false so that truth can breathe again.

In this way, rebellion restores the pulse of meaning through the experience of agency. The disenchantment of modern life often stems from powerlessness — feeling that one’s choices make no difference, that the world is too vast or corrupted to be changed. To rebel, even in a small and symbolic way, is to reclaim a measure of agency. It reignites the sense that one’s voice, one’s actions, one’s very stance toward the world still matter. That sense of mattering is one of the foundations of meaning itself.

Finally, rebellion reenchants because it reconnects us to the moral dimension of existence. It reminds us that life is not neutral or arbitrary but charged with value. Each act of rebellion is, at its core, an assertion of value: this matters; I matter; truth matters. That moral clarity dispels the fog of meaninglessness more effectively than any abstract philosophy. It returns us to the felt conviction that life is worth the trouble, that the struggle itself is vital.

- More Here


Tuesday, April 21, 2026

Wisdom Of Isabel Allende

You need your space, and that ‘room of one’s own,’ as Virginia Woolf put it. That room is also your time, your space, your silence—that has to be sacred. I need to close the door to my office when I finish for the day, and no one should get in. I have the idea in my mind that the story is an entity that lives in that room, with the characters, the emotions that I have been putting together. And when I come back the next day, I open the door; it’s waiting for me, intact. I don’t want anybody to go in and vacuum, or use my computer. That would kill me if somebody used my computer!

When I finally close the computer for the day, I look at my desk and put things in piles, and I usually have a candle on, because for me the candle reminds me that I am in the process of writing—not because there’s anything magic about it. And then I blow out my candle—that ends the day. And I look around to see that everything is organized, and I leave. I’m incredibly organized, because that’s part of my structure. When I walk into my office, it looks like a lab. It’s impeccable. And when I leave, it’s impeccable. I never leave a messed-up place, because when I come back, if everything is disorganized, I feel the story isn’t there for me.

Writing is pretty much like training for sports. You train and train and train to be able to play the game. And nobody cares how much you’ve trained. Nobody cares about the effort. What matters is the performance at the end, the result. Sometimes I research a whole book for one sentence, but that’s part of the job, part of the training, so that the performance will be impeccable. Nothing comes out of thin air. But once I have my hands on the keyboard, and I start creating, then things start to happen immediately, almost immediately. But I need to get to that point. I spend hours and hours alone and in silence. Without the silence and the structure, I wouldn’t be able to do it.

- David Epstein Interview with Isabel Allende


Monday, April 20, 2026

The Ideology, Economics, & Psychology Behind The Modern World's Draining Of Color From Homes, Cars, & Everyday Objects.

If you go to the slums of Bombay, Brazil, Mexico, or Kenya, you will notice a riot of colors. Yes, there is crime there, but most people who live paycheck to paycheck are content and happy.

Color helps psychologically! It’s the “biophilia” of living in a rainforest - it’s one of the least studied simple psychological boosters.

Max’s home is a riot of colors - the living room is yellow, the basement is pink, the bedrooms are other colors - a firm no to neutral colors. I learned this a long time ago, and even my clothes have a variety of colors.

I noticed something weird maybe a year or two ago, 5-plus years after Max passed away - a lot of my new t-shirts, etc., were greyish… so I cleaned up my wardrobe and brought color back into my life. I was subconsciously depressed without Max.

Color is the simplest and easiest confidence and psychological booster we have, but alas, we sapiens tend to ignore it.

A very good history of why this transformation from color to grey happened in the US and spread across the globe:

From Hawaii to Maine, from Alaska to Florida, the most popular shade for your home’s exterior is some variation of gray, off-white, beige, or greige — a hue so existentially undecided that it can’t commit to being either gray or beige, and so ends up neither, and both.

But how can this be? America is anything but monochrome. It contains multitudes of cultures, climates, and landscapes, and people who disagree, loudly and publicly, about nearly everything. So why, when Americans need a tin of house paint, do they so often reach for the neutral shelf? Why does the average house in this great and varied nation look like it’s been dipped in a vat of Resigned Indifference®?

The answer is a phenomenon dubbed “the grayening”: a gradual but relentless draining of pigment, not just from exteriors but also from interiors and from the stuff of everyday life, like cars and phones. In 2020, researchers at the Science Museum Group in London found evidence of the trend’s longevity. Feeding roughly 7,000 photographs of everyday objects — kettles, lamps, cameras — from the late 1800s to 2020 into an algorithm, they then asked it to track color distribution over time.

The result: a striking shift toward achromatic — that is, neutral — colors in material culture.
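A sketch of how such an analysis might work — this is not the Science Museum Group’s actual pipeline, and the pixel data and saturation threshold below are invented for illustration. The idea: call a pixel achromatic when its HSV saturation is low, then compare the achromatic share of older and newer objects.

```python
# Hypothetical sketch, not the Science Museum Group's actual method: a pixel
# counts as "achromatic" when its HSV saturation falls below a threshold; the
# achromatic share of an image is the fraction of such pixels.
import colorsys

def achromatic_share(pixels, sat_threshold=0.15):
    """Fraction of (r, g, b) pixels (0-255 per channel) that are near-grey."""
    near_grey = sum(
        1 for r, g, b in pixels
        if colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[1] < sat_threshold
    )
    return near_grey / len(pixels)

# Invented pixel data: a colorful mid-century kettle vs a grey modern one.
kettle_1955 = [(200, 40, 40), (30, 160, 60), (240, 200, 20), (120, 120, 125)]
kettle_2020 = [(128, 128, 128), (90, 90, 95), (200, 200, 200), (60, 60, 62)]

print(achromatic_share(kettle_1955))  # low share: mostly saturated pixels
print(achromatic_share(kettle_2020))  # high share: mostly grey pixels
```

Run over thousands of dated photographs, a measure like this would trace exactly the kind of drift toward neutral colors the researchers reported.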

[---]

In his 1908 essay “Ornament and Crime,” Austrian architect Adolf Loos argued that ornamentation was not merely unnecessary, but a sign of arrested moral development. Truly evolved people, he suggested, would gravitate toward clean lines and plain surfaces. Applied ornament, including the use of color as decoration, didn’t enhance; it cluttered and distracted.

Loos’s polemical target was Art Nouveau, then in full frothy bloom. His arguments were influential on the Bauhaus school of art, which canonized restraint and straight lines. It, in turn, informed the International Style that swept global architecture from the 1930s onward, a style that favored glass, steel, and concrete. All gray: not just by default, but as a statement of seriousness.

Le Corbusier, pioneer of what we now simply call modern architecture, made the point with characteristic charm, declaring that color “is suited to simple races, peasants and savages.” Ouch.

The desaturation didn’t stop at buildings. Car colors have been meticulously catalogued since the dawn of the automotive age, making them a useful proxy for the broader culture’s chromatic pulse. Black had its first heyday as a car color about a century ago, when Henry Ford famously quipped that his Model T was available “in any color the customer wants, as long as it’s black.”

Friday, April 10, 2026

On Steve Jobs

“Having been in Silicon Valley for 50 years, I’m an expert in assholes, okay?” says Guy Kawasaki, Apple’s early developer evangelist. “And 99.9 percent of assholes are egocentric assholes. But Steve is one of the very rare mission-driven assholes. He was driven by a mission to make the greatest computer by the greatest company. And if you got in the way of that, he would run you over. He would run you over, back up, and run you over again.”

[——]

No executive, before or since, has incorporated comedy so memorably into product presentations. When, in 2002, Jobs wanted to cajole an auditorium full of software companies to rewrite their programs for Apple’s new Mac OS X operating system, he staged a full onstage funeral for the outgoing Mac OS 9, complete with a live organist, a eulogy he read himself, and a casket occupied by a four-foot–tall Mac OS 9 box.

[—]

If you encountered Jobs in only one context, you were like one of the blind men in the parable of the elephant. You’d have to have known him for years to see the whole man, and even then you might get a picture that felt fractured or incomplete.

“He was a man of contradictions,” Hertzfeld says. “Almost any adjective you could think of could apply to him at different times.”

- More Here


Wednesday, April 8, 2026

The Irony Of American Righteousness - Reinhold Niebuhr

Reinhold Niebuhr was born in 1892 in Wright City, Missouri. After studying at Yale Divinity School, he began his pastoral work in Detroit in 1915, where he spent thirteen years witnessing the harsh realities of industrial capitalism. Beneath the shadow of Henry Ford’s factories, Niebuhr saw workers exploited and discarded. These experiences shaped his entire theological outlook and dispelled the optimistic Social Gospel theology in which he had been trained.

[---]

At the core of Niebuhr’s ideas is a paradox: human beings can strive for justice but are also prone to injustice. In his key 1944 work, The Children of Light and the Children of Darkness, Niebuhr provided what might be the most insightful one-sentence defense of democracy ever written: “Man’s capacity for justice makes democracy possible; but man’s inclination to injustice makes democracy necessary.”

His 1932 book Moral Man and Immoral Society made a key distinction: individuals can sometimes go beyond self-interest through love and reason, but groups almost never do. Collectives like nations, corporations, or movements tend to combine individual selfishness into a “collective egoism” that is far more resistant to moral constraints than any person’s conscience. This idea became his main theme: the danger of self-righteousness. “Ultimately evil is done not so much by evil people,” he warned, “but by good people who do not know themselves and who do not probe deeply.”

[---]

Later, Niebuhr used his theological ideas to analyze American identity. He argued that the United States had developed an “innocent self-image” that made it blind to its own moral faults. America thought it was immune to the corruptions affecting other great powers.

The irony of American history, Niebuhr argued, is that the nation’s virtues turn into its vices. The work ethic that built prosperity becomes worship of money. The faith that held communities together turns into theocratic pretension. The confidence that led to victories in war gives rise to imperial hubris. “No laughter from heaven,” he wrote, “could possibly penetrate through the liturgy of moral self-appreciation.” When political rallies resemble worship services and when a partisan victory is declared to be divine approval, we have entered territory that Niebuhr mapped decades ago.

[---]

Niebuhr famously defined democracy as “a method of finding proximate solutions for insoluble problems.” This straightforward formulation offers both warning and hope. The warning: human problems are never permanently resolved. The hope: even without final solutions, we can develop workable arrangements that balance competing interests and limit concentrated power.

What would Niebuhr advise for our current times? First, humility: truly recognizing that we are limited, flawed, and self-deceived. Second, engaging without self-righteousness: making difficult choices among imperfect options while acknowledging that choosing involves us in the complexities of power. Third, a revival of irony: not cynical detachment, but the ability to see tragedy in victory and grace in defeat. Finally, forgiveness: “the recognition that our actions and attitudes are inevitably seen in a different light by friends and foes than we see them.”

- More Here


Monday, April 6, 2026

The Many Roots Of Our Suffering - Reflections On Robert Trivers (1943–2026)

In March 2026, three prominent thinkers died within a day of each other. Lavish obituaries immediately marked the deaths of the always-wrong environmentalist Paul Ehrlich and the often-obscure political philosopher Jürgen Habermas. But two weeks after the death of Robert Trivers, one of the greatest evolutionary biologists since Charles Darwin, not a single major news source has noticed his passing. This despite Trivers’s singular accomplishment of showing how the endlessly fascinating complexities of human relations are grounded in the wellsprings of complex life. And despite the fact that the man’s life was itself an object of fascination. Trivers was no ordinary academic. He was privileged in upbringing but louche in lifestyle, personally endearing but at times obstreperous and irresponsible, otherworldly brilliant but forehead-slappingly foolish. 

Trivers’s contributions belong in the special category of ideas that are obvious once they are explained, yet eluded great minds for ages; simple enough to be stated in a few words, yet with implications that have busied scientists for decades. In an astonishing creative burst from 1971 to 1975, Trivers wrote five seminal essays that invoked patterns of genetic overlap to explain each of the major human relationships: male with female, parent with child, sibling with sibling, partner with partner, and a person with himself or herself. 

The fallout for science was vast. The fields of sociobiology, evolutionary psychology, behavioural ecology, and Darwinian social science are largely projects that test Trivers’s hypotheses. The ideas took pride of place in E. O. Wilson’s Sociobiology in 1975, Richard Dawkins’s The Selfish Gene in 1976, and many other bestsellers in the next three decades such as Robert Wright’s The Moral Animal (1994) and my own How the Mind Works (1997) and The Blank Slate (2002). In 2007 the ideas earned Trivers the Crafoord Prize, the equivalent of a Nobel for fields not recognised by Nobels.

[---]

In another landmark, Trivers turned to relations among people who are not bound by blood. No one doubts that humans, more than any other species, make sacrifices for nonrelatives. But Trivers recoiled from the romantic notion that people are by nature indiscriminately communal and generous. It’s not true to life, nor is it expected: in evolution as in baseball, nice guys finish last. Instead, he noted, nature provides opportunities for a more discerning form of altruism in the positive-sum exchange of benefits. One animal can help another by grooming, feeding, protecting, or backing him, and is helped in turn when the needs reverse. Everybody wins. 

Trivers called it reciprocal altruism, and noted that it can evolve only in a narrow envelope of circumstances. That is because it is vulnerable to cheaters who accept favours without returning them. The altruistic parties must recognise each other, interact repeatedly, be in a position to confer a large benefit on others at a small cost to themselves, keep a memory for favours offered or denied, and be impelled to reciprocate accordingly. Reciprocal altruism can evolve because cooperators do better than hermits or misanthropes. They enjoy the gains of trading surpluses of food, pulling ticks out of one another’s hair, saving each other from drowning or starvation, and babysitting each other’s children. Reciprocators can also do better over the long run than the cheaters who take favours without returning them, because the reciprocators will come to recognise the cheaters and shun or punish them. 
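The logic of that paragraph can be captured in a toy simulation (my own sketch, not anything from Trivers's papers; the payoffs, population sizes, and strategies are illustrative assumptions): agents meet repeatedly, helping confers a large benefit at a small cost, reciprocators remember favours denied and shun known cheaters, and cheaters never help. Cheaters profit in the first round, but reciprocators come out far ahead over the long run.

```python
import itertools

B, C = 3.0, 1.0   # benefit to the one helped, cost to the helper (B > C)
ROUNDS = 50

class Agent:
    def __init__(self, strategy):
        self.strategy = strategy      # "reciprocator" or "cheater"
        self.payoff = 0.0
        self.known_cheaters = set()   # memory for favours denied

    def will_help(self, other):
        if self.strategy == "cheater":
            return False              # accepts favours, never returns them
        return other not in self.known_cheaters  # shun known cheaters

agents = ([Agent("reciprocator") for _ in range(4)]
          + [Agent("cheater") for _ in range(4)])

for _ in range(ROUNDS):
    for x, y in itertools.combinations(agents, 2):
        for helper, beneficiary in ((x, y), (y, x)):
            if helper.will_help(beneficiary):
                helper.payoff -= C
                beneficiary.payoff += B
            else:
                # a denied favour is remembered and punished by shunning
                beneficiary.known_cheaters.add(helper)

recip = [a.payoff for a in agents if a.strategy == "reciprocator"]
cheat = [a.payoff for a in agents if a.strategy == "cheater"]
print(f"reciprocators: {sum(recip)/len(recip)}, cheaters: {sum(cheat)/len(cheat)}")
# → reciprocators: 296.0, cheaters: 12.0
```

Each reciprocator pays the cost of being cheated exactly once per cheater, then trades surpluses with other reciprocators forever; the cheaters' one-round windfall never grows. That is the narrow envelope Trivers described: repeated interaction, recognition, and memory are what make the strategy stable.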

All this was quickly snapped up by game theorists, economists, and political scientists. But in a less-noticed passage, Trivers pointed out its implications for psychology. Reciprocal altruists must be equipped with cognitive faculties to recognise and remember individuals and what they have done. That helps explain why the most social species is also the smartest one; human intelligence evolved to deal with people, not just predators and tools. They also must be equipped with moral emotions that implement the tit-for-tat strategy necessary to stabilise cooperation. Sympathy and trust prompt people to extend the first favour. Gratitude and loyalty prompt them to repay favours. Guilt and shame deter them from hurting or failing to repay others. Anger and contempt prompt them to avoid or punish cheaters. 

And in a passage that even fewer readers noticed, Trivers anticipated a major phenomenon later studied in the guise of “partner choice.” Though it pays both sides in a reciprocal partnership to trade favours as long as each one gains more than he loses, people differ in how much advantage they’ll try to squeeze out of an exchange while leaving it just profitable enough for the partner that he won’t walk away. That’s why not everyone evolves into a rapacious scalper: potential partners can shun them, preferring to deal with someone who offers more generous terms.

[---]

And since humans are language users—indeed, reciprocity may be a big reason language evolved—any tendency of an individual to reciprocate or cheat, lavish or stint, does not have to be witnessed firsthand but can be passed through the grapevine. This leads to an interest in the reputation of others, and a concern with one’s own reputation. 

[---]

But Trivers rapidly spotted what everyone else missed, and still misses, together with the less biologically obvious concept of self-deception, so there must be another piece to the puzzle. During his junior year at Harvard, Trivers suffered two weeks of mania and then a breakdown that hospitalised him for two months. Bipolar disorder afflicted him throughout his life. I can’t help but wonder whether Trivers’s fecund period was driven by episodes of hypomania, when ideas surge and insights suddenly emerge through clouds of bafflement. Gamers sometimes “overclock” their computers, running the CPU at a higher speed than the rated limit, which boosts performance but risks instability and crashes. Did Trivers experience bursts of overclocking in the early 1970s? It would explain another fact about the man that was obvious to anyone who met him later: Trivers reeked of marijuana. His heavy use may have had a source other than his Jamaicaphilia. One wonders whether Trivers was self-medicating, with long-term costs to his clock speed. 

- Steven Pinker


Tuesday, March 31, 2026

Remembering Robert Trivers

Robert Trivers, who died on March 12, 2026, was arguably the most important evolutionary theorist since Darwin. He had a rare gift for seeing through the messy clutter of life and revealing the underlying logic beneath it. E. O. Wilson called him “one of the most influential and consistently correct theoretical evolutionary biologists of our time.” Steven Pinker described him as “one of the great thinkers in the history of Western thought.”

I was Robert’s graduate student at Rutgers from 2006 to 2014. Long before I knew him personally, however, he had already established himself as one of the most original and insightful scientists of the twentieth century. In an astonishing series of papers in the early 1970s, he changed forever our understanding of evolution and social behavior.

[---]

The next year in 1972, Trivers published his most cited paper, Parental Investment and Sexual Selection. Here he offered a unified explanation for something that had puzzled biologists since Darwin. Writing perhaps the most famous sentence in all of evolutionary biology—“What governs the operation of sexual selection is the relative parental investment of the sexes in their offspring”—Trivers threw down the gauntlet and revealed a deceptively simple principle that reorganized the field. From that insight flowed one of the most powerful and falsifiable ideas in modern science: the sex that invests more in offspring will tend to be choosier about mates, while the sex that invests less will compete more intensely for access to them.

[---]

Each of these papers spawned entirely new research fields, and many have dedicated their careers to unpacking and testing the implications of his ideas. As Harvard biologist David Haig put it, “I don’t know of any comparable set of papers. Most of my career has been based on exploring the implications of one of them.” Indeed, it is hardly an exaggeration to say that his ideas gave birth to the field of evolutionary psychology and the whole line of popular Darwinian books from Richard Dawkins and Robert Wright to David Buss and Steven Pinker.

To know Robert personally, however, was to confront a more uneven and less orderly organism—to use one of his favorite words—than the one revealed in his papers. The man who explained the hidden order in life often struggled to impose order on his own. “Genius” is one of the most overused words in the language, with “asshole” not far behind, and I have known few people who truly deserved either label. Robert deserved both. He could be genuinely funny, extraordinarily generous, and breathtakingly perceptive, but also moody, childish, and needlessly cruel.

[---]

I used to joke that one reason he was so good at explaining behaviors the rest of us took for granted was that he was like an alien visiting our planet trying to make sense of our strange habits—why we invest in our children, why we are nice to our friends, why we lie to ourselves. He told me that conflict with his own father was part of the inspiration for parent-offspring conflict and one of the observations that led to his insight into parental investment came from watching male pigeons jockeying for position on a railing outside his apartment window in Cambridge.

Robert also had a respect for evidence and for correcting mistakes that I’ve rarely seen among academics, a group not known for their humility. He cared more about truth than about his reputation and retracted papers at great cost to himself and his career when he thought there were errors. He also knew that he was standing on the shoulders of the giants who had come before him. 

[---]

He was a lifelong learner with a willingness to do hard things. After his astonishing early success, he could have done what many academics do: stay in his lane, guard his territory, and spend the rest of his career commenting on ideas he had already had. Instead, in the early 1990s he saw that genetics mattered and spent the next fifteen years trying to master it. The result was Genes in Conflict, the 2006 book he wrote with Austin Burt, which pushed his interest in conflict down to the level of selfish genetic elements. Few scientists, after making contributions as important as he had, would have had the curiosity, humility, and stamina to begin again in an entirely new area.

Trivers was a great teacher, though not always in the ways he intended. He often asked dumb questions (“What does cytosine bind to again?” in the middle of a genetics seminar) and made obvious observations (“Did you know that running the air-conditioner in the car uses gas?”). But as he liked to say, “I might be ignorant, but I ain’t gonna be for long.” He could also be volatile and aggressive, and there were many times when he threatened to kick my ass. I may have been the only graduate student who ever had to wonder whether he could take his advisor in a fight. Once, over lunch at Rutgers, I asked about a cut on his thumb after he had returned from one of his frequent trips to Jamaica. He matter-of-factly told me that he had just survived a home invasion in which two men armed with machetes held him hostage. He escaped by jumping from a second-story window, rolling downhill, and stabbing both men with the eight-inch knife he carried everywhere he went. He was 67 at the time.

[---]

One of the last times I spoke with Robert, a fall had left his right arm nearly useless. He described it as “two sausages connected by an elbow.” He was a chaotic and deeply imperfect man, but also one of the few people whose ideas permanently changed how we understand evolution, animal behavior, and ourselves. Steven Pinker wrote that “it would not be too much of an exaggeration to say that [Trivers] provided a scientific explanation for the human condition: the intricately complicated and endlessly fascinating relationships that bind us to one another.” That seems just about right to me. His ideas are some of the deepest insights we have into human nature, animal behavior, and our place in the web of life. The mark of a great person is someone who never reminds us of anyone else. I have never known anyone like him. I’ll miss you, Robert. You asshole.

- More Here


Sunday, March 29, 2026

Grounded In Reality Piece On AI Mania

I don’t say that because I think that AI models are bad or because I think they won’t get better; I think that AI models are very good and will get much better. No. The fault is not with the models, but with us. The world is run by humans, and because it’s run by humans—entities that are smelly, oily, irritable, stubborn, competitive, easily frightened, and above all else inefficient—it is a world of bottlenecks. And as long as we have human bottlenecks, we’ll need humans to deal with them: we will have, in other words, complementarity.

People frequently underrate how inefficient things are in practically any domain, and how frequently these inefficiencies are reducible to bottlenecks caused simply by humans being human. Laws and regulations are obvious bottlenecks. But so are company cultures, and tacit local knowledge, and personal rivalries, and professional norms, and office politics, and national politics, and ossified hierarchies, and bureaucratic rigidities, and the human preference to be with other humans, and the human preference to be with particular humans over others, and the human love of narrative and branding, and the fickle nature of human preferences and tastes, and the severely limited nature of human comprehension. And the biggest bottleneck is simply the human resistance to change: the fact that people don’t like shifting what they’re doing. All of these are immensely powerful. Production processes are governed by their least efficient inputs: the more efficient the most efficient inputs, the more important the least efficient inputs.
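That last sentence is essentially Amdahl's law applied to organizations. A toy calculation makes it concrete (the split between automatable work and human bottlenecks is an illustrative assumption of mine, not a figure from the piece): however fast the automatable share gets, total time converges to the human floor.

```python
# Hypothetical workflow: 9 hours of automatable work plus 1 hour of
# human bottlenecks (approvals, politics, coordination) AI can't touch.
AUTOMATABLE = 9.0
HUMAN = 1.0

def total_hours(ai_speedup):
    """Total time when only the automatable share is accelerated."""
    return AUTOMATABLE / ai_speedup + HUMAN

for s in (1, 10, 100, 1000):
    print(f"{s:>5}x AI speedup -> {total_hours(s):.3f} hours")
# Even an infinitely fast AI never beats the 1-hour human floor, so the
# overall speedup is capped at 10x: the least efficient input governs.
```

This is why making the most efficient inputs more efficient makes the least efficient inputs *more* important: the faster the automatable part, the larger the share of total time the human bottleneck accounts for.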

In the long run, we should expect the power of technology to overcome these bottlenecks, in the same way that a river erodes a stone over many years and decades—just as how in the early decades of the twentieth century, the sheer power of what electricity could accomplish gradually overcame the bottlenecks of antiquated factory infrastructure, outdated workflows, and the conservatism of hidebound plant managers. This process, however, takes time: it took decades for electricity, among the most powerful of all general-purpose technologies, to start impacting productivity growth. AI will probably be much faster than that, not least because it can be agentic in a way that electricity cannot. But these bottlenecks are real and important and are obvious if you look at any part of the real world. And as long as those bottlenecks exist, no matter the level of AI capabilities, we should expect a real and powerful complementarity between human labor and AI, simply because the “human plus AI” combination will be more productive than AI alone.

- More Here


Saturday, March 28, 2026

The Fascinating Insights Of Robert Trivers

Trivers was one of the most—perhaps the most—influential evolutionary biologists of the 20th century. His work should be much more widely known in social and behavioural sciences, in particular in economics, as Trivers’ intellectual approach is very much in line with a game theoretic understanding of social interactions.

It is hard to overstate the importance of his work. Einstein famously published four groundbreaking papers in 1905, a year often referred to as his “Annus mirabilis”, during which he revolutionised physics. Trivers might be said to have had a “Quinquennium Mirabile” for the five years between 1971 and 1976, during which he produced a series of ideas that revolutionised evolutionary biology.

Reciprocal altruism - 1971:

The human altruistic system is a sensitive, unstable one. Often it will pay to cheat: namely, when the partner will not find out, when he will not discontinue his altruism even if he does find out, or when he is unlikely to survive long enough to reciprocate adequately. And the perception of subtle cheating may be very difficult. Given this unstable character of the system, where a degree of cheating is adaptive, natural selection will rapidly favor a complex psychological system in each individual regulating both his own altruistic and cheating tendencies and his responses to these tendencies in others. As selection favors subtler forms of cheating, it will favor more acute abilities to detect cheating.

Parental investment - 1972:

Since the female already invests more than the male, breeding failure for lack of an additional investment selects more strongly against her than against the male. In that sense, her initial very great investment commits her to additional investment more than the male’s initial slight investment commits him.

[---]

Critics of evolutionary theory sometimes argue that it makes no testable predictions and merely rationalises what has already been observed. Trivers’ work is one of the best counterexamples. In his paper on parental investment, Trivers argues that behavioural differences between males and females should reflect the degree of asymmetry in their parental investment. Species with greater asymmetry in parental investment should therefore show greater behavioural differences between the sexes than those with less, and if we ever find species with role reversals, we should also observe reversals in strategies. And indeed, in animals with less asymmetry in parental investment, like swans, the differences between males and females are less noticeable. In the rare cases where male investment is larger, as in seahorses, where the females literally place their eggs in the belly of the male, who incubates them, we observe a role reversal, with females courting males and competing for access to them.

Parent-offspring conflict - 1974:

The offspring can cry not only when it is famished but also when it merely wants more food than the parent is selected to give. Likewise, it can begin to withhold its smile until it has gotten its way. Selection will then of course favor parental ability to discriminate the two uses of the signals, but still subtler mimicry and deception by the offspring are always possible.

[---]

Obviously, overall parents tend to love their children and children tend to love their parents, but Trivers showed—with a theory now largely supported by empirical research— that the whole picture is more complex, because there are always also elements of conflict in parent-offspring relations.

Self-deception - 1976:

In the foreword to Dawkins’ The Selfish Gene, Robert Trivers proposed a solution to the problem of how deceivers can succeed despite ever-sharper detection: our tendency to self-deceive, to think we are better than we are, may serve as a mechanism that enables us to deceive others more effectively. He wrote:

If … deceit is fundamental to animal communication, then there must be strong selection to spot deception and this ought, in turn, to select for a degree of self-deception, rendering some facts and motives unconscious so as not to betray – by the subtle signs of self-knowledge – the deception being practiced. —Trivers (1976)

Commenting on this assertion, psychologist Steven Pinker remarked, “This sentence... might have the highest ratio of profundity to words in the history of the social sciences.”

[---]

In a 2011 paper with Bill von Hippel, Trivers developed this idea further, listing how self-deception can help. When trying to deceive, people may face cognitive load (the cognitive work required to make sure a web of lies does not have glaring contradictions). Given that lying is a betrayal of trust and is sanctioned when it is found out, it is risky, and people can get nervous about being found out, possibly showing signs of nervousness. Finally, people might try to mask signs of nervousness, thereby also behaving in a way that indirectly suggests lying. Self-deception, by inducing people to believe in their own lies, so to speak, can eliminate these possible clues while leading others to believe the preferred story of the person self-deceiving.

Trivers’ theory of self-deception has been supported by empirical research (including research I have contributed to). It explains what seems to be one of the most irrational patterns of human behaviour as emerging from strategic incentives.

Trivers has been one of the most influential evolutionary biologists, and his papers are still worth reading today. His insights, published more than 50 years ago, are fascinating. They often align very well with economic theories of behaviour, and it is therefore regrettable that his ideas are not more well-known in economics, and in particular in behavioural economics.

A key feature of Trivers’ take across these contributions was to see that beneath the world of social interactions we observe, there are deep structures in terms of incentives that shape the game we play. Understanding these games and their structures helps us make sense of the seemingly endless complexity of human psychology and social dynamics. In several key contributions, Trivers helped lift the veil on the underlying logic of human behaviour.

- More Here


Sunday, March 15, 2026

Good Bye Robert Trivers

He was the only person I wanted to meet but never met (although he lived in NJ).

Humanity hasn't scratched the surface of his work; thank you for everything, sir.



Saturday, February 21, 2026

Wisdom Of Taleb

To be a real Human, one needs intelligence, courage, tenacity, curiosity, and a strong sense of justice. 
Remove any one of the five and you end up with the equivalent of a lemon. 
Remove the sense of justice and you end up with a monster.

- Taleb


Friday, February 13, 2026

No-Technological-Solution Problem

Bingo! What an insight!

We sapiens fucked things up, are still fucking things up, and promise to continue fucking things up in the future. 

Changing their minds and behavior is not in the equation, but my species is planning to innovate the fuck out of technologies to clean up the mess they created while they continue to fuck things up. 

Hmm, god bless my species. 

Wonderful, wonderful interview with Dan Brooks about his new book A Darwinian Survival Guide: Hope for the Twenty-First Century:

Well, the primary thing that we have to understand or internalize is that what we’re dealing with is what is called a no-technological-solution problem. In other words, technology is not going to save us, real or imaginary. We have to change our behavior. If we change our behavior, we have sufficient technology to save ourselves. If we don’t change our behavior, we are unlikely to come up with a magical technological fix to compensate for our bad behavior. 

This is why Sal and I have adopted a position that we should not be talking about sustainability, but about survival, in terms of humanity’s future. Sustainability has come to mean, what kind of technological fixes can we come up with that will allow us to continue to do business as usual without paying a penalty for it? As evolutionary biologists, we understand that all actions carry biological consequences. We know that relying on indefinite growth or uncontrolled growth is unsustainable in the long term, but that’s the behavior we’re seeing now.

Stepping back a bit. Darwin told us in 1859 that what we had been doing for the last 10,000 or so years was not going to work. But people didn’t want to hear that message. So along came a sociologist who said, “It’s OK; I can fix Darwinism.” This guy’s name was Herbert Spencer, and he said, “I can fix Darwinism. We’ll just call it natural selection, but instead of survival of what’s-good-enough-to-survive-in-the-future, we’re going to call it survival of the fittest, and it’s whatever is best now.” Herbert Spencer was instrumental in convincing most biologists to change their perspective from “evolution is long-term survival” to “evolution is short-term adaptation.” And that was consistent with the notion of maximizing short term profits economically, maximizing your chances of being reelected, maximizing the collection plate every Sunday in the churches, and people were quite happy with this.

Well, fast-forward and how’s that working out? Not very well. And it turns out that Spencer’s ideas were not, in fact, consistent with Darwin’s ideas. They represented a major change in perspective. What Sal and I suggest is that if we go back to Darwin’s original message, we not only find an explanation for why we’re in this problem, but, interestingly enough, it also gives us some insights into the kinds of behavioral changes we might want to undertake if we want to survive.

To clarify, when we talk about survival in the book, we talk about two different things. One is the survival of our species, Homo sapiens. We actually don’t think that’s in jeopardy. Now, Homo sapiens of some form or another is going to survive no matter what we do, short of blowing up the planet with nuclear weapons. What’s really important is trying to decide what we would need to do if we wanted what we call “technological humanity,” or better said “technologically-dependent humanity,” to survive.

Put it this way: If you take a couple of typical undergraduates from the University of Toronto and you drop them in the middle of Beijing with their cell phones, they’re going to be fine. You take them up to Algonquin Park, a few hours’ drive north of Toronto, and you drop them in the park, and they’re dead within 48 hours. So we have to understand that we’ve produced a lot of human beings on this planet who can’t survive outside of this technologically dependent existence. 

[---]

That’s actually a really good analogy to use because, as you probably know, the temperatures around the Norwegian Seed Bank are so high now that the Seed Bank itself is in some jeopardy of survival. Its site was chosen because it was thought it was going to be cold forever, everything would be fine, and you could store all these seeds. And now all the area around it is melting, and the whole thing is in jeopardy. This is a really good example of letting engineers and physicists, rather than biologists, be in charge of the construction process. Biologists understand that conditions never stay the same; engineers engineer things as though the way things are now is the way things are always going to be. Physicists are always looking for some sort of general law that holds in perpetuity, while biologists are never under any illusions about this. Biologists understand that things are always going to change.

[---]

One of the things that’s really important for us to focus on is to understand why it is that human beings are so susceptible to adopting behaviors that seem like a good idea, and are not. Sal and I say, here are some things that seem to be common to human misbehavior, with respect to their survival. One is that human beings really like drama. Human beings really like magic. And human beings don’t like to hear bad news, especially if it means that they’re personally responsible for the bad news. And that’s a very gross, very superficial thing, but beneath that is a whole bunch of really sophisticated stuff about how human brains work, and the relationship between human beings’ ability to conceptualize the future, but living and experiencing the present.

There seems to be a mismatch within our brain — this is an ongoing sort of sloppy evolutionary phenomenon. So that’s why we spend so much time in the first half of the book talking about human evolution, and that’s why we adopt a nonjudgmental approach to understanding how human beings have gotten themselves into this situation.


 

Thursday, February 12, 2026

Culture Is The Mass-Synchronization Of Framings!

This can be good and bad too. Hence, I have an aversion to that word - "culture".

The genesis of almost all savagery, ruthlessness, and immorality against animals lies in so-called culture. 

This is an insightful piece on the same topic: 

A mental model is a simulation of "how things might unfold", and we all build and rebuild hundreds of mental models every day. A framing, on the other hand, is "what things exist in the first place", and it is much more stable and subtle. Every mental model is based on some framing, but we tend to be oblivious to which framing we're using most of the time (I've explained all this better in A Framing and Model About Framings and Models).

Framings are the basis of how we think and what we are even able to perceive, and they're the most consequential thing that spreads through a population in what we call "culture".

[---]

Each culture is made of shared framings—ontologies of things that are taken to exist and play a role in mental models—that arose in those same arbitrary but self-reinforcing ways. Anthropologist Joseph Henrich, in The Secret of Our Success, brings up several studies demonstrating the cultural differences in framings.

He mentions studies that estimated the average IQ of Americans in the early 1800s to have been around 70—not because they were dumber, but because their culture at the time was much poorer in sophisticated concepts. Their framings had fewer and less-defined moving parts, which translated into poorer mental models. Other studies found that children in Western countries are brought up with very general and abstract categories for animals, like "fish" and "bird", while children in small-scale societies tend to think in terms of more specific categories, such as "robin" and "jaguar", leading to different ways to understand and interface with the world.

But framings affect more than understanding. They influence how we take in the information from the world around us. Explaining this paper, Henrich writes:

People from different societies vary in their ability to accurately perceive objects and individuals both in and out of context. Unlike most other populations, educated Westerners have an inclination for, and are good at, focusing on and isolating objects or individuals and abstracting properties for these while ignoring background activity or context. Alternatively, expressing this in reverse: Westerners tend not to see objects or individuals in context, attend to relationships and their effects, or automatically consider context. Most other peoples are good at this.

How many connections and interrelations you consider when thinking is in the realm of framings. If your mental ontology treats most things as largely independent and self-sufficient, your mental models will tend to be, for better or worse, more reductionist and less holistic.

[---]

The basic force behind all culture formation is imitation. This ability is innate in all humans, regardless of culture: we are extraordinarily good imitators. Indeed, we are overimitators, sometimes with unfortunate consequences.

Overimitation ... may be distinctively human. For example, although chimpanzees imitate the way conspecifics instrumentally manipulate their environment to achieve a goal, they will copy the behavior only selectively, skipping steps which they recognize as unnecessary [unlike humans, who tend to keep even the unnecessary steps]. ... Once chimpanzees and orangutans have figured out how to solve a problem, they are conservative, sticking to whatever solution they learn first. Humans, in contrast, will often switch to a new solution that is demonstrated by peers, sometimes even switching to less effective strategies under peer influence.

— The Psychology of Normative Cognition, Stanford Encyclopedia of Philosophy, emphasis theirs.

We have a built-in need to do what the people around us do, even when we know of better or less wasteful ways. This means that we can't even explain culture as something that, while starting from chance events, naturally progresses towards better and better behaviors. That's what science is for.

Once the synchronized behaviors are in our systems, when we are habituated to certain shared ways of doing things, these behaviors feed back into our most basic mindsets, which guide our future behaviors, which further affect each other's mindset, and so on, congealing into the shared framings we call culture, i.e.: whatever happens to give the least friction in whatever happens to be the current shared behavioral landscape.

This is why, often, formal rules and laws do indeed take root in a culture: not because they're rules, but because the way they are enforced creates enough friction—or following them creates enough mutual benefits—that, like in the corridor lanes, crowds will settle into following them. This is also why, perhaps even more often, groups will settle into the easy "unruly" patterns.