Thursday, May 7, 2026

Culture - The Word That Fucked Up Our Species

I have written so many times about how almost all atrocities committed against our fellow animal family members are not considered immoral, since people take refuge behind the fucked up excuse of “culture”. 

People use culture as macro bullshit to avoid focusing on micro morality, which is precious for life on earth. 

Alex Nowrasteh’s wonderful piece looks at this monster from a different angle. Different angle, but same monster. 

The cleanest test is the divided-country natural experiment. North Korea and South Korea share a language, ethnicity, history, and culture up to 1945. One is among the richest countries on earth, the other among the poorest. East and West Germany diverged dramatically under different institutions and converged after reunification. Mainland China stagnated under Mao while Taiwan, Singapore, and Hong Kong prospered, all four sharing Chinese culture. In every case, the culture was identical on both sides of the border. The incentives, shaped by the institutions, are what changed. The outcome followed the institution, not the culture. Untangling causality is difficult, sometimes impossible, but that’s no reason to embrace a false explanation like “the culture made them do it.”

At its root, the culture discourse is anti-intellectual. Culture is a faux explanation for social behavior and outcomes that have real explanations. Think harder. Use AI to search the literature if you have to because other researchers have probably already written about the issue you claim is just caused by culture. The cultural explanation is the one you reach for when you’ve decided the search isn’t worth your time. Better to remain quiet if culture is the only explanation you’ve got. Here are some examples.

[---]

If a country is poor because of its culture, nobody has to examine the bad incentives facing members of that society. Intellectual laziness explains the rest. Finding the price, the constraint, the institutional mechanism that creates an incentive is hard, but invoking culture as if it’s a magical exogenous decider lets you stop searching. Cultural explanations are cheap to produce, requiring only anecdotes rather than data, prices, or evidence. It feels like an answer because it has the grammatical structure of one. “Japanese people ride trains because of their culture” masquerades as an explanation, but it’s just a tautology.

Culture is endogenous to everything. Claiming culture causes an outcome without first ruling out that the outcome’s causes also produced the culture is circular reasoning. Every cultural explanation must first survive a price, incentive, and institutional audit. Few of them do, but those that do are extraordinary findings, which is perhaps another explanation for why so many claim it. Nobody would let economists get away with explaining a recession or high unemployment with the explanation, “It’s the economy.” We shouldn’t let others get away with the equally lazy non-explanation of “it’s the culture.”

 

Wednesday, May 6, 2026

Animal Research Doesn’t Need Better Messaging. It Needs an Exit Strategy

The problem is not poor communication by researchers, but systemic lack of transparency and accountability in animal labs. You cannot whitewash an industry that is fraught with infractions that clearly document negligence and abuse of animals in labs.

Industry defenders claim that animal research is “heavily regulated.” In reality, oversight is largely dependent on self-policing. The cornerstone of federal oversight is built on voluntary compliance through an “assurance” document submitted by the laboratory. Once this is approved, the federal oversight agency “grants considerable authority to institutions for self-regulation.” Compounding this problem, inspections by federal authorities are infrequent, often occurring only once every few years and are typically announced in advance.

Meanwhile, the vast majority of animals used in experiments—by most estimates numbering over 100 million mice—are not even covered under the US Animal Welfare Act. Internal oversight bodies, known as the Institutional Animal Care and Use Committees, are embedded within the very institutions they regulate, creating inherent conflicts of interest.

[---]

Another common claim is that critics rely on outdated information. But delays in public awareness are largely a product of the system’s opacity. Accessing records requires the filing of formal public records requests that can take months or longer to process. Even official databases lag behind real-time conditions, when they are even available. What is perceived as “old news” is often simply the first moment the public is allowed to learn what has already occurred.

Perhaps the most striking attempt to downplay these issues is the comparison of laboratory violations to incident reports at daycare centers. The analogy collapses under even minimal scrutiny. The harms documented in research facilities—botched surgical procedures, burns, dehydration, strangulation, and fatal injuries—bear no resemblance to childcare incidents.

Even basic “housekeeping” standards are not consistently met in labs. Animals have died due to overheating, drowning, exposure, and unsafe enclosures. In one recent case, dozens of rabbits drowned in preventable accidents. These are not edge cases; they are part of a documented pattern that raises serious questions about the system’s ability to safeguard even minimal welfare.

[---]

More importantly, the conversation should not stop at reform. Increasingly, scientific and regulatory communities are investing in alternatives that do not rely on animal use. Emerging methods like organ-on-chip technologies, in silico studies, advanced cell cultures, and more are now being prioritized by the US FDA and NIH for their ability to deliver human-relevant outcomes. These innovations did not emerge from efforts to defend the status quo, but from recognition that better approaches are both possible and necessary.

Animal research does not need a more effective communication strategy to explain away its problems. It needs a plan to move beyond them. With over 90 percent of animal experiments failing to produce meaningful results for human health, this is a system that is seriously underperforming because it is scientifically unsound. Add to that the failed oversight of millions more animals that can be reasonably cared for, and you have an industry that no amount of reframing can improve. The question is not whether the industry communicates the right message. It is whether the system, as it currently exists, can be justified at all.

- More Here

Fuck… thank god for Max, otherwise I wouldn’t have lived with these miserable sapiens. And probably because of him, for the past 15 years I haven’t taken a single pill nor been to a doctor. 



Tuesday, May 5, 2026

Derek Parfit - What Is The Impact Of Thousands Of Small Environmental Or Personal Abuses Over Time?

One particular example I’ve always liked (especially since as a kid I had similar thoughts) provides a vivid illustration of the psychology underlying the dismissal of global warming. It shows that the consequences of our decisions need not occur in the distant future for us to discount them. They can occur out of sight or after so many steps as to seem distant. The example (embroidered a bit here) appears in Derek Parfit’s book “Reasons and Persons,” where he discusses the case of a man strapped to a hospital bed, say by a psychopath, in some indeterminate place with electrodes attached to his heart. Rotation of a dial on the other side of the world minusculely and imperceptibly increases the current in the electrodes and the stress on the man’s heart.

Perhaps a free piece of candy, a pleasant buzz, and a snapshot with the dial are on offer from a mysterious donor as an incentive to anyone in the distant location who twists the dial. Assuming it takes 10,000 people, each rotating the dial once to electrocute the victim, what degree of guilt, if any, do we assign to each individual dial-twister? After all, none of the dial-twisters know the poor man in question nor have they ever been in his part of the world. They might well doubt there is such a man if the situation isn’t clearly communicated to them or if it is ridiculed by a few influential people. Whatever their excuses, however, they are likely to be at least vaguely aware of rumors about the situation. How then do we deposit all these tiny bits of personal guilt into some moral bank account to save the victim? Or do we just shrug and dismiss the significant probability of ordinary indifferent people killing the distant stranger?

The real question of course is, What is the impact of thousands of small environmental or personal abuses over time? In the context of this rather morbid tale of a psychopath, most environmentalists would probably opt to stop rotating the dial or at least to rotate it very infrequently. 

- More Here


Sunday, May 3, 2026

Curiosity Is No Solo Act

The Foucauldian assumption that networks of information precondition ways of thinking, doing, and being has an ancient, rich, and still robust precedent in Indigenous philosophy. Rooted in the wisdom that everything that exists is connected to everything else, Indigenous philosophy foregrounds the vast and complex system of relational networks. While Western philosophy, especially post-Enlightenment, has typically emphasized the individual nodes of knowers and knowns, Indigenous philosophy has consistently contributed to a thinking on the edge, or edgework. (It is not insignificant that the English language is 70 percent nouns, while Potawatomi is 70 percent verbs. Or that Western settlers conceptualize land as private property and commodity capital, while Indigenous peoples understand it as a connective tissue in a larger gift economy.) The difference in ethos between piecemeal and of a piece with could not be more pronounced.

In an Indigenous onto-epistemology, one is always coming to know in intimate relationship with other knowers, including not only community members, but also all the components of the earth itself. In “Braiding Sweetgrass,” Potawatomi botanist Robin Wall Kimmerer tells the story of her own Indigenous curiosity. Growing up surrounded by “shoeboxes of seeds and piles of pressed leaves,” she knew the plants had chosen her. Declaring a botany major in college, she soon learned to stockpile taxonomic names and functional facts, all while letting her capacities to attend to energetic relationships fall into disuse. It was not until rekindling her connections with Indigenous communities — and specifically Indigenous scientists — that she remembered how “intimacy gives us a different way of seeing.” Her scholarship and outreach are now focused on honoring this ray of scientific and social wisdom.

What is perhaps most distinctive about Indigenous philosophy is its imbrication of a relational cosmology with a relational epistemology. At the heart of this worldview is “the eternal convergence of the world within any one thing,” writes Carl Mika, such that “one thing is never alone and all things actively construct and compose it.” From this perspective of deep holism, talk of knowing any one thing is “minimally useful.” As such, knowledge is not properly propositional but instead procedural; it is less concerned with knowing what than with knowing how. And its wisdom lies in “sharing” more than “stating.”

- More Here

Thursday, April 30, 2026

The Social Edge of Intelligence

If AI capability depends on the social complexity of human language production—and if AI deployment systematically reduces that complexity through cognitive offloading, homogenization of creative output, and the elimination of interaction-dense work—then the technology is gradually undermining the conditions for its own advancement. Its successes, rather than failures, create a spiral: a slow attenuation of the very substrate it feeds on, spelling doom.

This is the Social Edge Paradox, and the intellectual tradition it draws from is older and more interdisciplinary than most AI commentary acknowledges.

Michael Tomasello’s evolutionary research establishes that human cognition diverged from that of other primates through something other than superior individual processing power. The real impetus came from the capacity for collaborative activity with shared goals and complementary roles. He argues that even private thought is “fundamentally dialogic and social” in structure—an internalization of interaction patterns. Autonomous neural capacity is far from enough to account for the abilities of human thought.

Robin Dunbar’s social brain hypothesis quantifies the link: neocortex ratios predict social group size across primates; language evolved as a mechanism for managing relationships at scales too large for grooming. Two-thirds of conversation is social, relational, reputational. Language is often mistaken as an information pipe, but it is really a social coordination technology.

My own position is that collective intent engineering, found in forms as familiar as simple brainstorming, accounts for most frontier cognitive expansion. The intelligent algorithms of today have not been built with this critical function in mind.

[---]

The AI industry is telling a story about the future of work that goes roughly like this: automate what can be automated, augment what remains, and trust that the productivity gains will compound into a wealthier, more efficient world.

The Social Edge Framework tells a different story. It says: the intelligence we are automating was never ours alone. It was forged in conversation, argument, institutional friction, and collaborative struggle. It lives in the spaces between people, and it shows up in AI capabilities only because those spaces were rich enough to leave linguistic traces worth learning from.

Every time a company automates an entry-level role, it saves a salary and loses a learning curve, unless it compensates. Every time a knowledge worker delegates a draft to an AI without engaging critically, the statistical thinning of the organizational record advances by an imperceptible increment. Every time an organization mistakes polished output for strategic progress, it consumes cognitive surplus without generating new knowledge.

None of these individual acts is catastrophic. However, their compound effect may be.

The organizations that will thrive in the next decade are not those with the highest AI utilization rates. They are those that understand something the epoch-chaining thought experiment makes vivid: that AI’s capabilities are an inheritance from the complexity of human social life. And inheritances, if consumed without reinvestment, eventually run out. This is particularly critical as AI becomes heavily customized for our organizational culture.

[---]

The Social Edge is more than a metaphor. It is the literal boundary between what AI can do well and what it will keep struggling with due to fundamental internal contradictions. Furthermore, the framework asks us all to pay attention to how the very investment thesis behind AI also contains the seeds of its own failure. And it reminds leaders that AI’s frontier today is set by the richness of the social world that produced the data it learned from.

- More Here



Wednesday, April 29, 2026

The Rise And Fall Of ‘Petty Tyrants’

Petty tyrants are more focused on personal victories than on national priorities. The good news is that they carry within them the seeds of their own destruction. Once we understand their common flaws, it becomes apparent why they eventually fall rapidly from power and leave few changes to government that last. Understanding this pattern can help us recognize a critical feature that distinguishes leaders who damage their nations from those who create lasting good: their relationship to truth.

[---]

One of the worst mistakes the opposition can make is extending contempt for the tyrant into contempt for the tyrant’s supporters. Most of these supporters sincerely believed that the tyrant would be more likely to solve their problems — often real grievances that the opposition had failed to address. Blaming the supporters denies the reality of the failures and reinforces their support for the tyrant. 

As Napoleon consolidated his power, his critics described the farmers who supported him as “a sack of potatoes” and Parisian workers as having “their minds crammed with vain theories and visionary hopes.” This attitude of condescension made it easier for Napoleon to position his opposition as arrogant elites and himself as the champion of ordinary people.

When the opposition makes it socially acceptable to show contempt for anyone who disagrees, they cooperate with the tyrant in creating a cycle of divisiveness that distracts from reality. That cycle sustains the tyrant’s hold on power. 

[---]

Once they had disabled democracy, these tyrants managed to hold onto power long after their popularity faded. Even removing the tyrant was not a guarantee of short-term success. In the Philippines, democracy has still not fully recovered.

It is much easier to stop the rise of a tyrant than to accelerate their fall. It would have been far better for each nation if the leaders of the opposition had learned from their failures, postponed their short-term ambitions and concentrated on preserving the democracy.

[---]

The legacies of these truth-based leaders have long outlived the leaders themselves, and they continue to benefit us in the 21st century. Bismarck’s social safety nets are still thriving in Germany, and they have been widely copied. Singapore is now a prosperous nation, and a Singaporean passport will get you visa-free entry into more countries than any other. Roosevelt’s Social Security is so successful that politicians on both sides of the aisle now compete to take credit for protecting it.

Look at what endures from these six stories: not the propaganda, the posters and parades, but the institutions that continue to serve their nations decade after decade. The children who are healthy and literate. The elderly and disabled who live in security and dignity. The deposits, safe in the bank. The honest civil services that provide real protections and solve real problems. These are the legacies that matter.

- More Here


Tuesday, April 28, 2026

Golden Retriever Lifetime Study - Update From Morris Animal Foundation

Got this poignant email from Morris Animal Foundation today: 

As we approach the 15th year of the Golden Retriever Lifetime Study, we are entering a new, exciting stage every pet owner will appreciate. To date, 386 of our dogs have lived to age 13 or older, including three who have reached the remarkable milestone of 15 years. As a lifelong golden retriever owner, it warms my heart to see these dogs thrive. As a veterinarian and epidemiologist, I am eager to leverage this unique dataset to understand what sets these “super-seniors” apart. After all, that is our ultimate goal: we don’t just want dogs to avoid cancer, we want dogs that remain healthy and vibrant well into their golden years.

To capture the shifting challenges these dogs may face as they age, the Study utilizes supplemental surveys that participants can opt into every six months. These provide vital data on mobility and cognition. This initiative began when most dogs in the Study were approximately 8 years old and is rapidly becoming a robust dataset that will aid researchers for decades. Current research suggests dogs fall into two categories: "cognitive maintainers" and "cognitive decliners." Our data is uniquely positioned to help us identify the specific factors that contribute to prolonged cognitive health.

Because the Golden Retriever Lifetime Study is longitudinal, scientific interest has accelerated alongside the Study’s progress. While we have sadly said goodbye to 1,780 heroes, the information they contributed from puppyhood onward is of historic importance. As I write this, more than 100 studies have leveraged our data to investigate a wide variety of health topics. We recently closed our annual call for canine research proposals, and of the 142 pre-proposals submitted, 21 plan to incorporate Study data.

While the Study’s evolution into aging is exciting, our primary objective — to make progress against canine cancer — remains unchanged. The Foundation recently invested in two cancer studies that showed promising initial results. Both successfully identified genetic regions related to hemangiosarcoma and histiocytic sarcoma, respectively. Researchers are now building on these findings using Study data, which could lead to life-saving genetic tests. These are just two examples of the many promising studies currently underway that have the potential to change the future of canine health.

From all of us at Morris Animal Foundation, thank you for making this work possible and supporting the research that will help dogs run, play and be with us to create more memories well into their golden years.

Please keep up the good work; your team will always have wishes from Max and me.  

I said this when Max had cancer and I am saying it now - a lot of insights will come from this study and the Dog Aging Project, which will help sapiens, although my moronic species refuses to give data. 

Researchers need a lot of data from healthy people to understand what not having cancer looks like - fundamental machine learning common sense. 
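A minimal sketch of that point, using made-up tumor-marker numbers (hypothetical data, not from the study): with only sick patients there is no baseline, so no decision threshold can be set; add a healthy group and even the crudest classifier becomes possible.

```python
import statistics

# Hypothetical biomarker readings (illustrative numbers only)
cancer = [5.1, 4.8, 5.6, 5.0, 5.3]
healthy = [2.0, 2.4, 1.9, 2.2, 2.1]

# With only the cancer list, any cutoff is arbitrary: you cannot say
# whether 5.0 is "high" without knowing what healthy looks like.
# With both groups, a simple midpoint between the group means works:
threshold = (statistics.mean(cancer) + statistics.mean(healthy)) / 2

def predict(reading):
    """Classify a new reading against the learned threshold."""
    return "cancer" if reading > threshold else "healthy"

print(threshold)      # midpoint between the two group means
print(predict(4.9))
print(predict(2.3))
```

The same logic scales up: real models just learn a far more elaborate boundary, and they still need both sides of it represented in the data.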


Monday, April 27, 2026

Plants Can Hear The Sound Of Falling Rain

Deep inside the inner ear are tiny calcium carbonate crystals called otoliths that swish around in fluid-filled sacs when we move, helping our brains detect acceleration. Plants have similarly situated calcium carbonate crystals called “statoliths” in their root cells. Instead of detecting acceleration, however, the crystals tell the plants which way is down so the roots can grow in that direction. While the otoliths in our ears don’t help us hear, the statoliths in plant roots could help plants hear, according to a new study published in Scientific Reports. 

Researchers from the Massachusetts Institute of Technology wanted to find out if sound waves from natural phenomena like rain could be energetic enough to jostle the statoliths in plants and facilitate germination. To test their theory, they used rice seeds, which typically grow in shallow water, an environment that can transmit sound waves more efficiently.

“Water is denser than air, so the same drop makes larger pressure waves underwater,” study author Nicholas Makris said in a statement. “So if you’re a seed that’s within a few centimeters of a raindrop’s impact, the kind of sound pressures that you would experience in water or in the ground are equivalent to what you’d be subject to within a few meters of a jet engine in the air.”

- More Here


Sunday, April 26, 2026

Ideas Of Slavery

Now a new book, John Samuel Harpham’s The Intellectual Origins of American Slavery, asks us to reconsider that standard account of events. Harpham does not discount economic or imperial explanations for the rise of New World slavery; what he suggests, instead, is that those explanations can make sense only within a culture where “slavery was available as an option.” His goal, as he puts it, is to discover “the reasons for which slavery was understood to be a status about which narrow-minded men could make calculations.”

The result is ironic and tragic in the way of the best history. Initially, Harpham claims, the English hesitated to embrace African slavery. Then, when they did, their decision was not based on any perceived racial difference or inferiority. It was based, instead, on something even more troubling: Harpham believes that English people enslaved Africans not because they were seen as different but because they seemed so very similar.

[---]

Harpham’s history reconsiders Jordan’s account of that “unthinking decision.” If the keynote of Jordan’s book was that early English observers saw Africans as different, the keynote of Harpham’s is that English people had a lot of different ideas: about Africa, about Africans, about skin color and about slavery. Nowhere was there broad agreement, he claims, except perhaps about the essence of slavery. But early English ideas about slavery were also different from what we might expect.

Throughout the period when colonial slavery was taking shape, Harpham explains, English writers still relied heavily on a conception of slavery that they inherited from ancient Rome. In contrast to the ancient Greek idea that some people could be “natural slaves,” a view most commonly associated with Aristotle, Roman law defined slavery as the product of convention. Individuals were naturally free, in this view, but could be reduced to slavery if they committed a crime or, more commonly, were captured in war. “In short,” Harpham writes, “slavery arose in Roman law as the result of history rather than nature, as a fact of modern life rather than a timeless feature of the universe.”

Accordingly, the central question for English writers in the late sixteenth and seventeenth centuries was not what qualities made a person a natural slave—a question that might lead to a racial answer—but instead what circumstances allowed for enslavement. The English showed a special interest in this question, Harpham suggests, because they were simultaneously forging a national self-identity based on “the conviction that theirs was a nation dedicated to freedom.” This conviction grew out of internal developments, such as the decline of villeinage (a kind of serfdom), but it also took shape in direct contrast to England’s chief international rivals, the Spanish and the Portuguese.

- More Here


Saturday, April 25, 2026

Rebel - Refuses To Consent To Falseness, Injustice, Or Mediocrity

Rebellion is not merely reactive but creative. It doesn’t only tear down — it seeks to reimagine. Albert Camus understood this when he wrote that “I rebel — therefore we exist.” For Camus, rebellion was the refusal to accept absurdity passively. It was the insistence that life and justice still matter even in a godless world. To rebel, then, is to affirm the possibility of meaning precisely where meaning seems most threatened. It is to insist that one’s freedom and integrity are worth defending, even when doing so brings discomfort or risk.

Rebellion typically begins in solitude but inevitably reaches toward solidarity. The solitary rebel says no to hypocrisy, cruelty, or exploitation; yet the truest form of that no is said on behalf of all. 

[---]

To live rebelliously in this deeper sense requires courage of a particular kind — the courage to trust one’s perception of what is wrong and to act in accordance with one’s conscience. Many people lose meaning because they no longer believe their own perceptions. They feel what is off — at work, in politics, in relationships — but they suppress that intuition in order to get by. Over time, this suppression breeds cynicism and fatigue.

Rebellion restores vitality by reuniting perception with action. It says: “I see what I see, I know what I know, and I will live in truth.” That alignment itself is deeply meaningful.

The pathway of rebellion does not exclude tenderness or humility. The most enduring rebels — figures like Rosa Parks, Mahatma Gandhi, or the many artists and thinkers who defied oppressive norms — rebelled not out of hatred but out of love: love for justice, for humanity, for the sanctity of truth. Rebellion, rightly understood, is a form of devotion. It refuses to let meaning be trampled by fear or conformity. It honors life enough to resist what diminishes it.

For the individual seeking reenchantment, rebellion may take quieter, more personal forms. It might mean refusing to keep up a façade of perpetual busyness or success. It might mean declining to participate in conversations that are mean-spirited or false. It might mean leaving a career that pays well but deadens the heart. In each case, rebellion functions as a reclamation of self. By saying “no” to what is meaningless, one makes room for what is real to appear. The act of refusal becomes the act of awakening.

This pathway, however, carries hazards. A rebel without an anchoring vision and a sense of humanity can become a cynic or destroyer, mistaking constant opposition for depth. To avoid this, it would be wise to tether rebellion to love, to beauty, to some image of the world as it could be. The purpose of rebellion is not to stay angry forever but to clear space for creation, renewal, and joy. Rebellion that remains open-hearted is not corrosive but cleansing; it removes what is false so that truth can breathe again.

In this way, rebellion restores the pulse of meaning through the experience of agency. The disenchantment of modern life often stems from powerlessness — feeling that one’s choices make no difference, that the world is too vast or corrupted to be changed. To rebel, even in a small and symbolic way, is to reclaim a measure of agency. It reignites the sense that one’s voice, one’s actions, one’s very stance toward the world still matter. That sense of mattering is one of the foundations of meaning itself.

Finally, rebellion reenchants because it reconnects us to the moral dimension of existence. It reminds us that life is not neutral or arbitrary but charged with value. Each act of rebellion is, at its core, an assertion of value: this matters; I matter; truth matters. That moral clarity dispels the fog of meaninglessness more effectively than any abstract philosophy. It returns us to the felt conviction that life is worth the trouble, that the struggle itself is vital.

- More Here


Tuesday, April 21, 2026

Wisdom Of Isabel Allende

You need your space, and that ‘room of one’s own,’ as Virginia Woolf put it. That room is also your time, your space, your silence—that has to be sacred. I need to close the door to my office when I finish for the day, and no one should get in. I have the idea in my mind that the story is an entity that lives in that room, with the characters, the emotions that I have been putting together. And when I come back the next day, I open the door; it’s waiting for me, intact. I don’t want anybody to go in and vacuum, or use my computer. That would kill me if somebody used my computer!

When I finally close the computer for the day, I look at my desk and put things in piles, and I usually have a candle on, because for me the candle reminds me that I am in the process of writing—not because there’s anything magic about it. And then I blow out my candle—that ends the day. And I look around to see that everything is organized, and I leave. I’m incredibly organized, because that’s part of my structure. When I walk into my office, it looks like a lab. It’s impeccable. And when I leave, it’s impeccable. I never leave a messed-up place, because when I come back, if everything is disorganized, I feel the story isn’t there for me.

Writing is pretty much like training for sports. You train and train and train to be able to play the game. And nobody cares how much you’ve trained. Nobody cares about the effort. What matters is the performance at the end, the result. Sometimes I research a whole book for one sentence, but that’s part of the job, part of the training, so that the performance will be impeccable. Nothing comes out of thin air. But once I have my hands on the keyboard, and I start creating, then things start to happen immediately, almost immediately. But I need to get to that point. I spend hours and hours alone and in silence. Without the silence and the structure, I wouldn’t be able to do it.

- David Epstein Interview with Isabel Allende


Monday, April 20, 2026

The Ideology, Economics, & Psychology Behind The Modern World's Draining Of Color From Homes, Cars, & Everyday Objects.

If you go to the slums of Bombay, Brazil, Mexico, or Kenya, you will notice a riot of colors. Yes, there is crime there, but most people who live paycheck to paycheck are content and happy. 

Color helps psychologically! It’s the “biophilia” of living in a rainforest - it’s one of the least studied simple psychological boosters. 

Max’s home is a riot of colors - the living room is yellow, the basement is pink, the bedrooms are other colors - no neutral colors allowed. I learned this a long time ago, and even my clothes have a variety of colors. 

I noticed something weird maybe a year or two ago, five-plus years after Max passed away - a lot of my new t-shirts were greyish… so I cleaned out my wardrobe and brought color back into my life. I had been subconsciously depressed without Max. 

Color is the simplest and easiest confidence and psychological booster we have, but alas, we sapiens tend to ignore it. 

A very good history of why this transformation from color to grey happened in the US and spread across the globe: 

From Hawaii to Maine, from Alaska to Florida, the most popular shade for your home’s exterior is some variation of gray, off-white, beige, or greige — a hue so existentially undecided that it can’t commit to being either gray or beige, and so ends up neither, and both.

But how can this be? America is anything but monochrome. It contains multitudes of cultures, climates, and landscapes, and people who disagree, loudly and publicly, about nearly everything. So why, when Americans need a tin of house paint, do they so often reach for the neutral shelf? Why does the average house in this great and varied nation look like it’s been dipped in a vat of Resigned Indifference®?

The answer is a phenomenon dubbed “the grayening”: a gradual but relentless draining of pigment, not just from exteriors but also from interiors and from the stuff of everyday life, like cars and phones. In 2020, researchers at the Science Museum Group in London found evidence of the trend’s longevity. Feeding roughly 7,000 photographs of everyday objects — kettles, lamps, cameras — from the late 1800s to 2020 into an algorithm, they then asked it to track color distribution over time.

The result: a striking shift toward achromatic — that is, neutral — colors in material culture.

[---]

In his 1908 essay “Ornament and Crime,” Austrian architect Adolf Loos argued that ornamentation was not merely unnecessary, but a sign of arrested moral development. Truly evolved people, he suggested, would gravitate toward clean lines and plain surfaces. Applied ornament, including the use of color as decoration, didn’t enhance; it cluttered and distracted.

Loos’s polemical target was Art Nouveau, then in full frothy bloom. His arguments were influential on the Bauhaus school of art, which canonized restraint and straight lines. It, in turn, informed the International Style that swept global architecture from the 1930s onward, a style that favored glass, steel, and concrete. All gray: not just by default, but as a statement of seriousness.

Le Corbusier, pioneer of what we now simply call modern architecture, made the point with characteristic charm, declaring that color “is suited to simple races, peasants and savages.” Ouch.

The desaturation didn’t stop at buildings. Car colors have been meticulously catalogued since the dawn of the automotive age, making them a useful proxy for the broader culture’s chromatic pulse. Black had its first heyday as a car color about a century ago, when Henry Ford famously quipped that his Model T was available “in any color the customer wants, as long as it’s black.”

Sunday, April 19, 2026

How Not To Save The Planet

Wendell Berry, one of the few remaining writers in the older topophilic tradition, understands this better than anyone. In 1991, he wrote an essay for the Atlantic—a magazine for which Thoreau had written—in response to the then-common slogan “Think globally, act locally”:

Properly speaking, global thinking is not possible. Those who have “thought globally” (and among them the most successful have been imperial governments and multinational corporations) have done so by means of simplifications too extreme and oppressive to merit the name of thought. Global thinkers have been, and will be, dangerous people.

Global thinking is, for Berry, intrinsically and necessarily destructive of actual places:

Unless one is willing to be destructive on a very large scale, one cannot do something except locally, in a small place…. If we want to put local life in proper relation to the globe, we must do so by imagination, charity, and forbearance, and by making local life as independent and self-sufficient as we can—not by the presumptuous abstractions of “global thought.”

I would add to this that when global thought is not actively destructive it nevertheless tends to encourage depression in those who attempt it—which accounts, I think, for the gloomy and finger-wagging tone to which we have become accustomed.

[---]

This, I think, is an object lesson for those who wish to save the planet. If you would save the planet, forget The Planet; if you would sustain and repair nature, forget Nature. Remember the example of Gilbert White. Think only of the sensual properties of one dear place. If you learn to love a pond or a creek or a valley, then what you love others will love—and will perhaps also come to find some element of their own local environment dear to them, dear enough to conserve and protect. Our obligations arise from our deepest affections. You just have to show them how.

- More Here


Saturday, April 18, 2026

Urban Evolution

The water flea Daphnia magna — a freshwater crustacean up to a few millimeters in size — is one species busy evolving in cities in response to heat, pollution and even local predators. These zooplankton can prevent algal blooms that overload ponds with toxic cyanobacteria, so this adaptation may have a big effect on freshwater ecosystems, says Kristien Brans, an evolutionary ecologist at KU Leuven in Belgium, who studies the water fleas.

One basic challenge in such urban investigations is to distinguish between two modes of response to altered environments: evolution (genetic alterations that appear across generations) and phenotypic plasticity (the flexibility to alter physical and/or behavioral characteristics in an organism’s lifetime).

For water fleas, it turns out that both are at play. Fleas raised in lab experiments at temperatures matching urban ponds are smaller, and mature and reproduce more quickly, than fleas reared at rural pond temperatures that tend to be several degrees cooler. (That’s phenotypic plasticity — no genetic changes have occurred.) But over time, urban water fleas living generation after generation in warmer, urban pond waters have genetically changed to have those same kinds of alterations. (That’s evolution.)

[---]

GLUE took white clover’s cyanide production as a model to study three questions. Do instances of urbanization in different cities lead to similar local environments? Do those similar environments lead the clover to evolve along the same lines — display parallel evolution — in a trait of interest (in this case, cyanide production)? And if so, what environmental factors are driving the pattern?

In a new Science paper, the collaborators showed that urban environments do indeed end up quite similar to each other, with less vegetation, more impervious surfaces and higher summer temperatures than their outlying rural areas. (In fact, downtowns of cities such as Beijing and Boston are more similar to each other in such factors than they are to their rural areas, Johnson comments.) Analyzing more than 110,000 clover plants from 160 cities in 26 countries, the GLUE investigators also demonstrated a strong link between urbanization and clover cyanide production. And after sequencing more than 2,000 clover genomes and analyzing the urban-rural differences, the researchers showed that natural selection truly is at work.

[---]

Unfortunately, the genetic biodiversity that can fuel adaptation often dwindles in urban areas. A genetic survey by Chloé Schmidt working in Garroway’s lab, for example, found this to be the case, along with lower population sizes, for North American mammals living in more disturbed environments. That’s a concern during a period when so many populations of animals and plants are seeing their natural habitats degraded or simply destroyed.

Scientists don’t take urban environments as precise models for the impacts of climate change. But they say such studies will provide important clues to how creatures may respond to dwindling access to water and food, and exposure to pollution, heat, drought and other dangers.

“We’re in the Anthropocene, and we don’t understand how we’re changing the environment on every level, from greenhouse gas emissions to changing the evolution of life around us,” Johnson says. “People realize this research is part of the solution.”

- More Here


Sunday, April 12, 2026

Aristotle & His “Not Even Wrong” Ideas

Unbelievable bullshit - people like Aristotle made shit up without any epistemic humility, but the real issue is that these folks are still respected. Meanwhile, for people like Norman Borlaug and Robert Trivers, neither their names nor their works are known to anyone. Well, god bless my species. 

In the 4th century BCE, the philosopher Aristotle had two theories about this. He postulated that they hibernated during the winter as other animals did. Swallows, for example, encased themselves in little balls of clay and sank out of sight to the bottom of swamps. His other idea was that the missing species transformed themselves into the birds that did stick around for the winter, and changed back when summer came.

The little old man in de Bergerac’s tale was an imagined Spanish soldier called Domingo Gonsales, and he was the hero of another story. In 1638, just a couple of decades before Cyrano’s “A Voyage to the Moon” became available, the English cleric Francis Godwin published “The Man in the Moone,” a fictional account of Gonsales’ lunar adventure. In the book, Gonsales trained 25 swans to pull an ‘engine’ he had made. One day, he took a jaunt in his swan carriage which happened to coincide with the time birds were accustomed to disappear, as it seemed, from Earth.

Gonsales was about to find out the answer to the mystery. To his surprise, the swans flew upwards, until they reached what we would think of as orbit and became weightless. French scientist Blaise Pascal’s experiments demonstrating the lack of atmosphere in space had not yet filtered through to Godwin, as both birds and man breathed as usual. In 12 days they reached the Moon, where he found other migrating terrestrial birds, such as swallows, nightingales, and woodcocks. When the swans started to show signs of agitation, he divined that they were ready to return to Earth; and so he harnessed them again and sailed home in nine days, gravitational pull on his side.

This was a ripping yarn for sure, but some thought it was a plausible alternative to Aristotle’s theories, especially as there was a Biblical passage that seemed to allude to it. In the King James translation, it goes:

Yea, the stork in the heaven knoweth her appointed times; and the turtle and the crane and the swallow observe the time of their coming (Jeremiah 8:7).


Saturday, April 11, 2026

Meta Value - 47

In unexpected moments in life, an insight, an epiphany, a beautiful question, or an answer pops out of nowhere. 

I have forgotten a lot of them because of sheer arrogance about my ability to remember them. 

Work in progress: I try to jot them down immediately, since some of these can be life-altering moments. 


Friday, April 10, 2026

On Steve Jobs

“Having been in Silicon Valley for 50 years, I’m an expert in assholes, okay?” says Guy Kawasaki, Apple’s early developer evangelist. “And 99.9 percent of assholes are egocentric assholes. But Steve is one of the very rare mission-driven assholes. He was driven by a mission to make the greatest computer by the greatest company. And if you got in the way of that, he would run you over. He would run you over, back up, and run you over again.”

[——]

No executive, before or since, has incorporated comedy so memorably into product presentations. When, in 2002, Jobs wanted to cajole an auditorium full of software companies to rewrite their programs for Apple’s new Mac OS X operating system, he staged a full onstage funeral for the outgoing Mac OS 9, complete with a live organist, a eulogy he read himself, and a casket occupied by a four-foot–tall Mac OS 9 box.

[—]

If you encountered Jobs in only one context, you were like one of the blind men in the parable of the elephant. You’d have to have known him for years to see the whole man, and even then you might get a picture that felt fractured or incomplete.

“He was a man of contradictions,” Hertzfeld says. “Almost any adjective you could think of could apply to him at different times.”

- More Here


Wednesday, April 8, 2026

The Irony Of American Righteousness - Reinhold Niebuhr

Reinhold Niebuhr was born in 1892 in Wright City, Missouri. After studying at Yale Divinity School, he began his pastoral work in Detroit in 1915, where he spent thirteen years witnessing the harsh realities of industrial capitalism. Beneath the shadow of Henry Ford’s factories, Niebuhr saw workers exploited and discarded. These experiences shaped his entire theological outlook and dispelled the optimistic Social Gospel theology in which he had been trained.

[---]

At the core of Niebuhr’s ideas is a paradox: human beings can strive for justice but are also prone to injustice. In his 1944 key work The Children of Light and the Children of Darkness, Niebuhr provided what might be the most insightful one-sentence defense of democracy ever written: “Man’s capacity for justice makes democracy possible; but man’s inclination to injustice makes democracy necessary.”

His 1932 book *Moral Man and Immoral Society* made a key distinction: individuals can sometimes go beyond self-interest through love and reason, but groups almost never do. Collectives like nations, corporations, or movements tend to combine individual selfishness into a “collective egoism” that is far more resistant to moral constraints than any person’s conscience. This idea became his main theme: the danger of self-righteousness. “Ultimately evil is done not so much by evil people,” he warned, “but by good people who do not know themselves and who do not probe deeply.”

[---]

Later, Niebuhr used his theological ideas to analyze American identity. He argued that the United States had developed an “innocent self-image” that made it blind to its own moral faults. America thought it was immune to the corruptions affecting other great powers.

The irony of American history, Niebuhr argued, is that the nation’s virtues turn into its vices. The work ethic that built prosperity becomes worship of money. The faith that held communities together turns into theocratic pretension. The confidence that led to victories in war gives rise to imperial hubris. “No laughter from heaven,” he wrote, “could possibly penetrate through the liturgy of moral self-appreciation.” When political rallies resemble worship services and when a partisan victory is declared to be divine approval, we have entered territory that Niebuhr mapped decades ago.

[---]

Niebuhr famously defined democracy as “a method of finding proximate solutions for insoluble problems.” This straightforward formulation offers both warning and hope. The warning: human problems are never permanently resolved. The hope: even without final solutions, we can develop workable arrangements that balance competing interests and limit concentrated power. 

What would Niebuhr advise for our current times? First, humility truly involves recognizing that we are limited, flawed, and self-deceived. Second, engaging without self-righteousness means making difficult choices among imperfect options while acknowledging that choosing involves us in the complexities of power. Third, a revival of irony, not cynical detachment, but the ability to see tragedy in victory and grace in defeat. Finally, forgiveness: “the recognition that our actions and attitudes are inevitably seen in a different light by friends and foes than we see them.”

- More Here


Monday, April 6, 2026

The Many Roots Of Our Suffering - Reflections On Robert Trivers (1943–2026)

In March 2026, three prominent thinkers died within a day of each other. Lavish obituaries immediately marked the deaths of the always-wrong environmentalist Paul Ehrlich and the often-obscure political philosopher Jürgen Habermas. But two weeks after the death of Robert Trivers, one of the greatest evolutionary biologists since Charles Darwin, not a single major news source has noticed his passing. This despite Trivers’s singular accomplishment of showing how the endlessly fascinating complexities of human relations are grounded in the wellsprings of complex life. And despite the fact that the man’s life was itself an object of fascination. Trivers was no ordinary academic. He was privileged in upbringing but louche in lifestyle, personally endearing but at times obstreperous and irresponsible, otherworldly brilliant but forehead-slappingly foolish. 

Trivers’s contributions belong in the special category of ideas that are obvious once they are explained, yet eluded great minds for ages; simple enough to be stated in a few words, yet with implications that have busied scientists for decades. In an astonishing creative burst from 1971 to 1975, Trivers wrote five seminal essays that invoked patterns of genetic overlap to explain each of the major human relationships: male with female, parent with child, sibling with sibling, partner with partner, and a person with himself or herself. 

The fallout for science was vast. The fields of sociobiology, evolutionary psychology, behavioural ecology, and Darwinian social science are largely projects that test Trivers’s hypotheses. The ideas took pride of place in E. O. Wilson’s Sociobiology in 1975, Richard Dawkins’s The Selfish Gene in 1976, and many other bestsellers in the next three decades such as Robert Wright’s The Moral Animal (1994) and my own How the Mind Works (1997) and The Blank Slate (2002). In 2007 the ideas earned Trivers the Crafoord Prize, the equivalent of a Nobel for fields not recognised by Nobels.

[—]

In another landmark, Trivers turned to relations among people who are not bound by blood. No one doubts that humans, more than any other species, make sacrifices for nonrelatives. But Trivers recoiled from the romantic notion that people are by nature indiscriminately communal and generous. It’s not true to life, nor is it expected: in evolution as in baseball, nice guys finish last. Instead, he noted, nature provides opportunities for a more discerning form of altruism in the positive-sum exchange of benefits. One animal can help another by grooming, feeding, protecting, or backing him, and is helped in turn when the needs reverse. Everybody wins. 

Trivers called it reciprocal altruism, and noted that it can evolve only in a narrow envelope of circumstances. That is because it is vulnerable to cheaters who accept favours without returning them. The altruistic parties must recognise each other, interact repeatedly, be in a position to confer a large benefit on others at a small cost to themselves, keep a memory for favours offered or denied, and be impelled to reciprocate accordingly. Reciprocal altruism can evolve because cooperators do better than hermits or misanthropes. They enjoy the gains of trading surpluses of food, pulling ticks out of one another’s hair, saving each other from drowning or starvation, and babysitting each other’s children. Reciprocators can also do better over the long run than the cheaters who take favours without returning them, because the reciprocators will come to recognise the cheaters and shun or punish them. 

All this was quickly snapped up by game theorists, economists, and political scientists. But in a less-noticed passage, Trivers pointed out its implications for psychology. Reciprocal altruists must be equipped with cognitive faculties to recognise and remember individuals and what they have done. That helps explain why the most social species is also the smartest one; human intelligence evolved to deal with people, not just predators and tools. They also must be equipped with moral emotions that implement the tit-for-tat strategy necessary to stabilise cooperation. Sympathy and trust prompt people to extend the first favour. Gratitude and loyalty prompt them to repay favours. Guilt and shame deter them from hurting or failing to repay others. Anger and contempt prompt them to avoid or punish cheaters. 

And in a passage that even fewer readers noticed, Trivers anticipated a major phenomenon later studied in the guise of “partner choice.” Though it pays both sides in a reciprocal partnership to trade favours as long as each one gains more than he loses, people differ in how much advantage they’ll try to squeeze out of an exchange while leaving it just profitable enough for the partner that he won’t walk away. That’s why not everyone evolves into a rapacious scalper: potential partners can shun them, preferring to deal with someone who offers more generous terms.

[—]

And since humans are language users—indeed, reciprocity may be a big reason language evolved—any tendency of an individual to reciprocate or cheat, lavish or stint, does not have to be witnessed firsthand but can be passed through the grapevine. This leads to an interest in the reputation of others, and a concern with one’s own reputation. 

[—]

But Trivers rapidly spotted what everyone else missed, and still misses, together with the less biologically obvious concept of self-deception, so there must be another piece to the puzzle. During his junior year at Harvard, Trivers suffered two weeks of mania and then a breakdown that hospitalised him for two months. Bipolar disorder afflicted him throughout his life. I can’t help but wonder whether Trivers’s fecund period was driven by episodes of hypomania, when ideas surge and insights suddenly emerge through clouds of bafflement. Gamers sometimes “overclock” their computers, running the CPU at a higher speed than the rated limit, which boosts performance but risks instability and crashes. Did Trivers experience bursts of overclocking in the early 1970s? It would explain another fact about the man that was obvious to anyone who met him later: Trivers reeked of marijuana. His heavy use may have had a source other than his Jamaicaphilia. One wonders whether Trivers was self-medicating, with long-term costs to his clock speed. 

- Steven Pinker


Sunday, April 5, 2026

Frank Lloyd Wright As A Mirror Of The American Condition

The fixation on Wright’s paradoxes obscures a deeper contradiction embedded in the culture that produced him. Namely, that the United States has always been ambivalent about the individual: we valorise self-reliance but distrust those who stand too far apart; we celebrate democratic ideals but are uneasy with idiosyncrasy; we admire originality while punishing the disorder it brings. Wright lived squarely inside that tension. He took seriously the idea that one could make a life and a world from first principles – an act of courage in the best light. Hubris in the worst.

Seen through that lens, Wright becomes less an outlier than a mirror. His contradictions, less personal failings than reflections of the American condition. Our yearning for freedom is matched by our fear of its consequences; our desire for order by our suspicion of conformity; our reverence for the natural world by our relentless reshaping of it. Wright’s work endures because it speaks to these tensions with a force that resists resolution. If we judge him only by his wounds or only by his wonders, we see only half the man – and half the nation that shaped him. The truth, harder and more interesting, is that both are inseparable. His greatness is entangled with his flaws, his vision inseparable from his unruly humanity. To reduce him to saint or sinner is to miss what is most alive in his work: a belief that the individual, in all their contradictions, is still worth building for.

- More Here


Saturday, April 4, 2026

Meta Value - 46

Maybe 200 years from now, we will be laughed at for embracing moronic concentrations of power - not for lack of knowledge in neuroscience, but for our unwillingness to connect the obvious dots between knowledge, action, and our inability to change. 

Yes, there are regular "columns" about the foolishness of multitasking, and yet we elect officials and presidents to multitask. We celebrate CEOs and so-called corporate leaders who multitask. Even worse, we expect the same from doctors, nurses, cops, and firefighters. 

They just can't multitask. Their role and responsibility is to tackle, understand, and solve each new problem at the spur of the moment while still solving the issue at hand and yesterday's unsolved issues. Oh yeah, they also need to deal with family, celebrations, mundanity, and sickness. Watch news, TV, movies, and sports.

And they need to eat, poop, work out, and sleep.

All this in 24 hours. 

No living being can do this. Lions don't chase zebras 24 hours a day, nor do crocodiles. No other organism even tries. 

Humans cannot eat and read while doing pull-ups. 

And yet our entire civilization, economy, and politics are built around the myth of a few humans who are capable of doing pull-ups while pooping, eating, and reading. 

The real issue is the messed-up dichotomy of mind and body. There is no dichotomy. The mind is not magic. It is organic matter with limited capabilities. 

Plus, we raise young humans to make life-altering decisions at 18 with their underdeveloped prefrontal cortex.

It seems that the entire complexity of human civilization is built on Robert Trivers’ fragile ground of self-deception. 

The meta value here: since one cannot do all of the above, focus on the few important things that matter in one's limited time - one can not only thrive but have a wonderful and peaceful life. 

If by Rudyard Kipling:

If you can keep your head when all about you   
    Are losing theirs and blaming it on you,   
If you can trust yourself when all men doubt you,
    But make allowance for their doubting too;   
If you can wait and not be tired by waiting,
    Or being lied about, don’t deal in lies,
Or being hated, don’t give way to hating,
    And yet don’t look too good, nor talk too wise:

If you can dream—and not make dreams your master;   
    If you can think—and not make thoughts your aim;   
If you can meet with Triumph and Disaster
    And treat those two impostors just the same;   
If you can bear to hear the truth you’ve spoken
    Twisted by knaves to make a trap for fools,
Or watch the things you gave your life to, broken,
    And stoop and build ’em up with worn-out tools:

If you can make one heap of all your winnings
    And risk it on one turn of pitch-and-toss,
And lose, and start again at your beginnings
    And never breathe a word about your loss;
If you can force your heart and nerve and sinew
    To serve your turn long after they are gone,   
And so hold on when there is nothing in you
    Except the Will which says to them: ‘Hold on!’

If you can talk with crowds and keep your virtue,   
    Or walk with Kings—nor lose the common touch,
If neither foes nor loving friends can hurt you,
    If all men count with you, but none too much;
If you can fill the unforgiving minute
    With sixty seconds’ worth of distance run,   
Yours is the Earth and everything that’s in it,   
    And—which is more—you’ll be a Man, my son!



Tuesday, March 31, 2026

Remembering Robert Trivers

Robert Trivers, who died on March 12, 2026, was arguably the most important evolutionary theorist since Darwin. He had a rare gift for seeing through the messy clutter of life and revealing the underlying logic beneath it. E. O. Wilson called him “one of the most influential and consistently correct theoretical evolutionary biologists of our time.” Steven Pinker described him as “one of the great thinkers in the history of Western thought.”

I was Robert’s graduate student at Rutgers from 2006 to 2014. Long before I knew him personally, however, he had already established himself as one of the most original and insightful scientists of the twentieth century. In an astonishing series of papers in the early 1970s, he changed forever our understanding of evolution and social behavior.

[---]

The next year in 1972, Trivers published his most cited paper, Parental Investment and Sexual Selection. Here he offered a unified explanation for something that had puzzled biologists since Darwin. Writing perhaps the most famous sentence in all of evolutionary biology—“What governs the operation of sexual selection is the relative parental investment of the sexes in their offspring”—Trivers threw down the gauntlet and revealed a deceptively simple principle that reorganized the field. From that insight flowed one of the most powerful and falsifiable ideas in modern science: the sex that invests more in offspring will tend to be choosier about mates, while the sex that invests less will compete more intensely for access to them.

[---]

Each of these papers spawned entirely new research fields, and many have dedicated their careers to unpacking and testing the implications of his ideas. As Harvard biologist David Haig put it, “I don’t know of any comparable set of papers. Most of my career has been based on exploring the implications of one of them.” Indeed, it is hardly an exaggeration to say that his ideas gave birth to the field of evolutionary psychology and the whole line of popular Darwinian books from Richard Dawkins and Robert Wright to David Buss and Steven Pinker.

To know Robert personally, however, was to confront a more uneven and less orderly organism— to use one of his favorite words—than the one revealed in his papers. The man who explained the hidden order in life often struggled to impose order in his own. “Genius” is one of the most overused words in the language, with “asshole” not far behind, and I have known few people who truly deserved either label. Robert deserved both. He could be genuinely funny, extraordinarily generous, and breathtakingly perceptive, but also moody, childish, and needlessly cruel.

[---]

I used to joke that one reason he was so good at explaining behaviors the rest of us took for granted was that he was like an alien visiting our planet trying to make sense of our strange habits—why we invest in our children, why we are nice to our friends, why we lie to ourselves. He told me that conflict with his own father was part of the inspiration for parent-offspring conflict and one of the observations that led to his insight into parental investment came from watching male pigeons jockeying for position on a railing outside his apartment window in Cambridge.

Robert also had a respect for evidence and for correcting mistakes that I’ve rarely seen among academics, a group not known for their humility. He cared more about truth than about his reputation and retracted papers at great cost to himself and his career when he thought there were errors. He also knew that he was standing on the shoulders of the giants who had come before him. 

[---]

He was a lifelong learner with a willingness to do hard things. After his astonishing early success, he could have done what many academics do: stay in his lane, guard his territory, and spend the rest of his career commenting on ideas he had already had. Instead, in the early 1990s he saw that genetics mattered and spent the next fifteen years trying to master it. The result was Genes in Conflict, the 2006 book he wrote with Austin Burt, which pushed his interest in conflict down to the level of selfish genetic elements. Few scientists, after making contributions as important as he had, would have had the curiosity, humility, and stamina to begin again in an entirely new area.

Trivers was a great teacher, though not always in the ways he intended. He often asked dumb questions in the middle of a genetics seminar (‘What does cytosine bind to again?’) and made obvious observations (‘Did you know that running the air-conditioner in the car uses gas?’). But as he liked to say, ‘I might be ignorant, but I ain’t gonna be for long.’ He could also be volatile and aggressive, and there were many times when he threatened to kick my ass. I may have been the only graduate student who ever had to wonder whether he could take his advisor in a fight. Once, over lunch at Rutgers, I asked about a cut on his thumb after he had returned from one of his frequent trips to Jamaica. He matter-of-factly told me that he had just survived a home invasion in which two men armed with machetes held him hostage. He escaped by jumping from a second-story window, rolling downhill, and stabbing both men with the eight-inch knife he carried everywhere he went. He was 67 at the time.

[---]

One of the last times I spoke with Robert, a fall had left his right arm nearly useless. He described it as “two sausages connected by an elbow.” He was a chaotic and deeply imperfect man, but also one of the few people whose ideas permanently changed how we understand evolution, animal behavior, and ourselves. Steven Pinker wrote that “it would not be too much of an exaggeration to say that [Trivers] provided a scientific explanation for the human condition: the intricately complicated and endlessly fascinating relationships that bind us to one another.” That seems just about right to me. His ideas are some of the deepest insights we have into human nature, animal behavior, and our place in the web of life. The mark of a great person is someone who never reminds us of anyone else. I have never known anyone like him. I’ll miss you, Robert. You asshole.

- More Here


Sunday, March 29, 2026

Grounded In Reality Piece On AI Mania

I don’t say that because I think that AI models are bad or because I think they won’t get better; I think that AI models are very good and will get much better. No. The fault is not with the models, but with us. The world is run by humans, and because it’s run by humans—entities that are smelly, oily, irritable, stubborn, competitive, easily frightened, and above all else inefficient—it is a world of bottlenecks. And as long as we have human bottlenecks, we’ll need humans to deal with them: we will have, in other words, complementarity.

People frequently underrate how inefficient things are in practically any domain, and how frequently these inefficiencies are reducible to bottlenecks caused simply by humans being human. Laws and regulations are obvious bottlenecks. But so are company cultures, and tacit local knowledge, and personal rivalries, and professional norms, and office politics, and national politics, and ossified hierarchies, and bureaucratic rigidities, and the human preference to be with other humans, and the human preference to be with particular humans over others, and the human love of narrative and branding, and the fickle nature of human preferences and tastes, and the severely limited nature of human comprehension. And the biggest bottleneck is simply the human resistance to change: the fact that people don’t like shifting what they’re doing. All of these are immensely powerful. Production processes are governed by their least efficient inputs: the more efficient the most efficient inputs, the more important the least efficient inputs.

In the long run, we should expect the power of technology to overcome these bottlenecks, in the same way that a river erodes a stone over many years and decades—just as in the early decades of the twentieth century the sheer power of what electricity could accomplish gradually overcame the bottlenecks of antiquated factory infrastructure, outdated workflows, and the conservatism of hidebound plant managers. This process, however, takes time: it took decades for electricity, among the most powerful of all general-purpose technologies, to start impacting productivity growth. AI will probably be much faster than that, not least because it can be agentic in a way that electricity cannot. But these bottlenecks are real and important and are obvious if you look at any part of the real world. And as long as those bottlenecks exist, no matter the level of AI capabilities, we should expect a real and powerful complementarity between human labor and AI, simply because the “human plus AI” combination will be more productive than AI alone.

- More Here


Saturday, March 28, 2026

The Fascinating Insights Of Robert Trivers

Trivers was one of the most—perhaps the most—influential evolutionary biologists of the 20th century. His work should be much more widely known in social and behavioural sciences, in particular in economics, as Trivers’ intellectual approach is very much in line with a game theoretic understanding of social interactions.

It is hard to overstate the importance of his work. Einstein famously published four groundbreaking papers in 1905, a year often referred to as his “Annus mirabilis”, during which he revolutionised physics. Trivers might be said to have had a “Quinquennium Mirabile” for the five years between 1971 and 1976, during which he produced a series of ideas that revolutionised evolutionary biology.

Reciprocal altruism - 1971:

The human altruistic system is a sensitive, unstable one. Often it will pay to cheat: namely, when the partner will not find out, when he will not discontinue his altruism even if he does find out, or when he is unlikely to survive long enough to reciprocate adequately. And the perception of subtle cheating may be very difficult. Given this unstable character of the system, where a degree of cheating is adaptive, natural selection will rapidly favor a complex psychological system in each individual regulating both his own altruistic and cheating tendencies and his responses to these tendencies in others. As selection favors subtler forms of cheating, it will favor more acute abilities to detect cheating.
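Trivers’ 1971 logic, that altruism pays only when cheaters can be detected and their altruism withdrawn, is often illustrated with the iterated prisoner’s dilemma (an Axelrod-style framing from later tournament work, not Trivers’ own notation). A minimal sketch, with assumed standard payoff values, shows how a reciprocator resists exploitation while an unconditional altruist is ruthlessly cheated:

```python
# Iterated prisoner's dilemma: a standard illustration of why reciprocal
# altruism can be stable. Payoffs are the conventional (assumed) values:
# mutual cooperation 3 each, mutual defection 1 each, sucker 0 vs. temptation 5.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(partner_history):
    """Cooperate first, then copy the partner's last move: detect and punish cheating."""
    return partner_history[-1] if partner_history else "C"

def always_defect(partner_history):
    """A pure cheater."""
    return "D"

def always_cooperate(partner_history):
    """An unconditional altruist, who never discontinues altruism."""
    return "C"

def play(strategy_a, strategy_b, rounds=100):
    """Return the total payoffs of two strategies over repeated interactions."""
    hist_a, hist_b = [], []  # each strategy is shown the *other's* past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b
```

Against an unconditional defector, tit-for-tat loses only the first round and then withholds cooperation, which is exactly the “discontinue his altruism” response Trivers describes; the unconditional altruist, by contrast, is exploited every round.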

Parental investment - 1972:

Since the female already invests more than the male, breeding failure for lack of an additional investment selects more strongly against her than against the male. In that sense, her initial very great investment commits her to additional investment more than the male’s initial slight investment commits him.

[---]

Critics of evolutionary theory sometimes argue that it does not make any predictions that can be tested and that it only rationalises what has already been observed. Trivers’ work is one of the best examples disproving this accusation. In his paper on parental investment, Trivers argues that the differences in behaviour between males and females should reflect the degree of asymmetry in their parental investment. As a result, animals with greater asymmetry in parental investment should show greater behavioural differences between the sexes than those with less, and if we ever find animals with role reversals, we should also observe reversals in strategies. And indeed, we observe that in animals with less asymmetry in parental investment, like swans, the differences between males and females are less noticeable. In the rare cases where male investments are larger, like in seahorses, where the females literally place their eggs in the belly of the male who incubates them, we observe a role reversal, with females courting males and competing for access to them.

Parent-offspring conflict - 1974:

The offspring can cry not only when it is famished but also when it merely wants more food than the parent is selected to give. Likewise, it can begin to withhold its smile until it has gotten its way. Selection will then of course favor parental ability to discriminate the two uses of the signals, but still subtler mimicry and deception by the offspring are always possible.

[---]

Obviously, overall parents tend to love their children and children tend to love their parents, but Trivers showed—with a theory now largely supported by empirical research—that the whole picture is more complex, because there are always also elements of conflict in parent-offspring relations.

Self-deception - 1976:

In the preface to Dawkins’ The Selfish Gene, Robert Trivers proposed a solution to this problem: our tendency to self-deceive, to think we are better than we are, may serve as a mechanism that enables us to deceive others more effectively. He wrote:

If … deceit is fundamental to animal communication, then there must be strong selection to spot deception and this ought, in turn, to select for a degree of self-deception, rendering some facts and motives unconscious so as not to betray – by the subtle signs of self-knowledge – the deception being practiced. —Trivers (1976)

Commenting on this assertion, psychologist Steven Pinker remarked, “This sentence... might have the highest ratio of profundity to words in the history of the social sciences.”

[---]

In a 2011 paper with Bill von Hippel, Trivers developed this idea further, listing how self-deception can help. When trying to deceive, people may face cognitive load (the cognitive work required to make sure a web of lies does not have glaring contradictions). Given that lying is a betrayal of trust and is sanctioned when it is found out, it is risky, and people can get nervous about being found out, possibly showing signs of nervousness. Finally, people might try to mask signs of nervousness, thereby also behaving in a way that indirectly suggests lying. Self-deception, by inducing people to believe in their own lies, so to speak, can eliminate these possible clues while leading others to believe the preferred story of the person self-deceiving.

Trivers’ theory of self-deception has been supported by empirical research (including research I have contributed to). It explains what seems to be one of the most irrational patterns of human behaviour as emerging from strategic incentives.

Trivers has been one of the most influential evolutionary biologists, and his papers are still worth reading today. His insights, published more than 50 years ago, are fascinating. They often align very well with economic theories of behaviour, and it is therefore regrettable that his ideas are not better known in economics, and in particular in behavioural economics.

A key feature of Trivers’ take across these contributions was to see that beneath the world of social interactions we observe, there are deep structures in terms of incentives that shape the game we play. Understanding these games and their structures helps us make sense of the seemingly endless complexity of human psychology and social dynamics. In several key contributions, Trivers helped lift the veil on the underlying logic of human behaviour.

- More Here


Friday, March 27, 2026

Humans Had Dogs Before They Had Farming

By roughly 14,000 years ago, hunter-gatherer societies across Europe had discovered dogs, scientists reported in two new papers, which were published Wednesday in the journal Nature. The studies provide the first definitive genetic evidence that dogs existed during the Paleolithic period, before humans developed agriculture.

The researchers, who used several approaches to analyze DNA extracted from ancient canine specimens, identified Paleolithic dogs at five different archaeological sites in Europe and Western Asia. The oldest of these dogs lived about 15,800 years ago, pushing back the oldest known genetic evidence of dogs by nearly 5,000 years.

These early dogs came from sites that extend from Britain to Turkey, and were associated with several very different hunter-gatherer populations. But the dogs themselves were closely related. Across the five sites, the dogs were more genetically similar than the humans were, the researchers found.

“The people are so different, but the dogs are very much the same,” said Greger Larson, a paleogeneticist at the University of Oxford and one of the authors on both new studies, which were conducted by large, international scientific teams.

The finding suggests that these early human societies were exchanging dogs or acquiring them from one another.

“It is kind of the equivalent of a new blade or a new point or a new kind of material culture or art form or something, where everybody’s getting really excited about having this fun new thing around,” Dr. Larson said. “And it’s useful and it’s interesting and it’s probably cute.”

The research provides new insight into the early history of dogs, as well as the genetic legacy and the interspecies relationship that extends to today.

“It’s really a major step forward in advancing our knowledge of humans and dogs,” said Elaine Ostrander, a canine genomics expert at the National Human Genome Research Institute who was not involved in the research.


- More Here


Tuesday, March 24, 2026

Happy Birthday Saroo & Blue!

March 24th, 2024: the day Saroo and Blue came to Max's home. This day became their birthday.

Life hasn't been mundane ever since they came :-) 

Happy 9th birthday, my little ones!


Monday, March 23, 2026

What Was The Very First Plant In The World?

The story of plants begins in the water. The earliest plantlike organisms were simple, tiny green life-forms such as algae. You can still see algae today as seaweed along beaches or as green slime on rocks in ponds.

Algae have lived in Earth’s oceans and lakes for over 1 billion years. They can make their own food, using sunlight, water and carbon dioxide to create sugars. This process is called photosynthesis; it releases oxygen – the gas we need to breathe – as a byproduct.

At first, Earth’s atmosphere had very little oxygen. Over millions of years, photosynthesizing organisms like algae and some bacteria slowly released oxygen into the air. This change, sometimes called the Great Oxygenation Event, made it possible for larger and more complex life to evolve. Without oxygen-producing organisms, animals, including humans, could never have existed.

[---]

Moving onto land was not easy. Water plants are supported by water and can absorb nutrients easily, but land plants faced new challenges. How would they avoid drying out? How could they stand upright without floating? How would they get water and nutrients from dry ground?

To survive, early plants evolved important new features. One key adaptation was a waxy coating, called a cuticle, which helped keep water inside the plant. Plants also developed stronger cell walls that allowed them to stand upright against gravity. Simple rootlike structures, called rhizoids, helped anchor plants to the ground and absorb water and minerals from the soil.

The earliest land plants were very small and simple. They looked similar to modern mosses, liverworts and hornworts, which still grow today in damp places like forest floors and stream edges. These plants did not have true roots or stems, and they stayed close to the ground. Fossils of early land plants, such as Cooksonia, date back to about 430 million years ago and show small branching stems only an inch or two tall.

- More Here



Saturday, March 21, 2026

Happy 20th Birthday Max !

My everything would have been 20 today! 

20 years! 

Two insignificant creatures became one this day to make earth a little better place than the one we inherited. 

Happy Birthday Max! 

You know I talk to you every hour if not every moment of the day - I am in you and you are in me. 

Together we brought beauty, magic, awe, wonder, and peace into our lives, and it will persist for eternity, my love.

I love you, I miss you every moment.

2009, Max's 3rd Birthday!