Thursday, March 31, 2022

Skepticism As A Way Of Life

The desire for certainty is often foolish and sometimes dangerous. Scepticism undermines it, both in oneself and in others.

Think about a time when you changed your mind. Maybe you heard about a crime, and rushed to judgment about the guilt or innocence of the accused. Perhaps you wanted your country to go to war, and realise now that maybe that was a bad idea. Or possibly you grew up in a religious or partisan household, and switched allegiances when you got older. Part of maturing is developing intellectual humility. You’ve been wrong before; you could be wrong now.

We all are familiar, I take it, with people who refuse to admit mistakes. What do you think about such people? Do you admire their tenacity? Or do you wish that they would acknowledge that they jumped to conclusions, misread the evidence, or saw what they wanted to see? Stubborn people are not just wrong about facts. They can also be mean. Living in society means making compromises and tolerating people with whom you disagree.

Fortunately, we have a work of philosophy from antiquity filled with strategies to counter dogmatic tendencies, whether in ourselves or in other people. The book makes one laugh out loud with questions about whether we know that grass is green, that scorpion stings are deadly, or that it is wrong for parents to tattoo their babies. The French writer Michel de Montaigne read the book in the 16th century and used the strategies in his essay ‘An Apology for Raymond Sebond’. Through Montaigne, many European Enlightenment philosophers came to see a link between scepticism and toleration. Plato’s Republic is more renowned, but the book from antiquity that people ought to read right now is Sextus Empiricus’ Outlines of Pyrrhonism.

[---]

Are you smarter than a dog? It seems obvious that humans have capabilities that dogs lack. However, Sextus notes, dogs can reason out which path their prey has taken by eliminating the paths that do not have a scent. Dogs can be brave and loyal friends, have the power to choose whether and what to eat, and can convey subtle emotions and messages through sounds. Not only do dogs resemble humans in intelligence, virtue, freedom and communication, they perceive things that humans cannot. After all, it was Odysseus’ dog, Argus, who recognised his disguised master when he returned to the household. Upon reflection, we appreciate that octopi, whales, bats, spiders and so forth sense all sorts of things in the world that we apparently cannot.

Honey tastes sweet but appears unpleasant to the eyes. Perfume smells nice but tastes disgusting. Olive oil soothes the skin but irritates the windpipe. Paintings can be of mountains, but to the touch they are flat. What is the true quality of honey, perfume, olive oil, paintings? One cannot say for sure; the senses conflict with one another. We see an apple using our five senses. But, says Sextus, ‘it is possible that there exist other qualities that fall within the province of other organs of sense.’ The intellect works with material provided by the senses, and the senses conflict and may be incomplete. Our intellect might not be able to know the true story.

[---]

Outlines of Pyrrhonism provides readers with a list of argumentative strategies to use whenever anybody claims to know how things really are. Maybe you, the subject, are influencing your judgment, like when you comment on a meal at the end of a frustrating day. Maybe the object changes appearance depending on whether it is isolated or compounded, like when a grain of sand feels sharp, but a sand dune feels soft. Maybe both subjective and objective factors are at work, as when you notice a small comet because it is rare but do not notice the Sun because it rises every day.

After reading Outlines of Pyrrhonism, you might fold modesty into your speech and say things such as ‘This is how things appear to me’ and ‘Nothing more’ (ouden mallon).

But scepticism is not simply about knowledge or language. It is a way of life. Sextus invites you to become an open-minded, calm person who seeks out knowledge but does not become angry when certainty eludes your grasp or when others don’t see things the same way.

- Read the whole piece here. It's beautiful. This is why I love Montaigne so much.

Remember - one doesn't have to believe in any kind of magic or miracle to live a good life. 

Skepticism is an antidote to ideology. Embrace skepticism to become an open-minded person - and that will inevitably make you kinder. 



Wednesday, March 30, 2022

Horrible Hundred - Worst States That Support Puppy Mills

We live in a democracy - translation, we are not ruled by an autocrat. This wasn't possible until the United States was formed (sorry UK, France). So don't sell your soul to ideology, cable news, and morons. 

Stand up for what is right using the metric of - "does it cause pain and suffering?"

Work with your elected representatives (they are not leaders but mere humans elected by us to serve us), vote with your money (don't buy from puppy mills or pet stores), and finally, spread awareness (for ... sake - stop talking and arguing about sports, religion and politics). 

The Humane Society's list of the worst states for puppy mills: 

  • Missouri (21 puppy mills) 
  • Ohio (16 puppy mills) 
  • Pennsylvania (8 puppy mills) 
  • New York (7 puppy mills) 

For once in your life, stop looking at the US map as red vs blue. Dive into the details and stand up for what is right. 


Sunday, March 27, 2022

One More Reason To Be Grateful For What We Have (& To Animals) - Surgery

As Rutkow observes at the beginning of his book, it is a “reasonable certainty that no one in the industrialized world will escape having an illness for which effective treatment requires a surgical operation.” I myself would probably be blind in at least one eye (from retinal detachments), walk with a limp (from a complex ankle fracture) and possibly be dead (from urosepsis) if not for the help of my surgical colleagues. Yet until 150 years ago, as Rutkow explains, surgery was limited to the external parts of the human body, such as amputations for trauma. The only internal surgery was the occasional foray into the bladder for bladder stones and trepanning of the skull. Indeed, skulls have been found all over the planet, dating back thousands of years, with deliberately made holes that had healed over with new bone, meaning that the patient survived the procedure. But it is anybody’s guess as to whether the earliest trepanning was done to release a traumatic blood clot from inside the skull, or to release an evil spirit responsible for epilepsy or some similar, misunderstood disorder.

As Rutkow writes, the emergence of surgery from its barbaric past rested on four pillars — the understanding of anatomy, the control of bleeding, anesthesia and antisepsis. The story, however, is not one of steady, rational progress. The surgeon Galen, working in the second century A.D., wrote extensively on anatomy; some of his experience came from treating wounded gladiators but much of it was based on dissecting animals, and was simply wrong with respect to human anatomy. His writings were passed down by the Andalusian physician Abu al-Qasim al-Zahrawi, among others, to become dogma in the Middle Ages.

The first breakthrough came more than a thousand years later with the Renaissance, and the relaxation of taboos about dissecting the dead. The Flemish physician Andreas Vesalius, the greatest of the early anatomists, carried out his dissections on the corpses of executed criminals, often removed surreptitiously from the gallows at night. Surgeons such as Ambroise ParĂ© in France, working on battlefield injuries, established ways of controlling bleeding — tying off blood vessels, for instance, rather than using red-hot irons and plunging the stump of an amputated limb into boiling oil.

- Review of the book Empire of the Scalpel: The History of Surgery by Ira Rutkow, M.D.

One has to understand the thick line between historic wisdom and sheer barbarism. Not everything historic is romantic, and conversely, not everything in the past is stupid. 

One of the best ways to gauge historic wisdom is by asking a simple question - "does it cause pain and suffering?"

If the answer is yes, then it's barbaric. 



Thursday, March 24, 2022

Earth 200-300 Million Years Ago!

Ridiculous egos, clashes, hatred, death, and destruction over land which was attached once upon a time and will be again in due time - ad infinitum, or until the Earth is no more. 

Most commonly, it is opinions, debates, ideology, and arguments that fall into the ridiculous bucket. 


- More Here

Monday, March 21, 2022

Happy Birthday Max!

Max turned 16 today! 

He would have jumped up and down getting so many toys today and feasting on the cake :-) 

My joy of life with no hidden desires and needs - I miss you Max. 



I am extremely lucky to have spent the majority of my adulthood with Max. I learnt what it is to love, to change one's mind, and to lose. All these experiences and emotions happened while I am still only 47. Not many people on the planet can say this. 

Many think I will not get attached to anyone again because of my fear of loss. On the contrary, having lost Max, and the mere fact that I am still alive, taught me first-hand the concept of impermanence. Nothing lasts forever and I never take anything for granted. So I fear neither death nor loss, but instead relish every second I have with Neo, Fluffy, and Garph. 

I love you Max. It would have been a perfect day if you had been with me today. 


Saturday, March 19, 2022

Wise Words On AI

I have skin in the game and I am "in" the field. Wise and timely piece from Gary Marcus: 

In time we will see that deep learning was only a tiny part of what we need to build if we’re ever going to get trustworthy AI.

[---]

When the stakes are higher, though, as in radiology or driverless cars, we need to be much more cautious about adopting deep learning. When a single error can cost a life, it’s just not good enough. Deep-learning systems are particularly problematic when it comes to “outliers” that differ substantially from the things on which they are trained.

[---]

As AI researchers Emily Bender, Timnit Gebru, and colleagues have put it, deep-learning-powered large language models are like “stochastic parrots,” repeating a lot, understanding little.

[---]

For at least four reasons, hybrid AI, not deep learning alone (nor symbols alone), seems the best way forward:

  • So much of the world’s knowledge, from recipes to history to technology, is currently available mainly or only in symbolic form. Trying to build AGI without that knowledge, instead relearning absolutely everything from scratch, as pure deep learning aims to do, seems like an excessive and foolhardy burden.
  • Deep learning on its own continues to struggle even in domains as orderly as arithmetic. A hybrid system may have more power than either system on its own.
  • Symbols still far outstrip current neural networks in many fundamental aspects of computation. They are much better positioned to reason their way through complex scenarios, can do basic operations like arithmetic more systematically and reliably, and are better able to precisely represent relationships between parts and wholes (essential both in the interpretation of the 3-D world and the comprehension of human language). They are more robust and flexible in their capacity to represent and query large-scale databases. Symbols are also more conducive to formal verification techniques, which are critical for some aspects of safety and ubiquitous in the design of modern microprocessors. To abandon these virtues rather than leveraging them into some sort of hybrid architecture would make little sense.
  • Deep learning systems are black boxes; we can look at their inputs, and their outputs, but we have a lot of trouble peering inside. We don’t know exactly why they make the decisions they do, and often don’t know what to do about them (except to gather more data) if they come up with the wrong answers. This makes them inherently unwieldy and uninterpretable, and in many ways unsuited for “augmented cognition” in conjunction with humans. Hybrids that allow us to connect the learning prowess of deep learning, with the explicit, semantic richness of symbols, could be transformative.

[---]

For the first time in 40 years, I finally feel some optimism about AI. As cognitive scientists Chaz Firestone and Brian Scholl eloquently put it: “There is no one way the mind works, because the mind is not one thing. Instead, the mind has parts, and the different parts of the mind operate in different ways: Seeing a color works differently than planning a vacation, which works differently than understanding a sentence, moving a limb, remembering a fact, or feeling an emotion.” Trying to squash all of cognition into a single round hole was never going to work. With a small but growing openness to a hybrid approach, I think maybe we finally have a chance.

With all the challenges in ethics and computation, and the knowledge needed from fields like linguistics, psychology, anthropology, and neuroscience, and not just mathematics and computer science, it will take a village to raise an AI. We should never forget that the human brain is perhaps the most complicated system in the known universe; if we are to build something roughly its equal, open-hearted collaboration will be key. 

Amidst all this hype, the reality of what's currently going on with AI is nothing but quintessential, slow progress, as always happens in any scientific discipline (sprinkled with ego clashes). 

We should be prudent and use AI only where the stakes aren't high. We should force ourselves to have a human-in-the-loop in most AI applications. If nothing else, it will help improve the AI models. 
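The human-in-the-loop idea above can be sketched in a few lines: act automatically only on confident predictions and route everything else to a person. This is a minimal illustrative sketch, not any particular library's API; the names (`predict_with_review`, `toy_model`), the threshold, and the route labels are all made up for demonstration.

```python
# A minimal human-in-the-loop gate: auto-accept confident predictions,
# defer uncertain ones to a human reviewer.

def predict_with_review(model, x, threshold=0.9):
    """Return (label, route); auto-accept only above the confidence threshold."""
    label, confidence = model(x)
    if confidence >= threshold:
        return label, "auto"
    return None, "needs_human_review"

def toy_model(x):
    # Pretend model: the sign of x is the label; |x| capped at 1 is the confidence.
    confidence = min(abs(x), 1.0)
    return ("positive" if x > 0 else "negative"), confidence

print(predict_with_review(toy_model, 0.95))  # confident enough to auto-accept
print(predict_with_review(toy_model, 0.2))   # low confidence: deferred to a human
```

Even a trivial gate like this yields the side benefit mentioned above: every deferred case becomes a human-labeled example that can be fed back to improve the model.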

Remember - whatever we have as AI works only because it is powered by data. If there is no relevant data, then there will be no magical AI. 

Last time I checked, most humans refuse to change their minds even when given the right data. So let's give AI models a break and use them to our advantage where it's applicable and safe.

All of AI's problems are inherited from the age-old human longing for panacea and magic. As usual, the remedy is to self-reflect and grow up. 


  

Friday, March 18, 2022

Dogs N Us - We Share Everything!

Lorazepam is just one of many drugs that dogs and humans take for similar psychiatric problems. Canine compulsive behavior resembles human obsessive-compulsive disorder, for example, and impulsivity or inattention in dogs can resemble attention deficit hyperactivity disorder in us. The risk for these conditions can even be influenced by the same sets of genes. Indeed, a new study based on a survey of dog owners suggests we’re so similar to our canine companions that dogs can—and should—be used to better understand human mental health.

“Dogs are probably the closest model to humans you’re going to get,” says Karen Overall, an animal behaviorist at University of Prince Edward Island, who was not involved with the work.

Many psychologists group human personality into five “factors”: extraversion, neuroticism, openness, agreeableness, and conscientiousness. These traits can be influenced by genetics and can affect a person’s mental health—especially neuroticism, or the tendency to feel negative emotions such as distress and sadness. Research has shown neurotic personalities are more vulnerable to depression or anxiety, whereas traits such as conscientiousness and agreeableness protect against these disorders.

Any dog owner will tell you that our canine pals have distinct personalities just like you and me. Some are bold and others are cautious; some are lazy and others are highly active.

Milla Salonen, a canine researcher at the University of Helsinki, and other researchers have proposed seven personality factors for grouping dogs: insecurity, energy, training focus, aggressiveness/dominance, human sociability, dog sociability, and perseverance. Some of these factors overlap with those in people, Salonen explains. Insecurity in dogs parallels neuroticism in humans, for example.

Twenty years ago, Overall and other experts began to suggest the dog be used as a model for human psychiatry. The same types of mental illness don’t occur naturally in rodents; researchers have to induce them.

- More Here 

But don't be stupid enough to put dogs in research labs for this bullshit. Live in reality - and yeah, we are all going to die. Nothing is more real than that. So stop overblowing everything as a psychiatric issue. 

Instead, share your precious life with other animals and learn from them. I am writing this from experience. 





Thursday, March 17, 2022

The Insect Crisis

Entomologists are instinctively disdainful of any suggestion that pollinating insects could somehow be matched by technology, even on a basic logistical level. Biologist Dave Goulson points out that bees are rather adept at pollinating flowers, given they’ve been honing their skills for around 120 million years, and that, besides, there are around 80 million honeybee hives in the world, each stuffed with tens of thousands of bees feeding and breeding for free. “What would the cost be of replacing them with robots?” Goulson asks. “It is remarkable hubris to think that we can improve on that.” 

To be fair to those devoted to appropriating the characteristics of insects for our use, there is widespread awe at the evolutionary brilliance of flies and bees and scant joy at the crisis that has brought us to the point where the meanderings of academic curiosity are being seized upon as possible salvation from our degenerate ways. When we consider technological solutions, we should perhaps spend less time judging the supply and more time judging the reasons why there’s demand in the first place.

Excerpts from The Insect Crisis: The Fall of the Tiny Empires That Run the World by Oliver Milman

People never learn from history. And this history wasn't too long ago - in 2010, during the Iraq war, the Pentagon had spent a whopping $19 billion trying to reverse engineer the dog's nose (in vain) to find a replacement for bomb-sniffing dogs! 


Monday, March 14, 2022

Bioelectrical Approaches To Cancer As A Problem Of The Scaling Of The Cellular Self

Abstract

One lens with which to understand the complex phenomenon of cancer is that of developmental biology. Cancer is the inevitable consequence of a breakdown of the communication that enables individual cells to join into computational networks that work towards large-scale, morphogenetic goals instead of more primitive, unicellular objectives. This perspective suggests that cancer may be a physiological disorder, not necessarily due to problems with the genetically-specified protein hardware. One aspect of morphogenetic coordination is bioelectric signaling, and indeed an abnormal bioelectric signature non-invasively reveals the site of incipient tumors in amphibian models. Functionally, a disruption of resting potential states triggers metastatic melanoma phenotypes in embryos with no genetic defects or carcinogen exposure. Conversely, optogenetic or molecular-biological modulation of bioelectric states can override powerful oncogenic mutations and prevent or normalize tumors. The bioelectrically-mediated information flows that harness cells toward body-level anatomical outcomes represent a very attractive and tractable endogenous control system, which is being targeted by emerging approaches to cancer.

- Full paper here



Saturday, March 12, 2022

Modular Cognition, Pattern Completion Et Al., - A Hypothesis

This is intelligence in action: the ability to reach a particular goal or solve a problem by undertaking new steps in the face of changing circumstances. It’s evident not just in intelligent people and mammals and birds and cephalopods, but also cells and tissues, individual neurons and networks of neurons, viruses, ribosomes and RNA fragments, down to motor proteins and molecular networks. Across all these scales, living things solve problems and achieve goals by flexibly navigating different spaces – metabolic, physiological, genetic, cognitive, behavioural.

But how did intelligence emerge in biology? The question has preoccupied scientists since Charles Darwin, but it remains unanswered. The processes of intelligence are so intricate, so multilayered and baroque, no wonder some people might be tempted by stories about a top-down Creator. But we know evolution must have been able to come up with intelligence on its own, from the bottom up.

Darwin’s best shot at an explanation was that random mutations changed and rearranged genes, altered the structure and function of bodies, and so produced adaptations that allowed certain organisms to thrive and reproduce in their environment. (In technical terms, they are selected for by the environment.) In the end, somehow, intelligence was the result. But there’s plenty of natural and experimental evidence to suggest that evolution doesn’t just select hardwired solutions that are engineered for a specific setting. For example, lab studies have shown that perfectly normal frog skin cells, when liberated from the instructive influence of the rest of the embryo, can reboot their cooperative activity to produce a novel proto-organism, called a ‘xenobot’. Evolution, it seems, doesn’t come up with answers so much as generate flexible problem-solving agents that can rise to new challenges and figure things out on their own.

The urgency of understanding intelligence in biological terms has become more acute with the ‘omics’ revolution, where new techniques are amassing enormous amounts of fresh data on the genes, proteins and connections within each cell. Yet the deluge of information about cellular hardware isn’t yielding a better explanation of the intelligent flexibility we observe in living systems. Nor is it yielding sufficient practical insights, for example, in the realm of regenerative medicine. We think the real problem is not one of data, but of perspective. Intelligence is not something that happened at the tail end of evolution, but was discovered towards the beginning, long before brains came on the scene.

From the earliest metabolic cycles that kept microbes’ chemical parameters within the right ranges, biology has been capable of achieving aims. Yet generation after generation of biologists have been trained to avoid questions about the ultimate purpose of things. Biologists are told to focus on the ‘how’, not the ‘why’, or risk falling prey to theology.

[---]

 Modularity provides stability and robustness, and is the first part of the answer to how intelligence arose. When changes occur to one part of the body, its evolutionary history as a nested doll of competent, problem-solving cells means subunits can step up and modify their activity to keep the organism alive. This isn’t a separate capacity that evolved from scratch in complex organisms, but instead an inevitable consequence of the ancient ability of cells to look after themselves and the networks of which they form a part. 

But just how are these modules controlled? The second step on the road to the emergence of intelligence lies in knowing how modules can be manipulated. Encoding information in networks requires the ability to catalyse complex outcomes with simple signals. This is known as pattern completion: the capacity of one particular element in the module to activate the entire module. That special element, which serves as a ‘trigger’, starts the activity, kicking the other members of the module into action and completing the pattern. In this way, instead of activating the entire module, evolution needs only to activate that trigger. 

Pattern completion is an essential aspect of modularity which we’re just beginning to understand, thanks to work in developmental biology and neuroscience. For example, an entire eye can be created in the gut of a frog embryo by briefly altering the bioelectric state of some cells. These cells are triggered to complete the eye pattern by recruiting nearby neighbours (which were not themselves bioelectrically altered) to fill in the rest of the eye. Similar outcomes can be achieved by genetic or chemical ‘master regulators’, such as the Hox genes that specify the body plan of most bilaterally symmetrical animals. In fact, one could relabel these regulator genes as pattern completion genes, since they enable the coordinated expression of a suite of other genes from a simple signal. The key is that modules, by continuing to work until certain conditions are met, can fill in a complex pattern when given only a small part of the pattern. In doing so, they translate a simple command – the activation of the trigger – and amplify it into an entire program. 

[---] 

We have sketched a set of approaches to biology that rely heavily on concepts from cybernetics, computer science, and engineering. But there’s still a lot of work to do in reconciling these approaches. Despite recent advances in molecular genetics, our understanding of the mapping between the genome on the one hand, and the (changeable) anatomy and physiology of the body on the other, is still at a very early stage. Much like computer science, which moved from rewiring hardware in the 1940s to a focus on algorithms and software that could control the device’s behaviour, biological sciences now need to change tack.

The impact of understanding nested intelligence across multiple scales cuts across numerous fields, from fundamental questions about our evolutionary origins to practical roadmaps for AI, regenerative medicine and biorobotics. Understanding the control systems implemented in living tissue could lead to major advances in biomedicine. If we truly grasp how to control the setpoints of bodies, we might be able to repair birth defects, induce regeneration of organs, and perhaps even defeat ageing (some cnidarians and planarian flatworms are essentially immortal, demonstrating that complex organisms without a lifespan limit are possible, using the same types of cells of which we are made). Perhaps cancer can also be addressed as a disease of modularity: the mechanisms by which body cells cooperate can occasionally break down, leading to a reversion of cells to their unicellular past – a more selfish mode in which they treat the rest of the body as an environment within which they reproduce maximally.

- More Here

The idea of modular cognition is beautiful and I almost fell head-over-heels for it. 

But... once again, people conveniently forget that we are dealing with complex systems. 

Just a cursory second reading of this piece will expose the "known" missing pieces. Microbiomes, for starters; and there is this thing called the "exposome", which covers those "little" things, namely environmental factors. And of course there are myriad unknowns. 

Nevertheless, the "why" question they ask - "why does biology act this way?" - is extremely important. 

I am convinced our generation and many generations to come will fail to answer this question, only because people are not used to asking the why question at the micro level. Maybe someday this question will be answered - and it should be answered. The "pattern completion" hypothesis is a small step forward, and kudos to those working on such hard problems. 


 

Wednesday, March 9, 2022

America Needs More Than Innovation; It Needs Wisdom

Wisdom. It's my favorite word in English - the word least likely to offer refuge to bullshit (hopefully), and one of the least used colloquially.

They will envy you for your success, your wealth, for your intelligence, for your looks, for your status - but rarely for your wisdom.

- Nassim Taleb

Arthur C. Brooks makes an urgent call for the tech world:

The first, according to the British-American psychologist Raymond Cattell in his 1971 book Abilities: Their Structure, Growth, and Action, is essentially the ability to solve abstract problems; the second represents a person’s knowledge gained during a lifetime of learning. In other words, as a young adult, you can solve problems quickly; as you get older, you know which problems are worth solving. Crystallized intelligence can be the difference between an enterprise with no memory that makes lots of rookie errors and one that has deep experience—even if the company is brand new.

In the first decades of your career, whether you’re a lawyer or an electrician, you’ll probably get ahead faster by focusing on work that involves ingenuity and quick thinking, which is a function of fluid intelligence. You might call it your Forbes 30 Under 30 Brain.

People who successfully navigate their professional lives into their 50s, 60s, and 70s tend to rely more and more on their abilities to synthesize knowledge, compare current facts with past patterns, and teach others. This is crystallized intelligence, a.k.a. your Dalai Lama Brain. Used right, these abilities can be even more professionally valuable than their fluid counterparts.

[---]

For example, Chip Conley, who left his career as a hotel entrepreneur at age 52 to serve as a strategic adviser to the founders of Airbnb, developed what he calls his Modern Elder Academy to cultivate crystallized intelligence in mature leaders and the skills to share it. “Our physical peak may be in our 20s, and our financial or salary peak may be at 50,” Conley told me in an email. But our deepest wisdom and widest perspective come only after that, “because we’ve developed pattern recognition about ourselves and others.” 

[---]

This isn’t exactly an original idea; it was proposed in the first century B.C. by the Roman philosopher and statesman Marcus Tullius Cicero. At age 62, he wrote De Officiis, in which he described how older people should serve others with their high crystallized intelligence: “The old … should, it seems, have their physical labors reduced; their mental activities should be actually increased. They should endeavor, too, by means of their counsel and practical wisdom to be of as much service as possible to their friends and to the young, and above all to the state.”



Cow - New Documentary

“The hegemony of America in the community of the free world creates some curious moral hazards. We are ironically held responsible for disparities in wealth and well-being which are chiefly due to differences in standards of productivity. But they lend themselves with a remarkable degree of plausibility to the Marxist indictment, which attributes all such differences to exploitation. Thus, every effort we make to prove the virtue of our “way of life” by calling attention to our prosperity is used by our enemies and detractors as proof of our guilt. Our experience of an ironic guilt when we pretend to be innocent is thus balanced by the irony of an alleged guilt when we are comparatively innocent.”

- Reinhold Niebuhr, The Irony of American History

Watch this documentary, pause, and reflect for a moment - understand how much suffering you are unleashing just for the sake of your gastrointestinal pleasures...



Monday, March 7, 2022

Facts About Bones

Over the past couple of decades, scientists have discovered that bones are participants in complex chemical conversations with other parts of the body, including the kidneys and the brain; fat and muscle tissue; and even the microbes in our bellies.

It’s as if you suddenly found out that the studs and rafters in your house were communicating with your toaster.

Scientists are still deciphering all the ways that bone cells can signal other organs, and how they interpret and respond to molecular messages coming from elsewhere. Already, physician-scientists are starting to consider how they might take advantage of these cellular conversations to develop new treatments to protect or strengthen bone.



- More Here



Sunday, March 6, 2022

War 101 - “Rule of Threes”

Over the past two decades I have read about and practiced managing cognitive load, decision-making under uncertainty, and the like, and all that time spent came to my rescue when Max was ill.

For close to two years, I kept my intent simple - keeping Max alive. The simple tasks to achieve that intent were making sure he ate well, got enough exercise, and that this home stayed peaceful. That's it. Nothing else mattered.

In a literal sense, nothing else was in my head, so my decision-making process was free of any other man-made bullshit. I caught myself making good judgments during crucial stages of Max's illness. There was no magic or miracle, just sheer hard work over decades. I knew shit would hit the fan for me like it does for everyone, and I needed to be clear-headed.

Brilliant, brilliant piece on how the same simple principles apply to war:

Modern militaries are usually organized according to the “Rule of Threes.” Three fire teams in a squad, three squads in a platoon, three platoons in a company. Why three? Because under the stress of combat, you can’t really keep more than three things in mind.

In Ukraine, the Rule of Threes will be the most practical organizing principle. First, it ensures clear lines of authority, responsibility, and management. This is critical in ground combat. When you take accurate enemy fire, chaos ensues. To maintain cohesion, your responsibility can’t extend beyond three people, be they team members or unit leaders.

Likewise, according to the Rule of Threes, you should have three tasks: a main task and two supporting tasks. If you have three people under your command, you have someone to cover the main task, and someone to cover each of the supporting tasks. The Rule of Three may sound primitive on paper, but the stress of ground combat will rid you of your fine motor coordination, your peripheral vision, and your ability to think past three. Sticking to “three tasks” reduces friction, keeps things simple, and gives you as little extraneous to deal with as possible—while allowing you to do the most you reasonably can.

[---]

A firefight can last minutes or hours, you say. In a fight, whether we’re attacking or defending, we need to shoot, move, and communicate. We can rotate positions throughout the room and take turns shooting out of this window—that’s the main effort. If that position is taken out, we switch to another window, or another room on this floor, or another floor. If all of our positions are taken out and we’re still alive, we go support another squad.

“If you’re not shooting, you’re moving,” you say, which you remember from this article. “If you’re not moving, you’re communicating—use hand and arm signals if you have to. You need to make sure everyone knows what you’re doing.”

Shooting is important to your new job. But to shoot effectively, other things have to happen. This is why the military loves clichés, acronyms, and profanity: The simpler the slogan, the easier it is to remember, and the easier it is to remember, the better your chances of executing under stress. “Shoot, move, and communicate,” Irina repeats. “Shoot, move, and communicate.” She looks at you uncertainly.

“Seriously. ‘Shoot, move, and communicate.’”


Wednesday, March 2, 2022

Evolution Of Whales

We know more about the universe outside our solar system than we do about the depths of our own ocean.

- Josh Trosvig, Captain of Fishing Boat in Alaska

We know that dogs evolved from wolves, that birds such as chickens are the closest living relatives of dinosaurs, and that sapiens evolved from apes. But the evolution of whales, the largest mammals, is not common knowledge.

Whales are among the largest animals to ever exist on Earth, with some adult blue whales reaching 180 tonnes, nearly 21 times the weight of a Tyrannosaurus rex. And the long history of whales, which spans more than 50 million years, is chock-full of surprises. The earliest whales lived near water—but not in it—and they looked very different from the whales we know today. Pakicetus, for example, was a wolf-sized animal with four legs, a long snout, and a big tail. It hunted small prey along the coastal margins of Pakistan some 50 million years ago. But what links Pakicetus and the other early whales to modern cetaceans is a distinctive anatomical feature they all share: a bulbous structure in their ears known as an involucrum. This ancient structure may assist today’s whales and dolphins in hearing underwater. Early whales also had distinctive double-pulley ankle bones seen only in even-toed hoofed mammals, like camels and cows, which are now understood to be whales’ closest relatives.

As cetaceans evolved, forelimbs became flippers, nostrils shifted back to become blowholes, and legs eventually disappeared. It took whales about 10 million years to transition from land to sea, and they may have done so for a variety of reasons, which include escaping predation on land and capitalizing on abundant marine prey. But once whales were completely aquatic, they spent the next 40 million years adapting fully to life in the ocean. For much of this time, most cetaceans were little bigger than a humpback whale. Then, beginning around 4.5 million years ago, whales underwent another remarkable transformation. Many began bulking up dramatically, eventually reaching their current extreme sizes. That allowed them to bump up the amount of prey they consumed in one gulp, swim vast distances to reach places with abundant food sources, and fight off most marine predators.

That’s the basic, broad trajectory, but huge gaps remain in our knowledge of whales—including how baleen evolved in some species. And that’s where whale fossils come in. Fossil bones preserve enormous amounts of information, and whale fossils from the Oligocene epoch are particularly valuable, given the many changes that cetaceans went through at that time. The trouble is that marine fossils from that epoch are exceedingly difficult to find in most parts of the world. Sea levels during the Oligocene were much lower than today, so fossilized marine life from that time tends to lie deep beneath the ocean—beyond the reach of paleontologists.

- More Here