Showing posts with label Math. Show all posts

Saturday, October 25, 2025

Well-Defined Problems vs. Poorly-Defined Problems

I hate compliments. This is not false humility: I really do hate compliments, and to make it worse, red flags go up about the person who pays them. In other words, I don't trust humans who compliment me.

A few times in my life I received a compliment I actually liked, since I work hard for it.

That word is - wisdom. A few times in my life, I heard someone utter the phrase - you are wise.

And I gladly took that compliment as a commitment to work harder.

Work hard for what? To be not bad at poorly defined problems, a.k.a. trying to be a little less stupid tomorrow than I am today.

This is such a wonderful article on the same - Why aren't smart people happier?

I think all of our various tests of intelligence aren’t as different as they seem. They’re all full of problems that have a few important things in common:

  • There are stable relationships between the variables.
  • There’s no disagreement about whether the problems are problems, or whether they’ve been solved.
  • They have clear boundaries; there is a finite amount of relevant information and possible actions.
  • The problems are repeatable. Although the details may change, the process for solving the problems does not.

I think a good name for problems like these is well-defined. Well-defined problems can be very difficult, but they aren’t mystical. You can write down instructions for solving them. And you can put them on a test. In fact, standardized test items must be well-defined problems, because they require indisputable answers. Matching a word to its synonym, finding the area of a trapezoid, putting pictures in the correct order—all common tasks on IQ tests—are well-defined problems.

Spearman was right that people differ in their ability to solve well-defined problems. But he was wrong that well-defined problems are the only kind of problems. “Why can’t I find someone to spend my life with?” “Should I be a dentist or a dancer?” and “How do I get my child to stop crying?” are all important but poorly defined problems. “How can we all get along?” is not a multiple-choice question. Neither is “What do I do when my parents get old?” And getting better at rotating shapes or remembering state capitals is not going to help you solve them.

We all share some blame with Spearman, of course, because everybody talks about smarts as if they’re one thing. Google “smartest people in the world” and most of the results will be physicists, mathematicians, computer scientists, and chess masters. These are all difficult problems, but they are well-defined, and that makes it easy to rank people. The best chess player in the world is the one who can beat everybody else. The best mathematician is the one who can solve the problems that nobody else could solve. That makes it seem like the best chess players and mathematicians are not just the smartest in their fields, but the smartest in the whole world.

THE POORLY DEFINED PROBLEM OF BEING ALIVE

There is, unfortunately, no good word for “skill at solving poorly defined problems.” Insight, creativity, agency, self-knowledge—they’re all part of it, but not all of it. Wisdom comes the closest, but it suggests a certain fustiness and grandeur, and poorly defined problems aren’t just dramatic questions like “how do you live a good life”; they’re also everyday questions like “how do you host a good party” and “how do you figure out what to do today.”

One way to spot people who are good at solving poorly defined problems is to look for people who feel good about their lives; “how do I live a life I like” is a humdinger of a poorly defined problem. The rules aren’t stable: what makes you happy may make me miserable. The boundaries aren’t clear: literally anything I do could make me more happy or less happy. The problems are not repeatable: what made me happy when I was 21 may not make me happy when I’m 31. Nobody else can be completely sure whether I’m happy or not, and sometimes I’m not even sure. In fact, some people might claim that I’m not really happy, no matter what I say, unless I accept Jesus into my heart or reach nirvana or fall in love—if I think I’m happy before all that, I’m simply mistaken about what happiness is!

This is why the people who score well on intelligence tests and win lots of chess games are no happier than the people who flunk the tests and lose at chess: well-defined and poorly defined problems require completely different problem-solving skills. Life ain’t chess! Nobody agrees on the rules, the pieces do whatever they want, and the board covers the whole globe, as well as the inside of your head and possibly several metaphysical planes as well.

[---]

So if you’re really looking for a transformative change in your happiness, you might be better off reading something ancient. The great thinkers of the distant past seemed obsessed with figuring out how to live good lives: Socrates, Plato, Aristotle, Epicurus, Buddha, Confucius, Jesus, Marcus Aurelius, St. Augustine, even up through Thoreau and Vivekananda. But at some point, this kind of stuff apparently fell out of fashion.

And hey, maybe that’s because there’s just no more progress to make on the poorly defined problem of “how do we live.” But most well-defined problems were once defined poorly. For example, “how do we land on the moon” was a hopelessly poorly defined problem for most of human history. It only makes sense if you know that the moon is a big rock you can land on and not, say, a god floating in the sky. We slowly put some definitions around that problem, and then one day we sent an actual dude to the moon and he walked around and was like “I’m on the moon now.” If we can do that, maybe we can also figure out how to live good lives. It certainly seems worth it to keep trying.


 

Sunday, June 1, 2025

The Logic Of Buddhist Philosophy Goes Beyond Simple Truth

Let’s start by turning back the clock. It is India in the fifth century BCE, the age of the historical Buddha, and a rather peculiar principle of reasoning appears to be in general use. This principle is called the catuskoti, meaning ‘four corners’. It insists that there are four possibilities regarding any statement: it might be true (and true only), false (and false only), both true and false, or neither true nor false.

We know that the catuskoti was in the air because of certain questions that people asked the Buddha, in exchanges that come down to us in the sutras. Questions such as: what happens to enlightened people after they die? It was commonly assumed that an unenlightened person would keep being reborn, but the whole point of enlightenment was to get out of this vicious circle. And then what? Did you exist, not, both or neither? The Buddha’s disciples clearly expected him to endorse one and only one of these possibilities. This, it appears, was just how people thought.

At around the same time, 5,000km to the west in Ancient Athens, Aristotle was laying the foundations of Western logic along very different lines. Among his innovations were two singularly important rules. One of them was the Principle of Excluded Middle (PEM), which says that every claim must be either true or false with no other options (the Latin name for this rule, tertium non datur, means literally ‘a third is not given’). The other rule was the Principle of Non-Contradiction (PNC): nothing can be both true and false at the same time.

Writing in his Metaphysics, Aristotle defended both of these principles against transgressors such as Heraclitus (nicknamed ‘the Obscure’). Unfortunately, Aristotle’s own arguments are somewhat tortured – to put it mildly – and modern scholars find it difficult even to say what they are supposed to be. Yet Aristotle succeeded in locking the PEM and the PNC into Western orthodoxy, where they have remained ever since. Only a few intrepid spirits, most notably G W F Hegel in the 19th century, ever thought to challenge them. And now many of Aristotle’s intellectual descendants find it very difficult to imagine life without them.

That is why Western thinkers – even those sympathetic to Buddhist thought – have struggled to grasp how something such as the catuskoti might be possible. Never mind a third not being given, here was a fourth – and that fourth was itself a contradiction. How to make sense of that?

Well, contemporary developments in mathematical logic show exactly how to do it. In fact, it’s not hard at all.

At the core of the explanation, one has to grasp a very basic mathematical distinction. I speak of the difference between a relation and a function. A relation is something that relates a certain kind of object to some number of others (zero, one, two, etc). A function, on the other hand, is a special kind of relation that links each such object to exactly one thing. Suppose we are talking about people. Mother of and father of are functions, because every person has exactly one (biological) mother and exactly one father. But son of and daughter of are relations, because parents might have any number of sons and daughters. Functions give a unique output; relations can give any number of outputs. Keep that distinction in mind; we’ll come back to it a lot.

Now, in logic, one is generally interested in whether a given claim is true or false. Logicians call true and false truth values. Normally, and following Aristotle, it is assumed that ‘value of’ is a function: the value of any given assertion is exactly one of true (or T), and false (or F). In this way, the principles of excluded middle (PEM) and non-contradiction (PNC) are built into the mathematics from the start. But they needn’t be.

To get back to something that the Buddha might recognise, all we need to do is make value of into a relation instead of a function. Thus T might be a value of a sentence, as can F, both, or neither. We now have four possibilities: {T}, {F}, {T,F} and { }. The curly brackets, by the way, indicate that we are dealing with sets of truth values rather than individual ones, as befits a relation rather than a function. The last pair of brackets denotes what mathematicians call the empty set: it is a collection with no members, like the set of humans with 17 legs.

Thus the four kotis (corners) of the catuskoti appear before us.

In case this all sounds rather convenient for the purposes of Buddhist apologism, I should mention that the logic I have just described is called First Degree Entailment (FDE). It was originally constructed in the 1960s in an area called relevant logic. Exactly what this is need not concern us, but the US logician Nuel Belnap argued that FDE was a sensible system for databases that might have been fed inconsistent or incomplete information. All of which is to say, it had nothing to do with Buddhism whatsoever.
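Treating "value of" as a relation is easy to make concrete. Here is a minimal Python sketch (my own illustration, not from the article): a sentence's value is a set of classical truth values, and the standard FDE negation and conjunction operate on those sets.

```python
T, F = "T", "F"

# The four kotis as sets of classical truth values.
TRUE_ONLY = frozenset({T})
FALSE_ONLY = frozenset({F})
BOTH = frozenset({T, F})       # both true and false
NEITHER = frozenset()          # the empty set: neither

def fde_not(v):
    """FDE negation: swap T and F inside the value set."""
    out = set()
    if T in v:
        out.add(F)
    if F in v:
        out.add(T)
    return frozenset(out)

def fde_and(v, w):
    """FDE conjunction: true if both can be true, false if either can be false."""
    out = set()
    if T in v and T in w:
        out.add(T)
    if F in v or F in w:
        out.add(F)
    return frozenset(out)
```

Notice that negation maps "both" to "both" and "neither" to "neither" - behaviour the classical, function-based picture simply cannot express.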

Even so, you might be wondering how on earth something could be both true and false, or neither true nor false. In fact, the idea that some claims are neither true nor false is a very old one in Western philosophy. None other than Aristotle himself argued for one kind of example. In the somewhat infamous Chapter 9 of De Interpretatione, he claims that contingent statements about the future, such as ‘the first pope in the 22nd century will be African’, are neither true nor false. The future is, as yet, indeterminate. So much for his arguments in the Metaphysics.

The notion that some things might be both true and false is much more unorthodox. But here, too, we can find some plausible examples. Take the notorious ‘paradoxes of self-reference’, the oldest of which, reputedly discovered by Eubulides in the fourth century BCE, is called the Liar Paradox. Here’s its commonest expression:

This statement is false.

Where’s the paradox? If the statement is true, then it is indeed false. But if it is false, well, then it is true. So it seems to be both true and false.


- More Here
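In the four-valued scheme described above, the Liar stops being a contradiction and becomes an equation: a sentence asserting its own falsity must have a value set equal to that of its own negation. A toy check of my own (assuming the FDE-style negation that swaps T and F) shows which corners satisfy it:

```python
T, F = "T", "F"

def neg(v):
    """Swap T and F inside a set of truth values."""
    out = set()
    if T in v:
        out.add(F)
    if F in v:
        out.add(T)
    return frozenset(out)

kotis = [frozenset({T}), frozenset({F}), frozenset({T, F}), frozenset()]

# The Liar asserts its own falsity, so its value v must satisfy v == neg(v).
liar_solutions = [v for v in kotis if neg(v) == v]
# Only "both" and "neither" survive -- precisely the two corners that
# Aristotle's PEM and PNC rule out.
```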


Friday, January 24, 2025

Two Types of Uncertainty

Uncertainty is thus not an intrinsic property of events, Spiegelhalter writes, but rather a reflection of the knowledge, perspective and assumptions of the person trying to understand or predict those events. It varies from person to person and situation to situation, even when the circumstances are identical. It is subjective and shaped by what we know or don’t know at a given time.

Spiegelhalter distinguishes two main types of uncertainty: aleatory uncertainty, that which we cannot know, and epistemic uncertainty, that which we do not know. Understanding this distinction is crucial for making informed decisions. Whereas aleatory uncertainty is often irreducible, epistemic uncertainty can be minimized through better data collection, refined models or deeper investigation.

- Review of the book The Art of Uncertainty: How to Navigate Chance, Ignorance, Risk and Luck by David Spiegelhalter
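A toy simulation (my own illustration, not from the book) makes the distinction concrete: no amount of data predicts the next roll of a fair die, but data steadily shrinks our ignorance about a coin's unknown bias.

```python
import random

random.seed(0)

# Aleatory: a fair die. Watching 10,000 rolls tells you nothing
# about roll number 10,001.
rolls = [random.randint(1, 6) for _ in range(10_000)]

# Epistemic: a coin with an unknown bias. The bias is a fixed fact we
# merely don't know, so more observations reduce the uncertainty.
true_bias = 0.7  # hidden from the observer
flips = [random.random() < true_bias for _ in range(10_000)]

estimate_after_10 = sum(flips[:10]) / 10        # still quite noisy
estimate_after_10k = sum(flips) / len(flips)    # close to the true 0.7
```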


Saturday, April 2, 2022

Fish Can Learn Basic Arithmetic

This is an anthropomorphic comparison, and the fish succeeded. But there are zillions more kinds of fish emotion and intelligence we cannot comprehend - and to top it off, human arrogance doesn't even allow us to try.

It is sheer stupidity, and barbaric, to slaughter billions of fish every day for gastrointestinal pleasure (a.k.a. your taste buds, tradition, culture, and nostalgia - whatever that crap means).

The rules were simple: If the shapes in the original image were blue, head for the door with one extra shape; if they were yellow, go for the door with one fewer. Choosing the correct door earned the fish a food reward: pellets for cichlids, and earthworms, shrimp, or mussels for stingrays.

Only six of the eight cichlids and four of the eight stingrays successfully completed their training. But those that made it through testing performed well above chance, the researchers report today in Scientific Reports.

When shown three blue shapes, for example, the animals correctly chose the door with four blue shapes, instead of two, with over 96% and 82% accuracy for stingrays and cichlids, respectively. Both species found subtraction slightly more difficult than addition on all the tests—a feeling likely shared by most toddlers.

To make sure the animals weren’t just memorizing patterns, the researchers mixed in new tests varying the size and number of the shapes. In one trial, fish presented with three blue shapes were asked to choose between doors with four or five shapes—a choice of “plus one” or “plus two” instead of the usual “plus one” or “minus one.” Rather than simply selecting the larger number, the animals consistently followed the “plus one” directive—indicating they truly understood the desired association.

The results aren’t all that surprising, given that fish have been shown to distinguish between relative quantities before. But this new study shows fish have a different strategy for dealing with small numbers that allows them to memorize and manipulate specific values—without the help of fingers to count, says zoologist Vera Schluessel, who led the study. And because cichlids and cartilaginous stingrays last shared an ancestor more than 400 million years ago, the study suggests this talent arose early in fish evolution.

“It certainly didn’t blow my mind that they’re capable of doing it,” says Culum Brown, a behavioral ecologist at Macquarie University who was not involved in the study. “But the fact that they could separate these two strategies out was really cool.”

Other animals, including parrots and bees, have demonstrated a similar aptitude for working with numbers. Despite not having the brain structures humans rely on for cognition, they manage to match our basic arithmetic skills, Schluessel notes.

“Many people think that they’re really stupid—fish in general,” Schluessel says. “They actually do have personalities … and they also can learn quite complex tasks.”

People often use the presumed ignorance of fish to excuse “awful” commercial fishing practices and callous pet maintenance, she adds. She hopes her work will encourage humans to see fish as sentient creatures like us that deserve to be treated with more respect.

“That’s the trend, you know—we’re basically chipping away at human arrogance,” Brown says. “We think that we’re the pinnacle of evolution, but we’re not.”

Saturday, March 19, 2022

Wise Words On AI

I have skin in the game and I am "in" the field. A wise and timely piece from Gary Marcus:

In time we will see that deep learning was only a tiny part of what we need to build if we’re ever going to get trustworthy AI.

[---]

When the stakes are higher, though, as in radiology or driverless cars, we need to be much more cautious about adopting deep learning. When a single error can cost a life, it’s just not good enough. Deep-learning systems are particularly problematic when it comes to “outliers” that differ substantially from the things on which they are trained.

[---]

As AI researchers Emily Bender, Timnit Gebru, and colleagues have put it, deep-learning-powered large language models are like “stochastic parrots,” repeating a lot, understanding little.

[---]

For at least four reasons, hybrid AI, not deep learning alone (nor symbols alone) seems the best way forward:

  • So much of the world’s knowledge, from recipes to history to technology is currently available mainly or only in symbolic form. Trying to build AGI without that knowledge, instead relearning absolutely everything from scratch, as pure deep learning aims to do, seems like an excessive and foolhardy burden.
  • Deep learning on its own continues to struggle even in domains as orderly as arithmetic. A hybrid system may have more power than either system on its own.
  • Symbols still far outstrip current neural networks in many fundamental aspects of computation. They are much better positioned to reason their way through complex scenarios, can do basic operations like arithmetic more systematically and reliably, and are better able to precisely represent relationships between parts and wholes (essential both in the interpretation of the 3-D world and the comprehension of human language). They are more robust and flexible in their capacity to represent and query large-scale databases. Symbols are also more conducive to formal verification techniques, which are critical for some aspects of safety and ubiquitous in the design of modern microprocessors. To abandon these virtues rather than leveraging them into some sort of hybrid architecture would make little sense.
  • Deep learning systems are black boxes; we can look at their inputs, and their outputs, but we have a lot of trouble peering inside. We don’t know exactly why they make the decisions they do, and often don’t know what to do about them (except to gather more data) if they come up with the wrong answers. This makes them inherently unwieldy and uninterpretable, and in many ways unsuited for “augmented cognition” in conjunction with humans. Hybrids that allow us to connect the learning prowess of deep learning, with the explicit, semantic richness of symbols, could be transformative.

[---]

For the first time in 40 years, I finally feel some optimism about AI. As cognitive scientists Chaz Firestone and Brian Scholl eloquently put it: “There is no one way the mind works, because the mind is not one thing. Instead, the mind has parts, and the different parts of the mind operate in different ways: Seeing a color works differently than planning a vacation, which works differently than understanding a sentence, moving a limb, remembering a fact, or feeling an emotion.” Trying to squash all of cognition into a single round hole was never going to work. With a small but growing openness to a hybrid approach, I think maybe we finally have a chance.

With all the challenges in ethics and computation, and the knowledge needed from fields like linguistics, psychology, anthropology, and neuroscience, and not just mathematics and computer science, it will take a village to raise an AI. We should never forget that the human brain is perhaps the most complicated system in the known universe; if we are to build something roughly its equal, open-hearted collaboration will be key.

Amidst all this hype, the reality of what's currently going on in AI is nothing but slow, incremental progress, as always happens in any scientific discipline (sprinkled with ego clashes).

We should be prudent and use AI only where the stakes aren't high. We should force ourselves to keep a human in the loop in most AI applications. If nothing else, it will help improve the AI models.

Remember - whatever we have as AI exists only because it is powered by data. If there is no relevant data, there will be no magical AI.

Last time I checked, most humans refuse to change their minds even when given the right data. So let's give AI models a break and use them to our advantage where it's applicable and safe.

All the problems with AI are inherited from the age-old human problem of longing for panacea and magic. As usual, the remedy is to self-reflect and grow up.


  

Sunday, February 6, 2022

Why Feature Engineering, Domain & Holistic Knowledge Is Important In Modeling & Life

E. O. Wilson called this Consilience: The Unity of Knowledge

Still, if history and science have taught us anything, it is that passion and desire are not the same as truth. The human mind evolved to believe in the gods. It did not evolve to believe in biology. Acceptance of the supernatural conveyed a great advantage throughout prehistory when the brain was evolving. Thus it is in sharp contrast to biology, which was developed as a product of the modern age and is not underwritten by genetic algorithms. The uncomfortable truth is that the two beliefs are not factually compatible. As a result those who hunger for both intellectual and religious truth will never acquire both in full measure.

[---]

The greatest challenge today, not just in cell biology and ecology but in all of science, is the accurate and complete description of complex systems.

Robert Rubin, in his book In an Uncertain World: Tough Choices from Wall Street to Washington, explains why brute-force, math-based modeling is pure bullshit:

Sound decisions are based on identifying relevant variables and attaching probabilities to each of them. That's an analytic process but also involves subjective judgements. The ultimate decision then reflects all of this input, but also instinct, experience, and 'feel'. All the time bearing in mind that reality is always more complex than concepts and models.

A true probabilistic view of life quickly leads to the recognition that almost all significant issues are enormously complex and demand that one delve into those complexities to identify the relevant considerations and the inevitable trade-offs. With an enormous number of competing considerations, the key to reaching the best possible decision is to identify all of them and decide what odds and import to attach to each.

In order to live sanely in an uncertain world without believing in magic, we need to embrace a probabilistic view of the world and constantly update those probabilities via a Bayesian loop.
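One turn of that Bayesian loop is just a few lines. A minimal sketch (the numbers in the example are illustrative, not from the text):

```python
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Revise P(hypothesis) after seeing one piece of evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# A rare condition (1% prior), and a test that fires 90% of the time when
# the condition holds but 5% of the time when it doesn't:
posterior = bayes_update(0.01, 0.90, 0.05)
# A positive test moves the probability from 1% to roughly 15%, not to 90%
# -- exactly the kind of correction the probabilistic view demands.
```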

We humans are the victims of an asymmetry in the perception of random events. We attribute our successes to our skills, and our failures to external events outside our control, namely to randomness. 
- Nassim Nicholas Taleb

Thursday, December 23, 2021

Bullshit Detector Check List Before Adopting & Paying For Any Trendy Technology

A great bullshit-detector checklist for new technologies. But such a comprehensive list also applies to Soviet-style central planning systems like tradition, culture, religion, economists, Silicon Valley saviors, et al.

  1. What sort of person will the use of this technology make of me?
  2. What habits will the use of this technology instill?
  3. How will the use of this technology affect my experience of time?
  4. How will the use of this technology affect my experience of place?
  5. How will the use of this technology affect how I relate to other people?
  6. How will the use of this technology affect how I relate to the world around me?
  7. What practices will the use of this technology cultivate?
  8. What practices will the use of this technology displace?
  9. What will the use of this technology encourage me to notice?
  10. What will the use of this technology encourage me to ignore?
  11. What was required of other human beings so that I might be able to use this technology?
  12. What was required of other creatures so that I might be able to use this technology?
  13. What was required of the earth so that I might be able to use this technology?
  14. Does the use of this technology bring me joy? 
  15. Does the use of this technology arouse anxiety?
  16. How does this technology empower me? At whose expense?
  17. What feelings does the use of this technology generate in me toward others?
  18. Can I imagine living without this technology? Why, or why not?
  19. How does this technology encourage me to allocate my time?
  20. Could the resources used to acquire and use this technology be better deployed?
  21. Does this technology automate or outsource labor or responsibilities that are morally essential?
  22. What desires does the use of this technology generate?
  23. What desires does the use of this technology dissipate?
  24. What possibilities for action does this technology present? Is it good that these actions are now possible?
  25. What possibilities for action does this technology foreclose? Is it good that these actions are no longer possible?
  26. How does the use of this technology shape my vision of a good life?
  27. What limits does the use of this technology impose upon me?
  28. What limits does my use of this technology impose upon others?
  29. What does my use of this technology require of others who would (or must) interact with me?
  30. What assumptions about the world does the use of this technology tacitly encourage?
  31. What knowledge has the use of this technology disclosed to me about myself?
  32. What knowledge has the use of this technology disclosed to me about others? Is it good to have this knowledge?
  33. What are the potential harms to myself, others, or the world that might result from my use of this technology?
  34. Upon what systems, technical or human, does my use of this technology depend? Are these systems just?
  35. Does my use of this technology encourage me to view others as a means to an end?
  36. Does using this technology require me to think more or less?
  37. What would the world be like if everyone used this technology exactly as I use it?
  38. What risks will my use of this technology entail for others? Have they consented?
  39. Can the consequences of my use of this technology be undone? Can I live with those consequences?
  40. Does my use of this technology make it easier to live as if I had no responsibilities toward my neighbor?
  41. Can I be held responsible for the actions which this technology empowers? Would I feel better if I couldn’t?

A brilliant list, but it is impossible to come up with such an exhaustive list for every one of our decisions.

Garrett Hardin in his book Filters Against Folly has a much easier way to tackle this issue by asking a simple question - "Then What?". 

He coined the phrase ecolacy:
In Filters Against Folly, Hardin outlines his approach to rational thinking through three major filters: literacy, numeracy, and “ecolacy.”

  • Literacy is easy to define: What do the words mean? Language, as Hardin points out, can be used to inhibit or enhance clear thinking. (Think about how politicians use certain words and phrases to frame issues.)  
  • Numeracy is straightforward as well: What are the quantities involved? As Hardin saw it, the failure to invoke quantities is a major weak point in critical analysis. Any competent analyst (not just in business, but in all human endeavor) must be in tune with quantities, numbers, and scale.
  • Ecolacy: As for his “ecolate” filter, Hardin focuses on the first law of ecology: You can never merely do one thing. Even the most numerate and literate analyses usually forget to ask the crucial question: “And then what?” It’s a messy question; asking it leads you to a lot of dead ends. But that doesn’t mean it should be ignored. The second order of effects can often dwarf the first.

So always think of the second-order effects. Always. It goes without saying there could be third- and nth-order effects.

Tuesday, May 4, 2021

Condorcet's Jury Theorem

Condorcet's jury theorem is a political science theorem about the relative probability of a given group of individuals arriving at a correct decision. The theorem was first expressed by the Marquis de Condorcet in his 1785 work Essay on the Application of Analysis to the Probability of Majority Decisions.

The assumptions of the simplest version of the theorem are that a group wishes to reach a decision by majority vote. One of the two outcomes of the vote is correct, and each voter has an independent probability p of voting for the correct decision. The theorem asks how many voters we should include in the group. The result depends on whether p is greater than or less than 1/2:

  • If p is greater than 1/2 (each voter is more likely to vote correctly), then adding more voters increases the probability that the majority decision is correct. In the limit, the probability that the majority votes correctly approaches 1 as the number of voters increases.
  • On the other hand, if p is less than 1/2 (each voter is more likely to vote incorrectly), then adding more voters makes things worse: the optimal jury consists of a single voter.
- That's from Wikipedia
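The theorem is easy to verify numerically. The majority of n independent voters (n odd, so no ties) is correct exactly when more than n/2 of them vote correctly, which is a binomial tail sum. A quick sketch of my own:

```python
from math import comb

def majority_correct(p, n):
    """P(the majority of n voters is right), each voter independently
    right with probability p. Assumes n is odd so there are no ties."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# p > 1/2: adding voters helps, and the probability climbs toward 1.
# p < 1/2: adding voters hurts, and a single voter is optimal.
```

For example, with p = 0.6 a lone voter is right 60% of the time, a jury of three is right 64.8% of the time, and larger juries do better still; with p = 0.4 the ordering reverses.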

In other words, the wisdom of crowds depends on each individual sapiens' wisdom. If most people are knowledgeable, then the crowd's wisdom works; but if most people lead merely "busy," self-centered lives - or worse, hold a biased view of the world - then the wisdom of the crowd goes down the drain.

This has huge consequences for democracy, the free market, and, yeah, civilization as we know it.

I rest my case. It's amazing how we made it this far.



Sunday, January 17, 2021

Nancy Andreasen's Insights on Creativity and Mental Disorder

Maria Popova of Brain Pickings has another gem, this time illustrating the lifelong work of Nancy Andreasen to discover the causes behind creativity, including its links to mental disorders such as bipolar disorder, Asperger's syndrome, and depression.

I have written many times over the years on depression and how it has helped me, especially in the past year after Max passed away.

One of the reasons evolution "preserved" depression in us may be that it makes us focus and reflect on things sans distraction until we reach a solution. Perpetual busyness never helps us focus and reflect; hence people tend to outsource their questions and problems to everyone from cable-news morons to positivity gurus.

Here are some brilliant insights from Nancy Andreasen's book The Creating Brain: The Neuroscience of Genius

Although many writers had had periods of significant depression, mania, or hypomania, they were consistently appealing, entertaining, and interesting people. They had led interesting lives, and they enjoyed telling me about them as much as I enjoyed hearing about them. Mood disorders tend to be episodic, characterized by relatively brief periods of low or high mood lasting weeks to months, interspersed with long periods of normal mood (known as euthymia to us psychiatrists). All the writers were euthymic at the time that I interviewed them, and so they could look back on their periods of depression or mania with considerable detachment. They were also able to describe how abnormalities in mood state affected their creativity. Consistently, they indicated that they were unable to be creative when either depressed or manic.

[---]

One point of view … is that gifted people are in fact supernormal or superior in many ways. My writers certainly were. They were charming, fun, articulate, and disciplined. They typically followed very similar schedules, getting up in the morning and allocating a large chunk of time to writing during the earlier part of the day. They would rarely let a day go by without writing. In general, they had a close relationship with friends and family. They manifested the Freudian definition of health: lieben und arbeiten, “to love and to work.” On the other hand, they also manifested the alternative common point of view about the nature of genius: that it is “to madness near allied.” Many definitely had experienced periods of significant mood disorder. Importantly, though handicapping creativity when they occurred, these periods of mood disorder were not permanent or long-lived. In some instances, they may even have provided powerful material upon which the writer could later draw, as a Wordsworthian “emotion recollected in tranquility".

[---]

Many personality characteristics of creative people … make them more vulnerable, including openness to new experiences, a tolerance for ambiguity, and an approach to life and the world that is relatively free of preconceptions. This flexibility permits them to perceive things in a fresh and novel way, which is an important basis for creativity. But it also means that their inner world is complex, ambiguous, and filled with shades of gray rather than black and white. It is a world filled with many questions and few easy answers. While less creative people can quickly respond to situations based on what they have been told by people in authority — parents, teachers, pastors, rabbis, or priests — the creative person lives in a more fluid and nebulous world. He or she may have to confront criticism or rejection for being too questioning, or too unconventional. Such traits can lead to feelings of depression or social alienation. A highly original person may seem odd or strange to others. Too much openness means living on the edge. Sometimes the person may drop over the edge… into depression, mania, or perhaps schizophrenia.

[---]

All human beings (and their brains) have to cope with the fact that their five senses gather more information than even the magnificent human brain is able to process. To put this another way: we need to be able to ignore a lot of what is happening around us — the smell of pizza baking, the sound of the cat meowing, or the sight of birds flying outside the window — if we are going to focus our attention and concentrate on what we are doing (in your case, for example, reading this book). Our ability to filter out unnecessary stimuli and focus our attention is mediated by brain mechanisms in regions known as the thalamus and the reticular activating system. 

 

Monday, January 4, 2021

What I've Been Reading

Science, the discipline in which we should find the harshest skepticism, the most pin-sharp rationality, and the hardest-headed empiricism, has become home to a dizzying array of incompetence, delusion, lies, and self-deception.

[---]

That moral case - that making errors in science is much more than just an academic matter, because of the harm it can cause - applies similarly to fields of research that directly sacrifice lives. I'm referring, of course, to research on non-human animals, where the subjects are often 'euthanized' - that is, killed - as part of the experiment (for example, to examine their brains after a new drug has been administered). This kind of research is usually strictly regulated by government agencies since virtually everyone agrees it would be immoral to kill lab animals, or even just to cause them to suffer, for no good scientific reason. So animal studies don't just carry the usual responsibility of trying to produce accurate, replicable results without wasting resources. They also have an additional responsibility: ensuring that errors in their design and analysis don't render pointless the pain and death that they inevitably cause. 

Science Fictions: How Fraud, Bias, Negligence, and Hype Undermine the Search for Truth by Stuart Ritchie. 

There is a common misconception that religion, socialism, et al. are the only sources of pain, destruction, and death. No question they caused, and are still causing, immense pain, destruction, and death, but we conveniently forget that the only common factor among them is humans. Science is no exception; last time I checked, scientists are humans. 

Unless we, as a society, change the incentives at a meta-level from money and fame to morality, this is not going to change. 

The road to hell is paved with good intentions.

- Henry G. Bohn, A Hand-Book of Proverbs

Ultra-Hyped Fields:

Stem cells, genetics, epigenetics, machine learning, and brain imaging; for the past few years, a strong contender for the 'most hyped' award has been research on the microbiome - the countless millions of microbes that inhabit our bodies. 

Perverse Incentives:

Because studies reporting positive, flashy, novel, newsworthy results are rewarded so much more than others, scientists are incentivized to generate them to the detriment of everything else. To convince the reviewers and editors that their papers really do have all those qualities, too many of them end up bending or breaking the rules (the Mertonian norms of universalism, communality, disinterestedness, and organized skepticism). 

[---]

The system incentivizes scientists not to practice science, but simply to meet its own perverse demands. These incentives are at the root of so many of the dubious practices that undermine our research. 

Fixing Science (addresses symptoms but not causes - which is basic human nature):

Some of the proposed solutions for the corresponding issues:

Fake data and negligence: Algorithms and services such as GRIM and Statcheck already help, and could help more, with these issues. 
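To make the GRIM idea concrete, here is a minimal sketch of my own of the consistency check it performs (not the actual GRIM tool): a mean of n integer-valued responses can only take values k/n for some integer k, so a reported mean that no k/n rounds to is arithmetically impossible.

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """GRIM-style check: a mean of n integer-valued responses must equal
    some integer k divided by n. Test whether any achievable k/n rounds
    to the reported mean at the reported precision."""
    k = round(reported_mean * n)          # nearest achievable numerator
    achievable = round(k / n, decimals)   # closest mean integer data allows
    return achievable == round(reported_mean, decimals)

# A mean of 5.19 from n = 28 integer responses is impossible:
# 145/28 rounds to 5.18 and 146/28 to 5.21 -- nothing gives 5.19.
print(grim_consistent(5.19, 28))  # False
print(grim_consistent(5.18, 28))  # True
```

The real tools also handle decimals reported at other precisions and scales like Likert items, but the core arithmetic is this simple.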

Novelty bias: Journals should also publish null results, and should make authors responsible for publishing further work checking whether their findings replicate. 

Statistical bias and p-hacking: These cannot be removed completely, since it's scary to move towards a subjective metric (the current issue with nutrition studies). Use more of the Bayesian approach (although the prior is subjective), and other methods such as multiverse analysis (if we imagine infinite parallel universes, in each of which the analysis was run slightly differently, in what proportion of them would we find the opposite result? Would all these analyses converge to the same overall conclusion?).
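A multiverse analysis can be sketched in a few lines: rerun the "same" analysis under every combination of defensible choices and see whether the universes agree. Everything below (the toy data and the two analysis choices) is my own illustration, not taken from any real study.

```python
import itertools
import random
import statistics

random.seed(1)
# Toy dataset: outcome y with a weak true effect of x (slope 0.3) plus noise.
data = [(x, 0.3 * x + random.gauss(0, 1)) for x in range(60)]

def effect(sample):
    """Slope of a simple least-squares fit of y on x."""
    xs, ys = zip(*sample)
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in sample)
    den = sum((x - mx) ** 2 for x, _ in sample)
    return num / den

# Two arbitrary-but-defensible analysis choices, fully crossed:
outlier_rules = {"keep_all": lambda d: d,
                 "drop_extremes": lambda d: sorted(d, key=lambda p: p[1])[3:-3]}
subsets = {"full": lambda d: d,
           "first_half": lambda d: d[:30]}

results = {}
for (o_name, o_rule), (s_name, s_rule) in itertools.product(
        outlier_rules.items(), subsets.items()):
    results[(o_name, s_name)] = effect(o_rule(s_rule(data)))

# If the finding is robust, every "universe" should tell a similar story.
for spec, est in results.items():
    print(spec, round(est, 3))
```

With real studies the choice grid is much bigger (covariates, exclusions, transformations), but the logic is the same: one headline estimate hides an entire distribution of estimates you could have reported.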

Preprints and pre-registration take out a lot of issues. Registering a study involves posting a public, time-stamped document online that details what the researchers are planning to do, in advance of collecting any data. It allows us to see the hypotheses the researchers intended to test, so we can check whether any of them were switched mid-study. This is all about transparency. 

Replication crisis: Team science - large collaborative projects such as 'Plan S' and open access, with funding from governments and major funders, help force changes in research practice. These large-scale projects can directly address the replicability of their respective fields, and because the results are shared around a larger community of usually very opinionated scientists, they can also, in theory, act as a check on the biases of any individual scientist. 

Just as publishing more null results and replication studies is a more dependable way to build our knowledge, becoming more aware of the uncertain and preliminary nature of research is, in the long run, a better way to appreciate science fully. Let's work to resist our neophiliac, magpie-like focus on shiny research findings, and instead learn to value results that are solid, even if they're less immediately thrilling. In other words, let's Make Science Boring Again. 

[---]

Treating each study as a tentative step towards an answer, rather than as the answer itself.



Tuesday, September 8, 2020

How Pseudoscientists Get Away With It

Attempts to inculcate what are called “scientific habits of mind” are of little practical help. These habits of mind are not so easy to adopt. They invariably require some amount of statistics and probability, much of which is counterintuitive. One of the great values of science is to help us counter our normal biases and expectations by showing the actual measurements may not bear them out. Then there’s the math. No matter how much you try to hide it, much of the language of science is math (Galileo said that). And half the audience is gone with each equation (Stephen Hawking said that). It’s hard to imagine a successful program of making a non-scientifically trained public interested in adopting the rigors of scientific habits of mind. Indeed, I suspect there are some people who would be rightfully suspicious of changing their thinking to being habitually scientific. Many scientists are frustrated by the public’s inability to think like a scientist, but in fact it is neither easy nor always desirable to do so. And it is certainly not practical.

There is a more intuitive and simpler way to tell the difference between the real thing and the cheap knock-off. In fact, it is not so much intuitive as counterintuitive, so it takes a little bit of mental work. But the good thing is it works almost all the time. True science is mostly concerned with the unknown and the uncertain. If someone claims to have the ultimate answer or they know something for certain, the only thing for certain is they are trying to fool you. Mystery and uncertainty may not strike you right off as desirable or strong traits, but that is precisely where one finds the creative solutions that science has historically arrived at. Yes, science accumulates factual knowledge, but it is at its best when it generates new and better questions. Uncertainty is not a place of worry, but of opportunity. Progress lives at the border of the unknown. 

[---]

Good science provides clear evidence that may only go so far. Scientists have to speculate, which could go one of two or three ways, or maybe some way they haven’t seen yet. But like your blood pressure medicine, the stuff we know is reliable even if incomplete. Unsettled science is not unsound science. The honesty and humility of someone willing to tell you that they don’t have all the answers, but they have some thoughtful questions to pursue, is easy to distinguish from the charlatans who have ready answers or claim that nothing should be done until we are an impossible 100 percent sure. 

- More Here


Sunday, June 28, 2020

Modeling the Human Trajectory

I do not know whether most of the history of technological advance on Earth lies behind or ahead of us. I do know that it is far easier to imagine what has happened than what hasn’t. I think it would be a mistake to laugh off or dismiss the predictions of infinity emerging from good models of the past. Better to take them as stimulants to our imaginations. I believe the predictions of infinity tell us two key things. First, if the patterns of history continue, then some sort of economic explosion will take place again, the most plausible channel being AI. It wouldn’t reach infinity, but it could be big. Second, and more generally, I take the propensity for explosion as a sign of instability in the human trajectory. Gross world product, as a rough proxy for the scale of the human enterprise, might someday spike or plunge or follow a complicated path in between. The projections of explosion should be taken as indicators of the long-run tendency of the human system to diverge. They are hinting that realistic models of long-term development are unstable, and stable models of long-term development unrealistic. The credible range of future paths is indeed wide.

- More Here

Tuesday, June 2, 2020

Karl Friston On Generative Models & Immunological Dark Matter

How do the models you use differ from the conventional ones epidemiologists rely on to advise governments in this pandemic?

Conventional models essentially fit curves to historical data and then extrapolate those curves into the future. They look at the surface of the phenomenon – the observable part, or data. Our approach, which borrows from physics and in particular the work of Richard Feynman, goes under the bonnet. It attempts to capture the mathematical structure of the phenomenon – in this case, the pandemic – and to understand the causes of what is observed. Since we don’t know all the causes, we have to infer them. But that inference, and implicit uncertainty, is built into the models. That’s why we call them generative models, because they contain everything you need to know to generate the data. As more data comes in, you adjust your beliefs about the causes, until your model simulates the data as accurately and as simply as possible.

[---]

This is the first time the generative approach has been applied to a pandemic. Has it proved itself in other domains?

These techniques have enjoyed enormous success ever since they moved out of physics. They’ve been running your iPhone and nuclear power stations for a long time. In my field, neurobiology, we call the approach dynamic causal modelling (DCM). We can’t see brain states directly, but we can infer them given brain imaging data. In fact, we have pushed that idea even further. We think the brain may be doing its own dynamic causal modelling, reducing its uncertainty about the causes of the data the senses feed to it. We call this the free energy principle. But whether you’re talking about a pandemic or a brain, the essential problem is the same – you’re trying to understand a complex system that changes over time. In that sense, I’m not doing anything new. The data is generated by Covid-19 patients rather than neurons, but otherwise it’s just another day at the office.

You say generative models are also more efficient than conventional ones. What do you mean?

Epidemiologists currently tackle the inference problem by number-crunching on a huge scale, making use of high-performance computers. Imagine you want to simulate an outbreak in Scotland. Using conventional approaches, this would take you a day or longer with today’s computing resources. And that’s just to simulate one model or hypothesis – one set of parameters and one set of starting conditions. Using DCM, you can do the same thing in a minute. That allows you to score different hypotheses quickly and easily, and so to home in sooner on the best one.
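The hypothesis-scoring idea can be illustrated with a deliberately tiny generative model - nothing like real DCM, just a sketch of my own under made-up data: each hypothesis is a coarse prior over a latent growth rate, the model generates predicted case counts from that rate, and the evidence for a hypothesis is the average likelihood of the observed data under its prior.

```python
import math

# Toy observed daily case counts (illustrative, not real data).
observed = [10, 13, 18, 24, 31]

def log_likelihood(growth_rate, data, sigma=3.0):
    """Log-probability of the data under a toy generative model:
    cases grow exponentially from data[0], with Gaussian observation noise."""
    ll = 0.0
    for t, y in enumerate(data):
        predicted = data[0] * math.exp(growth_rate * t)
        ll += (-0.5 * ((y - predicted) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi)))
    return ll

# Two competing hypotheses about the latent cause (the growth rate),
# each expressed as a coarse prior over parameter values.
hypotheses = {
    "fast spread": [0.25, 0.30, 0.35],
    "slow spread": [0.05, 0.10, 0.15],
}

# Model evidence = average likelihood under each hypothesis's prior.
for name, rates in hypotheses.items():
    evidence = sum(math.exp(log_likelihood(r, observed)) for r in rates) / len(rates)
    print(name, evidence)
```

Scoring a hypothesis here is a handful of likelihood evaluations rather than a full forward simulation per parameter setting, which is the flavor of the speedup Friston describes; the real machinery uses variational free energy over far richer models.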

[---]

Once the pandemic is over, will you be able to use your models to ask which country’s response was best?

That is already happening, as part of our attempts to understand the latent causes of the data. We’ve been comparing the UK and Germany to try to explain the comparatively low fatality rates in Germany. The answers are sometimes counterintuitive. For example, it looks as if the low German fatality rate is not due to their superior testing capacity, but rather to the fact that the average German is less likely to get infected and die than the average Brit. Why? There are various possible explanations, but one that looks increasingly likely is that Germany has more immunological “dark matter” – people who are impervious to infection, perhaps because they are geographically isolated or have some kind of natural resistance. This is like dark matter in the universe: we can’t see it, but we know it must be there to account for what we can see. Knowing it exists is useful for our preparations for any second wave, because it suggests that targeted testing of those at high risk of exposure to Covid-19 might be a better approach than non-selective testing of the whole population.

- Full interview here

Sunday, April 12, 2020

Why "AI" Has Been Useless In This Pandemic?

AI has been one of the biggest letdowns of this pandemic.

There was so much noise for over a decade, and when it came to complex systems, even the Googles of the world didn't do shit (a location-tracking app is NOT AI).

Bill and Melinda Gates's philanthropic venture has been phenomenal, while Microsoft's AI team has been, and still is, taking a nap (chatbots don't count).

Finally, someone said what needed to be said (and I do modeling and AI for a living).
Thank you, Cheryl Rofer - read her entire Twitter thread; I am posting it here too:

1. There are a hundred gazillion models out there. Few of their owners have bothered to compare their model to others to see what is working and what isn't.
2. By the standards of the models I've worked with, they are all simple - a few differential equations, a curve fit. I've worked with a hundred or more elementary reactions and then a mass- and heat-transfer model that incorporated those in. (Hint: we had to boil them down to six)
3. The only model I have seen that is at all transparent about its parameters is the Imperial College model. All the others I have seen are curve fits. They mumble about social distancing as a variable but never say which parameter it fits into.
4. All the curve fits are with different functions. At least, back in the day when chemical kinetics was curve fitting, we always used the same function.
5. It looks like amateur hour. Everyone's got their pet model, but they're not telling us what it is. And a further layer of amateurs on Twitter say solemn words like "assumptions" that they have nothing to back up.
6. I've spent a lot of time on the Imperial College and IHME models. I don't intend to do that for every single model out there. (The Imperial College Modeling Of The Coronavirus)
7. My assumption from here on in is that any model but the Imperial College model is crap until it's explained. With explicit connection of assumptions to parameters.
8. Tyler Cowen (and probably other economists who think they've got something to say) has never looked closely at the epidemiological models, judging from this. His questions are obnoxious. Ignore stuff like this - What does this economist think of epidemiologists?

Yes, this is a huge issue now - people cannot shut up. Most of them became overnight economists, virologists, epidemiologists, doctors, modelers, and experts in everything under the sun (so much for epistemological modesty).
The best minds of my generation are thinking about how to make people click ads. That sucks.

- Jeff Hammerbacher
This is what we get from AI when we trade a sense of the reality of complex systems for million-dollar salaries earned by making people click on links.

As J.D. Salinger's Holden would say - I could puke the next time someone gives an intellectual fart of a talk on "the great AI threat".

But to be fair:

It takes lots of data, domain knowledge, a range of diverse knowledge (foxes, not hedgehogs) and, more importantly, patience and time to train a generalized model. One cannot instantly "deep learn" one's way out of a pandemic. 

There are some great exceptions - FiveThirtyEight had a good post on Why It’s So Freaking Hard To Make A Good COVID-19 Model and, of course, Nassim Taleb.

There are some hidden treasures who have been modeling this for a long time - kudos to them.

Thursday, April 2, 2020

On The Statistical Differences Between Binary Forecasts And Real World Payoffs - Taleb

The fact that an "event" has some uncertainty around its magnitude carries some mathematical consequences. Some verbalistic papers still commit in 2019 the fallacy of binarizing an event in [0, ∞): A recent paper on calibration of beliefs, [14] says "...if a person claims that the United States is on the verge of an economic collapse or that a climate disaster is imminent..." An economic "collapse" or a climate "disaster" must not be expressed as an event in {0, 1} when in the real world it can take many values. For that, a characteristic scale is required. In fact, under fat tails, there is no "typical" collapse or disaster, owing to the absence of characteristic scale, hence verbal binary predictions or beliefs cannot be used as gauges.

The point can be made clear as follows. One cannot have a binary contract that adequately hedges someone against a "collapse", given that one cannot know in advance the size of the collapse or how much the face value of such contract needs to be. On the other hand, an insurance contract or option with continuous payoff would provide a satisfactory hedge. Another way to view it: reducing these events to verbalistic "collapse", "disaster" is equivalent to a health insurance payout of a lump sum if one is "very ill" –regardless of the nature and gravity of the illness – and 0 otherwise.

And it is highly flawed to separate payoff and probability in the integral of expected payoff. Some experiments of the type shown in Fig. I-5 ask agents what is their estimates of deaths from botulism or some such disease: agents are blamed for misunderstanding the probability. This is rather a problem with the experiment: people do not necessarily separate probabilities from payoffs.
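Taleb's point can be simulated directly. Below is a toy sketch of my own: draw losses from a fat-tailed distribution (Pareto, my choice for illustration) and compare a binary lump-sum "collapse" contract with a continuous excess-of-loss contract. The binary hedge caps out at its face value, so the tail stays with you.

```python
import random

random.seed(7)

def pareto_loss(alpha=1.5):
    """Fat-tailed loss: a Pareto(alpha) draw via inverse transform.
    With alpha < 2 there is no characteristic scale for the tail."""
    u = random.random()
    return (1 - u) ** (-1 / alpha)

losses = [pareto_loss() for _ in range(100_000)]
threshold = 10.0

# Binary contract: pays a fixed lump sum whenever a "collapse" occurs.
lump_sum = 20.0
binary_payout = [lump_sum if L > threshold else 0.0 for L in losses]

# Continuous contract: pays the actual excess loss, whatever its size.
continuous_payout = [max(L - threshold, 0.0) for L in losses]

# The binary hedge is capped; the shortfall in the worst case stays with you.
worst = max(losses)
print("worst loss:", round(worst, 1))
print("binary pays:", lump_sum,
      "-> uncovered:", round(worst - threshold - lump_sum, 1))
print("continuous pays:", round(worst - threshold, 1), "-> uncovered: 0.0")
```

With 100,000 Pareto(1.5) draws, the largest loss is virtually always hundreds of times the lump sum, which is exactly why "collapse: yes/no" is the wrong object to bet on.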

[---]

Misunderstanding of Hayek’s knowledge arguments: "Hayekian" arguments for the consolidation of beliefs via prices does not lead to prediction markets as discussed in such pieces as [25], or Sunstein’s [26]: prices exist in financial and commercial markets; prices are not binary bets. For Hayek [27] consolidation of knowledge is done via prices and arbitrageurs (his words)–and arbitrageurs trade products, services, and financial securities, not binary bets.

- Full paper by Taleb here


Friday, March 27, 2020

This Pandemic Is Not A Black Swan - Nassim Taleb

Furthermore, some people claim that the pandemic is a “Black Swan”, hence something unexpected so not planning for it is excusable. The book they commonly cite is The Black Swan (by one of us). Had they read that book, they would have known that such a global pandemic is explicitly presented there as a white swan: something that would eventually take place with great certainty. Such acute pandemic is unavoidable, the result of the structure of the modern world; and its economic consequences would be compounded because of the increased connectivity and overoptimization. As a matter of fact, the government of Singapore, whom we advised in the past, was prepared for such an eventuality with a precise plan since as early as 2010.

- Nassim Taleb

Saturday, November 23, 2019

Wisdom Of The Week

Through his career, Hilbert was interested in the ultimate limits of mathematical knowledge: what can humans know about mathematics, in principle, and what (if any) parts of mathematics are forever unknowable by humans? Roughly speaking, Hilbert’s 1928 problem asked whether there exists a general algorithm a mathematician can follow which would let them figure out whether any given mathematical statement is provable. Hilbert’s hoped-for algorithm would be a little like the paper-and-pencil algorithm for multiplying two numbers. Except instead of starting with two numbers, you’d start with a mathematical conjecture, and after going through the steps of the algorithm you’d know whether that conjecture was provable. The algorithm might be too time-consuming to use in practice, but if such an algorithm existed, then there would be a sense in which mathematics was knowable, at least in principle.

In 1928, the notion of an algorithm was pretty vague. Up to that point, algorithms were often carried out by human beings using paper and pencil, as in the multiplication algorithm just mentioned, or the long-division algorithm. Attacking Hilbert’s problem forced Turing to make precise exactly what was meant by an algorithm. To do this, Turing described what we now call a Turing machine: a single, universal programmable computing device that Turing argued could perform any algorithm whatsoever.

Today we’re used to the idea that computers can be programmed to do many different things. In Turing’s day, however, the idea of a universal programmable computer was remarkable. Turing was arguing that a single, fixed device could imitate any algorithmic process whatsoever, provided the right program was supplied. It was an amazing leap of imagination, and the foundation of modern computing.
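A Turing machine is simple enough to sketch in a few lines. The simulator and the bit-flipping "program" below are my own minimal illustration, not Turing's formalism verbatim: a tape, a head, and a rule table mapping (state, symbol) to (new symbol, move, new state).

```python
def run_turing_machine(tape, rules, state="start", blank="_", max_steps=1000):
    """Minimal Turing machine: the tape is a dict from position to symbol,
    and rules maps (state, symbol) -> (new_symbol, move, new_state)."""
    tape = dict(enumerate(tape))
    head, steps = 0, 0
    while state != "halt" and steps < max_steps:
        symbol = tape.get(head, blank)
        new_symbol, move, state = rules[(state, symbol)]
        tape[head] = new_symbol
        head += {"R": 1, "L": -1}[move]
        steps += 1
    lo, hi = min(tape), max(tape)
    return "".join(tape.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# A tiny program: flip every bit left to right, then halt at the blank.
flip = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine("1011", flip))  # -> 0100
```

The remarkable part is Turing's universality argument: a single fixed rule table can simulate any other rule table if you encode the latter on the tape, which is the leap from this toy to the modern programmable computer.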

[---]

There’s a wrinkle in this story. Deutsch is a physicist with a background in quantum mechanics. And in trying to answer his question, Deutsch observed that ordinary, everyday computers based on Turing’s model have a lot of trouble simulating quantum mechanical systemsResearchers such as Yu Manin and Richard Feynman had previously observed this, and as a result had speculated about computers based on quantum mechanics.. In particular, they seem to be extraordinarily slow and inefficient at doing such simulations. To answer his question affirmatively, Deutsch was forced to invent a new type of computing system, a quantum computer. Those quantum computers can do everything conventional computers can do, but are also capable of efficiently simulating quantum-mechanical processes. And so they are arguably a more natural computing model than conventional computers. If we ever meet aliens, my bet is that they’ll use quantum computers (or, perhaps, will have quantum computing brains). After all, it’s likely that aliens will be far more technologically advanced than current human civilization. And so they’ll use the computers natural for any technologically advanced society.

This essay explains how quantum computers work. It’s not a survey essay, or a popularization based on hand-wavy analogies. We’re going to dig down deep so you understand the details of quantum computing. Along the way, we’ll also learn the basic principles of quantum mechanics, since those are required to understand quantum computation.

Learning this material is challenging. Quantum computing and quantum mechanics are famously “hard” subjects, often presented as mysterious and forbidding. If this were a conventional essay, chances are that you’d rapidly forget the material. But the essay is also an experiment in the essay form. As I’ll explain in detail below, the essay incorporates new user interface ideas to help you remember what you read. That may sound surprising, but it uses a well-validated idea from cognitive science known as spaced-repetition testing. More detail on how it works below. The upshot is that anyone who is curious and determined can understand quantum computing deeply and for the long term.
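The spaced-repetition mechanism the essay relies on can be sketched as a toy Leitner-style scheduler. The box intervals, card names, and review history below are made-up illustrations; the essay's actual system is more sophisticated, but the core rule is the same: a correct answer pushes the next review further out, a miss resets it.

```python
def leitner_schedule(cards, review_days, intervals=(1, 2, 4, 8)):
    """Leitner-style spaced repetition: each card lives in a box; a correct
    answer promotes it (longer interval), a miss sends it back to box 0."""
    boxes = {card: 0 for card in cards}
    due = {card: 1 for card in cards}   # every card is due on day 1
    log = []
    for day, answers in review_days:    # answers: {card: answered_correctly}
        for card, correct in answers.items():
            if day < due[card]:
                continue                # not due yet, skip
            if correct:
                boxes[card] = min(boxes[card] + 1, len(intervals) - 1)
            else:
                boxes[card] = 0
            due[card] = day + intervals[boxes[card]]
            log.append((day, card, boxes[card], due[card]))
    return log

# Hypothetical review history for two illustrative cards.
history = [(1, {"qubit": True, "unitary": False}),
           (2, {"unitary": True}),
           (3, {"qubit": True})]
for entry in leitner_schedule(["qubit", "unitary"], history):
    print(entry)
```

Notice how "qubit", answered correctly twice, is pushed out to day 7, while the missed "unitary" gets rescheduled the very next day; exponentially growing intervals for remembered material are what make long-term retention cheap.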

- Quantum Computing for the Very Curious

Monday, December 17, 2018

What I've Been Reading

Ironically, the need for a theory of causation began to surface at the same time that statistics came into being. In fact, modern statistics hatched from the causal questions that Galton and Pearson asked about heredity and their ingenious attempts to answer them using cross-generational data. Unfortunately, they failed in this endeavor, and rather than pause to ask why, they declared those questions off limits and turned to developing a thriving, causality-free enterprise called statistics.

[---]

My emphasis on language also comes from a deep conviction that language shapes our thoughts. You cannot answer a question that you cannot ask, and you cannot ask a question that you have no words for.

- The Book of Why: The New Science of Cause and Effect by Judea Pearl.

Wow! One of the best books of the year. An easy read even for a non-technical person.

1. My research on machine learning has taught me that a causal learner must master at least three distinct levels of cognitive ability: seeing, doing and imagining.
2. Bayes's rule informs our reasoning in cases where ordinary intuition fails us or where emotion might lead us astray.
3. The Monty Hall Problem is a paradox because we "are accustomed to the reduction of data and ignoring the data-generating process" (R.A. Fisher, 1922).
4. To turn a noncausal Bayesian network into a causal model - or, more precisely, to make it capable of answering counterfactual queries - we need a dose-response relationship at each node.
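Point 2 - Bayes's rule rescuing us where intuition fails - is easiest to see with the classic diagnostic-test calculation (the numbers below are illustrative, not from Pearl's book):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes's rule: P(disease | positive test)."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# A test that is 90% sensitive with a 9% false-positive rate, for a
# disease carried by 1 in 100 people. Intuition says a positive result
# means "probably sick"; Bayes says the chance is only about 9%,
# because false positives from the healthy 99% swamp the true positives.
p = posterior(prior=0.01, sensitivity=0.90, false_positive_rate=0.09)
print(round(p, 3))  # -> 0.092
```

The base rate (the prior) is exactly the ingredient our intuition drops, which is why the answer feels so wrong until you do the arithmetic.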

Monday, April 2, 2018

What I've Been Reading

Skin in the Game: Hidden Asymmetries in Daily Life by Nassim Nicholas Taleb.

Brilliant :-)

1. The minority rule produces low variance outcomes.
2. Never compare a multiplicative, systemic, and fat-tailed risk to a non-multiplicative, idiosyncratic, and thin-tailed one.
3. Courage is when you sacrifice your own well-being for the sake of the survival of a layer higher than yours.
4. How much you truly "believe" in something can be manifested only through what you are willing to risk for it.
5. There is no love without sacrifice, no power without fairness, no facts without rigor, no statistics without logic, no teaching without experience, no complication without depth, no science without skepticism, and nothing without skin in the game.  
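Point 1's minority rule can be simulated in a few lines. This is a toy model of my own, not Taleb's: flexible members of a group accept either of two options, intransigent members accept only option B, and the group defers to the intransigent member. Once choices aggregate over groups, a tiny minority ends up dictating the outcome.

```python
import random

random.seed(42)

def minority_rule_outcome(group_size, minority_share, trials=2000):
    """Fraction of groups that end up choosing option B when a single
    intransigent member (who accepts only B) flips the whole group."""
    b_groups = 0
    for _ in range(trials):
        group = [random.random() < minority_share for _ in range(group_size)]
        if any(group):          # one inflexible member forces option B
            b_groups += 1
    return b_groups / trials

# A 4% intransigent minority: alone they are 4% of choices, but once
# decisions are made per household, per office, per airline...
for size in (1, 4, 10):
    print(size, round(minority_rule_outcome(size, 0.04), 2))
```

The asymmetry (flexible people don't care, inflexible people won't budge) does all the work, which is why the outcome barely varies once groups are large enough - the low-variance point from the book.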

A great review here as well.