Saturday, March 19, 2022

Wise Words On AI

I have skin in the game and I am "in" the field. Wise and timely piece from Gary Marcus: 

In time we will see that deep learning was only a tiny part of what we need to build if we’re ever going to get trustworthy AI.

[---]

When the stakes are higher, though, as in radiology or driverless cars, we need to be much more cautious about adopting deep learning. When a single error can cost a life, it’s just not good enough. Deep-learning systems are particularly problematic when it comes to “outliers” that differ substantially from the things on which they are trained.

[---]

As AI researchers Emily Bender, Timnit Gebru, and colleagues have put it, deep-learning-powered large language models are like “stochastic parrots,” repeating a lot, understanding little.

[---]

For at least four reasons, hybrid AI, not deep learning alone (nor symbols alone), seems the best way forward:

  • So much of the world’s knowledge, from recipes to history to technology is currently available mainly or only in symbolic form. Trying to build AGI without that knowledge, instead relearning absolutely everything from scratch, as pure deep learning aims to do, seems like an excessive and foolhardy burden.
  • Deep learning on its own continues to struggle even in domains as orderly as arithmetic. A hybrid system may have more power than either system on its own.
  • Symbols still far outstrip current neural networks in many fundamental aspects of computation. They are much better positioned to reason their way through complex scenarios, can do basic operations like arithmetic more systematically and reliably, and are better able to precisely represent relationships between parts and wholes (essential both in the interpretation of the 3-D world and the comprehension of human language). They are more robust and flexible in their capacity to represent and query large-scale databases. Symbols are also more conducive to formal verification techniques, which are critical for some aspects of safety and ubiquitous in the design of modern microprocessors. To abandon these virtues rather than leveraging them into some sort of hybrid architecture would make little sense.
  • Deep learning systems are black boxes; we can look at their inputs, and their outputs, but we have a lot of trouble peering inside. We don’t know exactly why they make the decisions they do, and often don’t know what to do about them (except to gather more data) if they come up with the wrong answers. This makes them inherently unwieldy and uninterpretable, and in many ways unsuited for “augmented cognition” in conjunction with humans. Hybrids that allow us to connect the learning prowess of deep learning, with the explicit, semantic richness of symbols, could be transformative.

[---]

For the first time in 40 years, I finally feel some optimism about AI. As cognitive scientists Chaz Firestone and Brian Scholl eloquently put it: “There is no one way the mind works, because the mind is not one thing. Instead, the mind has parts, and the different parts of the mind operate in different ways: Seeing a color works differently than planning a vacation, which works differently than understanding a sentence, moving a limb, remembering a fact, or feeling an emotion.” Trying to squash all of cognition into a single round hole was never going to work. With a small but growing openness to a hybrid approach, I think maybe we finally have a chance.

With all the challenges in ethics and computation, and the knowledge needed from fields like linguistics, psychology, anthropology, and neuroscience, and not just mathematics and computer science, it will take a village to raise an AI. We should never forget that the human brain is perhaps the most complicated system in the known universe; if we are to build something roughly its equal, open-hearted collaboration will be key.
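The hybrid argument above, especially the point that deep learning struggles with something as orderly as arithmetic, can be made concrete with a toy sketch. Everything here is my own hypothetical illustration, not anything from the article: a query that looks like arithmetic is routed to an exact, symbolic evaluator, and everything else falls through to a stand-in for a learned model.

```python
import ast
import operator
import re

# Exact handlers for the symbolic side of the hybrid.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_arithmetic(expr: str):
    """Evaluate +, -, *, / expressions exactly via the AST (no eval())."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(expr, mode="eval"))

def hybrid_answer(query: str, neural_model) -> str:
    """Route arithmetic to the symbolic evaluator; defer the rest to the model."""
    if re.fullmatch(r"[\d\s+\-*/().]+", query):
        return str(eval_arithmetic(query))
    return neural_model(query)  # stand-in for a learned component

# A lambda stands in for the neural component here.
answer = hybrid_answer("317 * 412 + 9", lambda q: "model output")
print(answer)  # prints "130613" - exact, every time, by construction
```

The point of the sketch is the division of labor: the symbolic path is guaranteed correct on its narrow domain, while the learned path handles everything the symbols cannot express.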

Amid all this hype, the reality of what's currently going on with AI is nothing but slow, steady progress, as always happens in any scientific discipline (sprinkled with ego clashes).

We should be prudent and use AI only where the stakes aren't high. We should force ourselves to keep a human in the loop in most AI applications. If nothing else, it will help improve the AI models.
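A minimal sketch of what a human-in-the-loop setup might look like, with all names hypothetical and not from any particular library: predictions below a confidence threshold are deferred to a person, and the corrections are kept as future training data, which is exactly how the loop feeds back into improving the model.

```python
from dataclasses import dataclass, field

@dataclass
class HumanInTheLoop:
    model: callable            # returns (label, confidence)
    threshold: float = 0.9     # below this, defer to a person
    corrections: list = field(default_factory=list)

    def predict(self, item, ask_human):
        label, confidence = self.model(item)
        if confidence >= self.threshold:
            return label
        human_label = ask_human(item)                 # a person decides
        self.corrections.append((item, human_label))  # future training data
        return human_label

# Toy model: confident only on items it has effectively memorized.
def toy_model(item):
    known = {"cat": ("animal", 0.99)}
    return known.get(item, ("unknown", 0.2))

loop = HumanInTheLoop(model=toy_model)
print(loop.predict("cat", ask_human=lambda x: "animal"))     # prints "animal"
print(loop.predict("quasar", ask_human=lambda x: "object"))  # prints "object"
print(loop.corrections)  # prints "[('quasar', 'object')]"
```

The threshold is the safety dial: set it high for high-stakes uses so the human sees nearly everything, and the corrections list becomes the dataset the next model version trains on.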

Remember - whatever we have as AI exists only because it is powered by data. If there is no relevant data, there will be no magical AI.

Last time I checked, most humans refuse to change their minds even when given the right data. So let's give AI models a break and use them to our advantage where it's applicable and safe.

All the problems with AI are inherited from the age-old human problem of longing for panaceas and magic. As usual, the remedy is to self-reflect and grow up.


  
