Monday, May 25, 2015

Thought Vector + Deep Learning = Progress in NLP

Richard Socher, an artificial intelligence scientist at Stanford University, recently developed a program called NaSent that he taught to recognise human sentiment by training it on 12,000 sentences taken from the film review website Rotten Tomatoes.

Part of the initial motivation for developing “thought vectors” was to improve translation software, such as Google Translate, which currently uses dictionaries to translate individual words and searches through previously translated documents to find typical translations for phrases. Although these methods often provide the rough meaning, they are also prone to delivering nonsense and dubious grammar. Thought vectors, Hinton explained, work at a higher level by extracting something closer to actual meaning.
The technique works by ascribing each word a set of numbers (or vector) that define its position in a theoretical “meaning space” or cloud. A sentence can be looked at as a path between these words, which can in turn be distilled down to its own set of numbers, or thought vector.

The “thought” serves as the bridge between the two languages because it can be transferred into the French version of the meaning space and decoded back into a new path between words.
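A minimal sketch of that bridge, using hypothetical toy vectors and a simple word-vector average as the “thought” (real systems use recurrent networks trained on huge corpora; every word, vector, and sentence below is an invented illustration):

```python
import numpy as np

# Hypothetical toy word vectors in a shared English/French meaning space.
english = {"the": [0.1, 0.0], "cat": [0.9, 0.2], "sleeps": [0.2, 0.8]}

# Hypothetical pre-encoded thought vectors for candidate French sentences.
french_sentences = {
    "le chat dort":   [0.4, 0.3],
    "le chien court": [0.7, 0.6],
}

def encode(sentence, vocab):
    """Distil a sentence's path through meaning space into one thought
    vector -- here just the mean of its word vectors."""
    return np.mean([vocab[w] for w in sentence.split()], axis=0)

def decode(thought, candidates):
    """Decode a thought vector into the nearest French sentence."""
    return min(candidates,
               key=lambda s: np.linalg.norm(thought - np.array(candidates[s])))

thought = encode("the cat sleeps", english)
print(decode(thought, french_sentences))  # -> "le chat dort"
```

Nearest-neighbour decoding only works because the two spaces have been trained into alignment — which is exactly what the deep-learning step described next provides.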

The key is working out which numbers to assign each word in a language – this is where deep learning comes in. Initially the positions of words within each cloud are ordered at random and the translation algorithm begins training on a dataset of translated sentences.

At first the translations it produces are nonsense, but a feedback loop provides an error signal that allows the position of each word to be refined until eventually the positions of words in the cloud capture the way humans use them – effectively a map of their meanings.
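That feedback loop can be caricatured in a few lines — a hypothetical toy setup in which an error signal repeatedly nudges randomly initialised word positions until translation pairs coincide in the meaning space (real systems instead backpropagate a translation loss through a full network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Translation pairs gleaned from a tiny, hypothetical parallel corpus.
pairs = [("cat", "chat"), ("dog", "chien"), ("sleeps", "dort")]

# Word positions start out random, as the article describes.
en = {w: rng.normal(size=2) for w, _ in pairs}
fr = {w: rng.normal(size=2) for _, w in pairs}

lr = 0.1
for step in range(200):
    for e, f in pairs:
        error = en[e] - fr[f]   # error signal: the pair doesn't line up yet
        en[e] -= lr * error     # nudge both words toward each other
        fr[f] += lr * error

# After training, each pair occupies (nearly) the same point in the space.
print(np.linalg.norm(en["cat"] - fr["chat"]))  # close to 0
```

Each pass shrinks the gap between a pair by a constant factor, so the positions converge geometrically — a stand-in for the gradual refinement the paragraph above describes.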

Hinton said that the idea that language can be deconstructed with almost mathematical precision is surprising, but true. “If you take the vector for Paris and subtract the vector for France and add Italy, you get Rome,” he said. “It’s quite remarkable.”
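Hinton’s example can be reproduced with toy vectors deliberately constructed so that capitals sit at a fixed offset from their countries — trained word2vec/GloVe embeddings learn a similar offset from data, but everything below is a hypothetical illustration:

```python
import numpy as np

# Hypothetical toy embeddings: capital = country + a fixed offset [2, 2].
vecs = {
    "France":  np.array([1.0, 0.0]),
    "Italy":   np.array([0.0, 1.0]),
    "Germany": np.array([-1.0, 0.0]),
    "Paris":   np.array([3.0, 2.0]),
    "Rome":    np.array([2.0, 3.0]),
    "Berlin":  np.array([1.0, 2.0]),
}

def analogy(a, b, c):
    """Solve a - b + c = ? by nearest neighbour, excluding the inputs."""
    target = vecs[a] - vecs[b] + vecs[c]
    candidates = [w for w in vecs if w not in (a, b, c)]
    return min(candidates, key=lambda w: np.linalg.norm(target - vecs[w]))

print(analogy("Paris", "France", "Italy"))  # -> "Rome"
```

The arithmetic works because the country-to-capital relationship is encoded as a consistent direction in the space, so subtracting France removes “country-ness of France” and adding Italy re-applies it to Italy.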


- More Here

Quote of the Day

The key has been to adapt the Silicon Valley start-up culture to industries that had been insulated from disruptive interlopers. Musk set outrageous goals and squeezed unimaginable performance out of his staff. With little money to play with, his companies relied on moving fast and making do. When a SpaceX engineer tells him a supplier has quoted a price of $120,000 for a rocket part, for instance, Musk laughs and says it is no more complicated than a garage-door opener: the engineer eventually finds a way to make the part for $3,900.

Review of the new book, Elon Musk: How the Billionaire CEO of SpaceX and Tesla is Shaping Our Future by Ashlee Vance

Sunday, May 24, 2015

Chimps Have Feelings and Thoughts. They Should Also Have Rights

Thank you, Steven Wise. I see you include only chimps for now, since our fellow human apes are too busy thinking about themselves. We all know every species on this planet has feelings and thoughts, even if they cannot be anthropomorphized.

So for centuries, there's been a great legal wall that separates legal things from legal persons. On one hand, legal things are invisible to judges. They don't count in law. They don't have any legal rights. They don't have the capacity for legal rights. They are the slaves. On the other side of that legal wall are the legal persons. Legal persons are very visible to judges. They count in law. They may have many rights. They have the capacity for an infinite number of rights. And they're the masters. Right now, all nonhuman animals are legal things. All human beings are legal persons.

But being human has never been, and is not today, synonymous with being a legal person. Humans and legal persons are not synonymous. On the one side, there have been many human beings over the centuries who have been legal things. Slaves were legal things. Women and children were sometimes legal things. Indeed, a great deal of the civil rights struggle over the last centuries has been to punch a hole through that wall and begin to feed these human things through the wall and have them become legal persons.

But alas, that hole has closed up. Now, on the other side are legal persons, but they've never been limited only to human beings. There are, for example, many legal persons who are not even alive. In the United States, we're aware of the fact that corporations are legal persons. In pre-independence India, a court held that a Hindu idol was a legal person, that a mosque was a legal person. In 2000, the Indian Supreme Court held that the holy books of the Sikh religion were a legal person, and in 2012, just recently, there was a treaty between the indigenous peoples of New Zealand and the crown, in which it was agreed that a river was a legal person who owned its own riverbed.




Quote of the Day

Anger is an acid that can do more harm to the vessel in which it is stored than to anything on which it is poured.

- Seneca

Saturday, May 23, 2015

Wisdom Of The Week

Stephen S. Hall's 2010 book Wisdom: From Philosophy to Neuroscience is one of my favorites. I haven't re-read it yet, but I came across this good summation of what Stephen calls the “Eight Neural Pillars of Wisdom”:
  • Emotional Regulation: the art of coping.
  • Knowing What’s Important: the neural mechanism of establishing value and making a judgement.
  • Moral Reasoning: the biology of judging right from wrong.
  • Compassion: the biology of loving-kindness and empathy.
  • Humility: the gift of perspective.
  • Altruism: social justice, fairness, and the wisdom of punishment.
  • Patience: temptation, delayed gratification and the biology of learning to wait for larger rewards.
  • Dealing with Uncertainty: change, ‘meta-wisdom’ and the vulcanization of the human brain. 
And one of my favorite quotes from the book is by Adam Smith who as usual gives us a great insight into human nature:

In a lovely evocation of that timeless fork in the road between material and spiritual well-being, he spoke of two different roads - one of "proud ambition and ostentatious avidity," the other of "humble modesty and equitable justice" - that await our choice.

Two different models, two different pictures, are held out to us, according to which we may fashion our own character and behavior; the one more gaudy and glittering in its coloring; the other more correct and exquisitely beautiful in its outline: the one forcing itself upon the notice of every wandering eye; the other attracting the attention of scarce any body but the most studious and careful observer.

They are the wise and the virtuous chiefly, a select, though, I am afraid, but a small party, who are the real and steady admirers of wisdom and virtue. The great mob of mankind are the admirers and worshipers, and, what may seem more extraordinary, most frequently the disinterested admirers and worshipers, of wealth and greatness.



What Is Intelligence?

That was the discussion started in my class, and to get the conversation going, the following answers from some great AI researchers were shared.

Dr. Boyang 'Albert' Li provided this link to Pei Wang's paper.  Dr. Li summarized Wang's position in the paper with this quote from the paper:

"Intelligence is the capacity of a system to adapt to its environment while operating with insufficient knowledge and resources."

I also asked Dr. Li to give a personal definition, which he kindly did.  I note that he adds that, for intelligence in general, Wang's remains the best definition.  Here was Li's personal definition:

"My personal definition is probably more like solving difficult problems, or do what humans can do, since these are immediate goals for computational narrative intelligence."

Dr. Michael Helms provided this definition:

"Intelligence is that which we ascribe to a system, for an observed set of system actions under sufficiently varied stochastic conditions, and within a sufficiently complex domain, such that those actions can be interpreted to facilitate the optimization of a self-consistent utility function, taking into account limitations of the systems knowledge and its capability to act, for some definition of sufficiently varied and sufficiently complex that are features of the observer, and not the observed."

Dr. Ashok Goel gave this very thoughtful response:

"What is intelligence? Doesn't that depend on what the meaning of "is" is? Seriously, the question assumes, as do most definitions of intelligence, that there *is* some thing, one specific thing, called intelligence. But what if, like life, like love, intelligence is no one thing with clear boundaries? What if it is just a word we use to denote a complex and intricate ensemble with unclear boundaries that we do not yet understand? What if once we came to understand intelligence, the question would not make much sense anymore?"

Dr. Swaroop Vattam gave this answer:

"Asking AI researchers what intelligence is is akin to asking a life-sciences researcher what life is. Biology has made a lot of progress without pinning down the definition of life. I think it's the same way for us folks."

******************

I had never thought about this question very carefully, so I tried to define intelligence for the first time. Here's my response:

In the spirit of this exceptionally interesting class, I would like to add some basic constraints (or representations) for fun before attempting to construct the meaning of intelligence (i.e., if there is a meaning):
  • I would shed the anthropomorphic view and include every species on this planet for evaluation. The finding "Squirrels and chipmunks eavesdrop on birds, sometimes adding their own thoughts," which came out this week, is a prime example of the limits of our knowledge and calls for epistemological modesty.
  • Intelligence and wisdom are probably different. The line between them is blurry and they sometimes overlap, but outside that blurriness they are distinct entities.
  • Something that clearly falls within the realm of intelligence shouldn't automatically exclude itself just because we decided, linguistically, to add "Artificial" in front of it to make ourselves comfortable. This could include distributed cognition, similar to ants and the eusocial theory of evolution.
  • Finally, I wouldn't group intelligence at the species level but at the individual level within each species. I think grouping at the species level is one of the crucial reasons we tend to anthropomorphize; Asperger's syndrome is one of the clear indicators of this phenomenon. Under this criterion, the capacity for future intelligence is a completely different domain and should not be confused with an entity that has already displayed its intelligence.
  • To stress the above point, mastering existing knowledge is not the same as creating new knowledge. As Peter Thiel points out in Zero to One - "Doing what we already know how to do takes the world from 1 to n, adding more of something familiar. But every time we create something new, we go from 0 to 1. The act of creation is "singular", as is the moment of creation, and the result is something fresh and strange."

Drawing from the above highly simplified representations and the ideas summarized eloquently in E. O. Wilson's Consilience: The Unity of Knowledge, the closest definition of intelligence I can think of is:
An entity can be considered intelligent if it has used its existing knowledge repository to create or discover knowledge that was previously unknown to it, and does so in a stochastic environment.



Quote of the Day

Beware the barrenness of a busy life.

- Socrates

Friday, May 22, 2015

Quote of the Day

And we had damned well better not forget it, in a fog of faux remorseful “Knowing what we now know...” sanitized history.

- James Fallows, The Right and Wrong Questions About the Iraq War

Thursday, May 21, 2015

People Are Blaming Algorithms For The Cruelty of Bureaucracy

Algorithms are impersonal, biased, emotionless, and opaque because bureaucracy and power are impersonal, emotionless, and opaque and often characterized by bias, groupthink, and automatic obedience to procedure. In analyzing algorithms, critics merely rediscover one of the oldest and most fundamental issues in social science: the pathology of bureaucracy and structural authority and power. Algorithms are not products of a “black box”; rather, they are the computational realization and machine representation of the “iron cage” of bureaucracy. As sociologist Max Weber noted a century ago, bureaucratic rationality consists of hierarchical authority, impersonal decision-making, codified rules of conduct, promotion based on achievement, specialized division of labor, and efficiency. Any kind of rational, cost/benefit thinking, however, presupposes a goal or objective. That goal may not always be in the interests of the individuals that a bureaucracy governs. Moreover, institutions may default to standard operating procedures even when doing so has counterproductive, harmful, and even absurd implications.

Today’s automation and data-driven programs are merely the latest and greatest of a long movement toward the automation, optimization, and control of social life—and this story begins not with a revolution in computing but a revolution in human understanding of social relations and governance. Sometime around the mid-19th century, scholars believe, the basic technology of social relations and governance shifted dramatically. Fueled by economic and philosophical thinking and sociological changes, some argue, the notion of society was upended and replaced with notions of utility, preference, and collective welfare. The notion of collective society was replaced by the image of an autonomous and self-interested individual who made rational choices to attain the objectively best outcome for him- or herself. Similarly, political governance became dominated by attempts to achieve social and political control through quantification, measurement, and rational bureaucratic processes. Such “scientific” measures would allow authorities to treat society as a machine that they could program and manipulate to achieve desired objectives. This is not a criticism as much as a simple historical and sociological observation. Such a shift also explains, after all, the origin, nature, and folkways of modern bureaucracy and how governmental and corporate metaphorical machines became slowly infiltrated by real machines.

Modern bureaucracy, as a form of power, was originally justified in terms of scientific and enlightened governance of society and optimization and control of corporate business processes. Another feature of bureaucratic and technocratic thinking was the assumption of paternalism. Whether it was early 20th-century thinking about the madness of crowds or trendy modern behavioral psychology influenced policy ideas about the importance of “nudges,” reformers believed efficient procedures and mechanisms could be designed to help otherwise hapless individuals make better decisions.


- More Here



Quote of the Day

A very disturbing feature of overconfidence is that it often appears to be poorly associated with knowledge - that is, the more ignorant the individual, the more confident he or she might be.

- Robert Trivers, Deceit and Self-Deception: Fooling Yourself the Better to Fool Others