This is probably the future of Information Technology (and yeah, of IT jobs as well). Here:
Back in the 1960s, AI systems started to show great promise for replicating key aspects of the human mind. Scientists began by using mathematical logic both to represent knowledge about the real world and to reason about it, but logic soon turned out to be an AI straitjacket. While it could capture and manipulate knowledge much as the human mind does, it was inherently unsuited to dealing with uncertainty.
Yet after spending so long shrouded in a self-inflicted winter of discontent, the much-maligned field of AI is in bloom again. And machine-learning researcher Pedro Domingos is not the only one with fresh confidence in it. Researchers hoping to detect illness in babies, translate spoken words into text and even sniff out rogue nuclear explosions are proving that sophisticated computer systems can exhibit the nascent abilities that sparked interest in AI in the first place: the ability to reason like humans, even in a noisy and chaotic world.
Lying close to the heart of AI's revival is a technique called probabilistic programming, which combines the logical underpinnings of the old AI with the power of statistics and probability. "It's a natural unification of two of the most powerful theories that have been developed to understand the world and reason about it," says Stuart Russell, a pioneer of modern AI at the University of California, Berkeley. This powerful combination is finally starting to disperse the fog of the long AI winter. "It's definitely spring," says cognitive scientist Josh Tenenbaum at the Massachusetts Institute of Technology.
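To make that abstract idea concrete, here is a minimal sketch of what a probabilistic program can look like: ordinary code whose branching encodes logical structure and whose random choices encode uncertainty, with inference done by crude rejection sampling. The scenario and every number in it are invented for illustration, not taken from the article:

```python
import random

# A toy probabilistic program (hypothetical example): we model a
# burglar alarm with two possible causes and infer the cause from
# an observation by rejection sampling.

def model():
    burglary = random.random() < 0.01      # prior: burglaries are rare
    earthquake = random.random() < 0.02    # prior: so are earthquakes
    # The alarm's probability depends on which causes are present -
    # this if/elif ladder is the "logical" part of the model.
    if burglary and earthquake:
        p_alarm = 0.95
    elif burglary:
        p_alarm = 0.90
    elif earthquake:
        p_alarm = 0.30
    else:
        p_alarm = 0.001
    alarm = random.random() < p_alarm
    return burglary, alarm

# Inference by rejection sampling: run the program many times, keep
# only the runs where the alarm rang, and ask how often a burglary
# was behind it in those runs.
kept = [b for b, a in (model() for _ in range(200_000)) if a]
print("P(burglary | alarm) ~", sum(kept) / len(kept))
```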
The first glimmer of spring came with the arrival of neural networks in the late 1980s. The idea was stunning in its simplicity. Developments in neuroscience had led to simple models of neurons. Coupled with advances in algorithms, this let researchers build artificial neural networks (ANNs) that could learn, ostensibly like a real brain. Invigorated computer scientists began to dream of ANNs with billions or trillions of neurons. Yet it soon became clear that our models of neurons were too simplistic, and researchers couldn't tell which of a neuron's properties were important, let alone model them.
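For a flavour of just how simple those neuron models were, here is a single artificial neuron (a perceptron) learning the logical AND function by error-driven weight updates. This is a toy sketch for illustration, not the backpropagation-trained networks of the era:

```python
# A minimal artificial neuron: weighted inputs, a threshold,
# and a simple error-driven weight update.

def step(x):
    return 1 if x > 0 else 0

# Learn the logical AND function from its truth table.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]
bias = 0.0
lr = 0.1

for _ in range(20):  # a few passes over the data suffice here
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + bias)
        err = target - out
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        bias += lr * err

print(w, bias)  # weights and bias that implement AND
```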
Neural networks, however, helped lay some of the foundations for a new AI. Some researchers working on ANNs eventually realised that these networks could be thought of as representing the world in terms of statistics and probability. Rather than talking about synapses and spikes, they spoke of parameterisation and random variables. "It now sounded like a big probabilistic model instead of a big brain," says Tenenbaum.
The key is a Bayesian network, a model made up of various random variables, each with a probability distribution that depends on some of the other variables. Tweak the value of one, and you alter the probability distributions of the others. Given the values of one or more variables, the Bayesian network lets you infer the probability distributions of the rest - in other words, their likely values. Say these variables represent symptoms, diseases and test results. Given a test result (evidence of a viral infection) and symptoms (fever and cough), one can assign probabilities to the likely underlying cause (flu, very likely; pneumonia, unlikely).
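Here is a minimal sketch of that diagnosis example as a tiny Bayesian network, with inference done by brute-force enumeration over the unobserved variables. All of the probability values are invented for illustration:

```python
from itertools import product

# Priors over the two diseases (made-up numbers).
P_FLU = 0.10
P_PNEUMONIA = 0.01

def p_fever(flu, pneu):    # P(fever | diseases)
    return 0.9 if (flu or pneu) else 0.05

def p_cough(flu, pneu):    # P(cough | diseases)
    return 0.8 if (flu or pneu) else 0.1

def p_viral_test(flu):     # P(positive viral test | flu)
    return 0.95 if flu else 0.05

def joint(flu, pneu, fever, cough, test):
    # Joint probability = product of each variable's conditional.
    p = (P_FLU if flu else 1 - P_FLU)
    p *= P_PNEUMONIA if pneu else 1 - P_PNEUMONIA
    p *= p_fever(flu, pneu) if fever else 1 - p_fever(flu, pneu)
    p *= p_cough(flu, pneu) if cough else 1 - p_cough(flu, pneu)
    p *= p_viral_test(flu) if test else 1 - p_viral_test(flu)
    return p

# Observed: positive viral test, fever and cough. Sum out the unknowns.
evidence = dict(fever=True, cough=True, test=True)
den = sum(joint(flu, pneu, **evidence)
          for flu, pneu in product((True, False), repeat=2))
p_flu = sum(joint(True, pneu, **evidence) for pneu in (True, False)) / den
p_pneu = sum(joint(flu, True, **evidence) for flu in (True, False)) / den
print("P(flu | evidence) =", p_flu)            # very likely
print("P(pneumonia | evidence) =", p_pneu)     # unlikely
```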
By the mid-1990s, researchers including Russell began to develop algorithms for Bayesian networks that could exploit prior knowledge and learn from existing data. In much the same way as human learning builds strongly on prior understanding, these new algorithms could learn far more complex and accurate models from far less data. This was a huge step up from ANNs, which did not allow for prior knowledge; they had to learn from scratch for each new problem.
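What building on prior understanding means can be seen in miniature for a single parameter of such a network. In this hedged sketch, a Beta prior encodes a belief about P(fever | flu) before any patients are seen, and a handful of observations update it; all the numbers are invented:

```python
# Bayesian estimation of one conditional probability, P(fever | flu),
# using a Beta prior updated by observed counts.

# Prior belief: fever given flu is common (as if we had already seen
# roughly 8 feverish patients out of 10 with flu).
alpha, beta = 8.0, 2.0

# Tiny data set: fever outcomes for four patients known to have flu.
observations = [True, True, False, True]

fevers = sum(observations)
posterior_mean = (alpha + fevers) / (alpha + beta + len(observations))
print("P(fever | flu) estimate:", posterior_mean)  # 11/14, about 0.79

# A from-scratch estimator with no prior would say 3/4 = 0.75 from the
# same data and would swing wildly with each new patient; the prior
# keeps the estimate stable until enough data accumulates.
```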