A synthesis of some 70 published descriptions of intelligence yields a definition, stressing fitness, that applies to all organisms, including plants. Barbara McClintock, a plant biologist, proposed the notion of the 'thoughtful cell' in her Nobel Prize address. The systems structure necessary for a thoughtful cell is revealed by comparing the interactome with the connectome. The plant root cap, a group of some 200 cells that act holistically in responding to numerous signals, likely possesses a similar systems structure, agreeing with Darwin's description of the root tip acting like the brain of a lower organism. Intelligent behavior requires assessing different choices and taking the beneficial one. Decisions are constantly required to optimize the plant phenotype to a dynamic environment, and the cambium is the assessing tissue, diverting resources toward or away from different shoot and root branches by manipulating vascular elements. Environmental awareness likely indicates consciousness. Spontaneity in plant behavior, the ability to count to five, and error correction indicate intention. Volatile organic compounds are used as signals in plant interactions; being complex in composition, they may be the equivalent of language, accounting for self and alien recognition by individual plants. Game theory describes competitive interactions, and interactive, intelligent outcomes emerge when various games are applied to interactions between plants themselves and between plants and microbes. Behavior that profits from experience, another simple definition of intelligence, requires both learning and memory and is indicated in priming against herbivory, disease, and abiotic stress.
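The game-theoretic framing above can be made concrete with a minimal sketch of a two-plant root-competition game. The strategy names and payoff numbers here are hypothetical illustrations (not from the source), loosely in the spirit of "tragedy of the commons" analyses of root proliferation:

```python
# Hypothetical seed-output payoffs for two neighboring plants that each
# choose to restrain root growth or proliferate roots into shared soil.
# payoffs[(my_strategy, neighbor_strategy)] = (my_payoff, neighbor_payoff)
payoffs = {
    ("restrain", "restrain"): (5, 5),
    ("restrain", "proliferate"): (2, 6),
    ("proliferate", "restrain"): (6, 2),
    ("proliferate", "proliferate"): (3, 3),
}

def best_response(opponent_strategy):
    """Return the strategy that maximizes a plant's own payoff
    given the neighbor's strategy."""
    return max(("restrain", "proliferate"),
               key=lambda s: payoffs[(s, opponent_strategy)][0])

# Proliferating is the best response to either neighbor strategy, so mutual
# proliferation is the equilibrium even though mutual restraint pays more
# for both plants: a tragedy of the commons.
print(best_response("restrain"), best_response("proliferate"))
```

With these payoffs, each plant over-invests in roots regardless of what its neighbor does, which is the kind of non-obvious, "intelligent" outcome game models predict for plant–plant competition.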
The soil is the great connector of lives, the source and destination of all. It is the healer and restorer and resurrector, by which disease passes into health, age into youth, death into life. Without proper care for it we can have no community, because without proper care for it we can have no life.
If dogs could talk, Melody Jackson knows what they would say. Or at least, what she'd like them to say.
Jackson, an associate professor at the Georgia Institute of Technology, has developed technology that is giving dogs a voice, an ability she says is crucial for search and rescue, bomb detection and therapy dogs. The dogs wear vests equipped with sensors that can send either audible cues or text notifications to a smartphone.
"A bomb-sniffing dog has pretty much one alert that says, 'Hey, I found an explosive." But that dog knows what explosive is in there. ... They know if it's something stable like C4 or something unstable and dangerous like TATP that needs to be handled carefully," Jackson says. The problem is "they have no way to tell their handler." Jackson and her research team have also developed a medical alert vest that allows a dog to find a missing or trapped person, activate a sensor, and let that person know that help is on the way. This task could be instrumental during an earthquake or disaster rescue where a trapped or injured person is in need of assistance. This vest is being beta tested by a real service dog team in California, Jackson says.
Georgia Tech is also working to develop a vest that allows the handler to track the dog wearing it. When the dog finds its target, the dog activates a sensor that sends GPS coordinates back to the handler. The dog then tells the person in jeopardy that help is on the way, and the rescue canine does not have to leave the victim's side.
Deep learning works well across many applications when there is a lot of data, but what about one-shot or zero-shot learning, in which it is necessary to transfer and adapt knowledge from other domains to the current domain? What kinds of abstractions are formed by deep networks, and how can we reason with these abstractions and combine them? Networks can be fooled by adversarial inputs; how do we defend against these, and do they represent a fundamental flaw, or an irrelevant trick?
How do we deal with structure in a domain? We have recurrent networks to deal with time, and recursive networks to deal with nested structure, but it is too early to tell whether these are sufficient.
So I'm excited about Deep Learning because so many long-standing fields are excited about it. And I'm interested in understanding more because there are many remaining questions, and answers to these questions will not only tell us more about Deep Learning, but may help us understand Learning, Inference, and Representation in general.
Is there any place for software engineers that do not learn AI or Machine Learning in the next 10 years or does everyone have to learn it?
Machine Learning will be (or perhaps already is) such an important part of software engineering that everyone will have to understand where it fits in. But just like, say, database administration or user interface design, that doesn’t mean every engineer will have to be an expert in doing machine learning—it will be acceptable to work with others who are expert. But the more you know about machine learning, the better you will be at architecting a solution.
I also think that it will be important for machine learning experts and software engineers to come together to develop best practices for software development of machine learning systems. Currently we have a software testing regime where you define unit tests with calls to methods like assertTrue or assertEquals. We will need new testing processes that involve running experiments, analyzing the results, comparing today’s results to past results to look for drift, deciding if the drift is random variation or non-stationarity of the data, etc. This is a great area for software engineers and machine learning people to work together to build something new and better.
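The drift-checking step described above can be sketched as a simple statistical test. The function name and the historical accuracy numbers are hypothetical, and a real pipeline would use a proper hypothesis test over many runs; this is just a minimal z-score flavor of "is today's result random variation or drift?":

```python
import statistics

def check_for_drift(past_results, todays_result, z_threshold=3.0):
    """Flag today's result as potential drift if it falls more than
    z_threshold standard deviations from the historical mean."""
    mean = statistics.mean(past_results)
    stdev = statistics.stdev(past_results)
    if stdev == 0:
        # No historical variation at all: any deviation is suspicious.
        return todays_result != mean
    z_score = abs(todays_result - mean) / stdev
    return z_score > z_threshold

# Model accuracies from past nightly experiment runs (hypothetical numbers).
history = [0.91, 0.92, 0.90, 0.915, 0.905]

print(check_for_drift(history, 0.912))  # small deviation: likely random variation
print(check_for_drift(history, 0.78))   # large deviation: flag for investigation
```

A test like this would sit alongside conventional assertEquals-style unit tests, turning "compare today's results to past results" into an automated gate rather than a manual review.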
Here is what power does to just about every human being. It’s going to make you not pay attention to people as well as you used to pay attention to them. You may find yourself swearing at a colleague or telling them that their work is horseshit. You will be a little less careful in the language you use. You will be a little less thoughtful about how things look from their perspective. So just practise a little gratitude. Listen empathetically. It shouldn’t be that difficult.
The experiment the AI performed was the creation of a Bose-Einstein condensate, a hyper-cold gas, the process for which won three physicists the Nobel Prize in 2001. It involves using directed radiation to slow a group of atoms nearly to a standstill, producing all manner of interesting effects.
The Australian National University team cooled a bit of gas down to 1 microkelvin — that's a millionth of a degree above absolute zero — then handed over control to the AI. It then had to figure out how to apply its lasers and control other parameters to best cool the atoms down to a few hundred nanokelvin (a nanokelvin is a billionth of a degree above absolute zero), and over dozens of repetitions, it found more and more efficient ways to do so.
“It did things a person wouldn’t guess, such as changing one laser’s power up and down, and compensating with another,” said ANU’s Paul Wigley, co-lead researcher, in a news release. “I didn’t expect the machine could learn to do the experiment itself, from scratch, in under an hour. It may be able to come up with complicated ways humans haven’t thought of to get experiments colder and make measurements more precise.”
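The repeat-measure-adjust loop described above can be sketched in miniature. Everything here is a toy stand-in (the ANU team reportedly used more sophisticated machine-learning optimization): `measured_temperature` is a hypothetical function playing the role of one experimental run, and the search simply keeps whichever control settings produce a colder gas:

```python
import random

def measured_temperature(params):
    """Toy stand-in for one experimental run: returns the final gas
    temperature in nanokelvin for a pair of control settings.
    The (hypothetical) optimum is at settings (1.0, -0.5)."""
    p1, p2 = params
    return 200 + 500 * ((p1 - 1.0) ** 2 + (p2 + 0.5) ** 2)

def optimize(n_repetitions=50, seed=0):
    """Random-perturbation search: each repetition tries a variation of
    the best-known settings and keeps it if the gas comes out colder."""
    rng = random.Random(seed)
    best_params = (0.0, 0.0)
    best_temp = measured_temperature(best_params)
    for _ in range(n_repetitions):
        candidate = tuple(p + rng.gauss(0, 0.2) for p in best_params)
        temp = measured_temperature(candidate)
        if temp < best_temp:
            best_params, best_temp = candidate, temp
    return best_params, best_temp

params, temp = optimize()
print(f"best temperature after 50 runs: {temp:.1f} nK")
```

Because only improvements are kept, the found temperature can never exceed the starting one, and over dozens of repetitions the settings drift toward combinations a human might not have tried — including the compensating-parameter tricks Wigley describes.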