Thursday, July 20, 2017

Quote of the Day

Just because someone screwed up your past, it doesn’t mean you should give them permission to screw up your future.

- Zig Ziglar

Wednesday, July 19, 2017

Quote of the Day

The more precise you are, in general the more likely you are to be wrong.

- J.L. Austin

Tuesday, July 18, 2017

Quote of the Day

What man actually needs is not a tensionless state but rather the striving and struggling for some goal worthy of him.

- Viktor Frankl

Monday, July 17, 2017

Quote of the Day

I repeat this history because I don’t think moral obliviousness is built in a day. It takes generations to hammer ethical considerations out of a person’s mind and to replace them entirely with the ruthless logic of winning and losing; to take the normal human yearning to be good and replace it with a single-minded desire for material conquest; to take the normal human instinct for kindness and replace it with a law-of-the-jungle mentality.

It took a few generations of the House of Trump, in other words, to produce Donald Jr.

- David Brooks

Sunday, July 16, 2017

Quote of the Day

If you seek tranquility, do less. Or (more accurately) do what’s essential. Do less, better. Because most of what we do or say is not essential. If you can eliminate it, you’ll have more tranquility. But to eliminate the necessary actions, we need to eliminate unnecessary assumptions as well.

-  Marcus Aurelius, Meditations

Saturday, July 15, 2017

Wisdom Of The Week

Lesson 1 - Scope Matters

In discussing large-scale models, it’s difficult to avoid mentioning Jorge Luis Borges’s thought experiment about a 1:1 scale map from “On Exactitude in Science”:

“In time, . . . the cartographers guilds struck a map of the empire whose size was that of the empire, and which coincided point for point with it. The following generations, who were not so fond of the study of cartography as their forebears had been, saw that that vast map was useless, and not without some pitilessness was it, that they delivered it up to the inclemencies of sun and winters. In the deserts of the west, there are tattered ruins of that map, inhabited by animals and beggars, in all the land there is no other relic of the disciplines of geography.”

The point we take from Borges (and from Cheramie) is that no model can be a complete recapitulation of the real world. Instead, we bracket off parts of the world, model those parts, and use the insights those models give us to make interventions in the world. The Army Corps couldn’t model the entire Mississippi Basin drainage system either. They could only follow tributaries so far upstream before having to make generalized assumptions about the inputs to the system they modeled. They also couldn’t model all the outputs - their model doesn’t extend past Baton Rouge, let alone out into the Gulf of Mexico.

Similarly, the inputs for computer models are the outputs of other processes not captured by the model itself, and so the outputs of a model are only as valid as our understanding of the conditions that feed into it. If a minor creek jumps its bank upstream from the region modeled by the Mississippi Basin Model, it could have downstream effects that the model could never capture. If the conditions that produce the data points we use to feed our model change, so too can the validity of our model change. The success of projects like AlphaGo relies on modeling closed systems, e.g. the game of Go, which is why AI for games is (relatively) easy and applied, real-world AI is much harder. Machine learning is great at predicting the future when the future resembles the past, but it takes a lot more to predict the lay of the land when the ground shifts under our feet.
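To make the "ground shifts under our feet" point concrete, here's a toy sketch (entirely hypothetical numbers, nothing from the Army Corps' actual data): a linear model fit on "past" river behavior predicts that past perfectly, but once an unmodeled upstream condition changes, its error jumps even though the model itself hasn't changed at all.

```python
# Toy illustration of distribution shift: a model trained when the
# future resembled the past fails once conditions upstream change.
# All data here is invented for the example.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, done by hand."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

def mean_abs_error(xs, ys, a, b):
    """Average absolute gap between predictions and observations."""
    return sum(abs((a * x + b) - y) for x, y in zip(xs, ys)) / len(xs)

# "Past": stage rises 2 ft per inch of rain (hypothetical units).
past_x = [1.0, 2.0, 3.0, 4.0]
past_y = [2 * x for x in past_x]

a, b = fit_line(past_x, past_y)  # recovers a = 2, b = 0

# "Future": an unmodeled creek jumped its bank upstream; every
# reading now sits 5 ft above what the old relationship predicts.
future_x = [1.0, 2.0, 3.0, 4.0]
future_y = [2 * x + 5 for x in future_x]

err_past = mean_abs_error(past_x, past_y, a, b)      # ~0.0
err_future = mean_abs_error(future_x, future_y, a, b)  # ~5.0
```

The model's coefficients are exactly as good as they ever were; what changed is the world feeding data into it, which no amount of training on the old data could anticipate.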

Lesson 2 - Materials Matter

In building their Mississippi Basin Model, the Army Corps had to approximate the “real world” with the materials they had at their disposal. The engineers shaped and textured concrete, installed brass plugs, and accordion-folded sheet metal to approximate the incredibly complex effects of trees, sand, clay, roads, and crops on the speed, direction, and volume of water passing over the landscape in high-water conditions. They had to develop a measure of “frictional resistance” to translate between the real world of rocks and trees and the model world of concrete and metal. In computer modeling, the proxies we choose to represent the real world are just as important. We don’t necessarily know where people are, but we do have a great degree of confidence about where their GPS-enabled phones are. Another example comes from the world of computer vision, where attempts to produce soccer highlights from video struggled with following the ball (exciting moments are more likely the closer the ball is to the goal). Eventually, one team discovered that players tend to follow the ball, and players are easier to track, so the players became a useful proxy for addressing a harder question.

It is from these approximations of reality that we’re able to train the coefficients of our models, and so, importantly, the proxies we choose are the materials that shape how inputs relate to outputs. The models themselves have a material effect on outputs, too. If we assume that inputs are linear, and put them into a linear model, they will produce a linear output. If the relationship between inputs and outputs is not actually linear, then the model will not fit, in every sense of the word. The Mississippi Basin Model had to pick and choose what it could approximate, and reduce everything else to coefficients. Wetlands disappeared from the model, as did evaporation and siltation. The lesson Cheramie draws from this is that “it doesn’t matter how much territory the model covers if it relies on the amputation of inconvenient complexities to be manageable. The simulation becomes thin.” Computer models can manage a great deal more complexity than physical models, but the crucial complexity that data scientists should pay careful attention to is the material relationship between the reality we hope to model and the proxies we choose to represent that reality. Neural networks with external memory, that learn to remember and recollect, are attempts to build “context awareness” and long-term memory into neural networks. This can be understood as an attempt to model a larger chunk of the world, to bring in more materials without having to explicitly declare every variable worth considering.
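The "will not fit" point can be seen in a few lines (again with invented data): fit a straight line to a genuinely quadratic relationship and the residuals are systematic - the line overshoots at the ends and undershoots in the middle - no matter how the coefficients are trained. The material of the model constrains what it can say.

```python
# A linear model fit to a nonlinear relationship: the best possible
# line still misses in a systematic, curved pattern.
# Data is hypothetical, chosen so the arithmetic comes out exactly.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b, done by hand."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [x * x for x in xs]        # the true relationship is quadratic

a, b = fit_line(xs, ys)          # best line: y = 4x - 2

# Signed residuals: prediction minus observation at each point.
residuals = [(a * x + b) - y for x, y in zip(xs, ys)]
# -> [-2.0, 1.0, 2.0, 1.0, -2.0]: negative at the ends, positive in
# the middle. The errors aren't noise; they're the curvature the
# linear model is materially incapable of expressing.
```

No retraining fixes this; only changing the model's form (or the proxies feeding it) can.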

Learning from Real-World Models: The Mississippi Basin Model and Machine Learning

Quote of the Day

The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ but ‘That’s funny…’

- Isaac Asimov

Thursday, July 13, 2017

Quote of the Day

I have seen many storms in my life. Most storms have caught me by surprise, so I had to learn very quickly to look further and understand that I am not capable of controlling the weather, to exercise the art of patience and to respect the fury of nature.

- Paulo Coelho

Wednesday, July 12, 2017