Saturday, July 15, 2017

Wisdom Of The Week

Lesson 1 - Scope Matters

In discussing large-scale models, it’s difficult to avoid mentioning Jorge Luis Borges’s thought experiment about a 1:1 scale map from “On Exactitude in Science”:


“In time, . . . the cartographers guilds struck a map of the empire whose size was that of the empire, and which coincided point for point with it. The following generations, who were not so fond of the study of cartography as their forebears had been, saw that that vast map was useless, and not without some pitilessness was it, that they delivered it up to the inclemencies of sun and winters. In the deserts of the west, there are tattered ruins of that map, inhabited by animals and beggars; in all the land there is no other relic of the disciplines of geography.”

The point we take from Borges (and from Cheramie) is that no model can be a complete recapitulation of the real world. Instead, we bracket off parts of the world, model those parts, and use the insights those models give us to make interventions in the world. The Army Corps couldn’t model the entire Mississippi Basin drainage system either. They could only follow tributaries so far upstream before having to make generalized assumptions about the inputs to the system they modeled. Nor could they model all the outputs: their model doesn’t extend past Baton Rouge, let alone out into the Gulf of Mexico.

Similarly, the inputs for computer models are the outputs of other processes not captured by the model itself, and so the outputs of a model are only as valid as our understanding of the conditions that feed into it. If a minor creek jumps its bank upstream from the region modeled by the Mississippi Basin Model, it could have downstream effects that the model could never capture. If the conditions that produce the data points we feed our model change, the validity of our model can change with them. The success of projects like AlphaGo relies on modeling closed systems, e.g. the game of Go, which is why AI for games is (relatively) easy while applied, real-world AI is much harder. Machine learning is great at predicting the future when the future resembles the past, but it takes a lot more to predict the lay of the land when the ground shifts under our feet.
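As a toy illustration of that last point, here is a minimal sketch using scikit-learn. The data, the coefficients, and the “shift” are all hypothetical: a model is fit on the past, and then the relationship that generated the data quietly changes.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.RandomState(0)

# "The past": an input drives an output via a stable relationship.
X_past = rng.uniform(0, 10, size=(500, 1))
y_past = 3.0 * X_past[:, 0] + rng.normal(0, 1, 500)

model = LinearRegression().fit(X_past, y_past)

# "The future resembles the past": the model does fine.
X_new = rng.uniform(0, 10, size=(200, 1))
y_new = 3.0 * X_new[:, 0] + rng.normal(0, 1, 200)
print(mean_absolute_error(y_new, model.predict(X_new)))      # small error

# "The ground shifts": an upstream condition alters the relationship
# (the true coefficient moves from 3.0 to 5.0). The same model, scored
# on the new world, quietly becomes much worse.
y_shifted = 5.0 * X_new[:, 0] + rng.normal(0, 1, 200)
print(mean_absolute_error(y_shifted, model.predict(X_new)))  # large error
```

Nothing in the model object signals that anything has changed; only monitoring the conditions that feed the model reveals the drift.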


Lesson 2 - Materials Matter


In building their Mississippi Basin Model, the Army Corps had to approximate the “real world” with the materials they had at their disposal. The engineers shaped and textured concrete, installed brass plugs, and accordion-folded sheet metal to approximate the incredibly complex effects of trees, sand, clay, roads, and crops on the speed, direction, and volume of water passing over the landscape in high-water conditions. They had to develop a measure of “frictional resistance” to translate between the real world of rocks and trees and the model world of concrete and metal. In computer modeling, the proxies we choose to represent the real world are just as important. We don’t necessarily know where people are, but we do have a great degree of confidence about where their GPS-enabled phones are. Another example comes from the world of computer vision, where attempts to produce soccer highlights from video struggled with following the ball (exciting moments are more likely the closer the ball is to the goal). Eventually, one team discovered that players tend to follow the ball, and players are easier to track, so the players became a useful proxy for answering a harder question.
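A minimal sketch of that proxy idea, assuming we already have per-frame player coordinates from some tracker (the pitch dimensions, goal location, and scoring function here are all hypothetical): the centroid of the players stands in for the hard-to-track ball.

```python
import numpy as np

# Hypothetical goal location on a 105m x 68m pitch, in meters.
GOAL = np.array([105.0, 34.0])

def highlight_score(player_positions: np.ndarray) -> float:
    """Score a frame's 'excitement' using player positions as a proxy
    for the ball: players cluster around the ball, so the crowd's
    centroid approximates where the action is."""
    centroid = player_positions.mean(axis=0)        # proxy for ball position
    distance_to_goal = np.linalg.norm(centroid - GOAL)
    return 1.0 / (1.0 + distance_to_goal)           # nearer the goal, higher score

# One frame: (x, y) for each tracked player, pushed up near the goal.
frame = np.array([[95.0, 30.0], [98.0, 36.0], [101.0, 33.0], [92.0, 40.0]])
print(highlight_score(frame))
```

The design choice is the whole lesson: the quality of the highlights depends less on the scoring function than on how faithfully the proxy (player centroid) tracks the thing we actually care about (the ball).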


It is from these approximations of reality that we’re able to train the coefficients of our models, and so, importantly, the proxies we choose are the materials that shape how inputs relate to outputs. The models themselves have a material effect on outputs, too. If we assume that inputs are linear, and put them into a linear model, they will produce a linear output. If the relationship between inputs and outputs is not actually linear, then the model will not fit, in every sense of the word (see the sketch after this paragraph). The Mississippi Basin Model had to pick and choose what it could approximate, and reduce everything else to coefficients. Wetlands disappeared from the model, as did evaporation and siltation. The lesson Cheramie draws from this is that “it doesn’t matter how much territory the model covers if it relies on the amputation of inconvenient complexities to be manageable. The simulation becomes thin.” Computer models can manage a great deal more complexity than physical models, but the crucial complexity that data scientists should pay careful attention to is the material relationship between the reality we hope to model and the proxies we choose to represent that reality. Neural networks with external memory, which learn to remember and recollect, are attempts to build “context awareness” and long-term memory into neural networks. They can be understood as an attempt to model a larger chunk of the world, to bring in more materials without having to explicitly declare every variable worth considering.
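To make the “will not fit” point concrete, here is a small hypothetical sketch: the same linear model is fit to a world it matches and to one it doesn’t. The data-generating functions are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(1)
x = rng.uniform(-3, 3, size=(300, 1))

# Linear world: a linear model fits well.
y_linear = 2.0 * x[:, 0] + rng.normal(0, 0.5, 300)
print(LinearRegression().fit(x, y_linear).score(x, y_linear))        # R^2 near 1

# Nonlinear world: forcing the same linear form onto a quadratic
# relationship yields a model that does not fit, in every sense.
y_quadratic = x[:, 0] ** 2 + rng.normal(0, 0.5, 300)
print(LinearRegression().fit(x, y_quadratic).score(x, y_quadratic))  # R^2 near 0
```

The second fit isn’t “noisy”; it is structurally wrong, which is exactly the kind of inconvenient complexity that gets amputated when the model form is chosen for manageability rather than fidelity.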


Learning from Real-World Models: The Mississippi Basin Model and Machine Learning

