Sunday, February 12, 2017

What Makes A Predictive Model Interpretable?

First off, we should ask what exactly we want interpretability for. For some it is a trust issue - but there I prefer to rely on solid evaluation. Our evolved brains are easily tricked into ‘understanding’ a model that is in fact completely wrong. For others it is a matter of transparency - and for that, my sense is that it is often more useful to simply poke the model as a black box, varying the inputs to get a sense of its sensitivity. Say I ask the model what it would have predicted if I were 10 years older than I actually am. This is not exactly consistent with proper IID sampling theory, but in my opinion a fair test.
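To make the poking concrete, here is a minimal sketch on synthetic data: train any model, then compare its prediction on the original feature vector against a copy where a single input has been shifted. Everything here is hypothetical - the data is simulated and feature 0 merely stands in for “age”:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in data; column 0 plays the role of "age" (hypothetical).
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# One individual's feature vector, plus the counterfactual question:
# what would the model have predicted if this person were 10 years older?
x = X[0].copy()
x_older = x.copy()
x_older[0] += 10  # shift the "age" input; everything else held fixed

p_actual = model.predict_proba([x])[0, 1]
p_older = model.predict_proba([x_older])[0, 1]
print(f"predicted probability: {p_actual:.3f} -> {p_older:.3f} "
      f"(sensitivity to the shift: {p_older - p_actual:+.3f})")
```

The same probe works for any model that exposes a predict function - no access to its internal structure is needed.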

On the final holy grail of using a model to understand a domain, or even the causal relationships between variables: many approaches have been developed, but most impose far more stringent requirements on the data generation process and on the variability and observability of all relevant information than is usually realistic. What is interesting about the work on observational methods for estimating causal impact (e.g. TMLE and other double robust estimators) is that we really do not care much about the interpretability of the models involved. The causal interpretation is derived from the model predictions, not from the model structure. In fact, the robustness guarantees require at least one of the models to be consistent and unbiased - which pushes us toward flexible models, in other words, less interpretable ones ...
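To make the double robust point concrete, here is a minimal sketch of an AIPW estimator (one of the simpler double robust estimators, a cousin of TMLE) on simulated data; all names and numbers are illustrative. The causal estimate is built purely from the two models’ predictions, and it is consistent if *either* the outcome model or the propensity model is right - neither needs to be interpretable:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)

# Simulated observational data: covariates X, a binary treatment T whose
# assignment depends on X, and an outcome Y with a true effect of 2.0.
n = 5000
X = rng.normal(size=(n, 3))
T = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))  # confounded assignment
Y = 2.0 * T + X[:, 0] + rng.normal(size=n)

# Two nuisance models, both treated as black boxes.
outcome = GradientBoostingRegressor().fit(np.column_stack([X, T]), Y)
prop = GradientBoostingClassifier().fit(X, T)

mu1 = outcome.predict(np.column_stack([X, np.ones(n)]))   # E[Y | X, T=1]
mu0 = outcome.predict(np.column_stack([X, np.zeros(n)]))  # E[Y | X, T=0]
e = np.clip(prop.predict_proba(X)[:, 1], 0.01, 0.99)      # propensity P(T=1|X)

# AIPW estimate of the average treatment effect: consistent if either
# nuisance model is correct - the "double robustness" referred to above.
ate = np.mean(mu1 - mu0
              + T * (Y - mu1) / e
              - (1 - T) * (Y - mu0) / (1 - e))
print(f"AIPW estimate of the ATE: {ate:.2f}  (true effect: 2.0)")
```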


- Claudia Perlich
