“A deep-learning system doesn’t have any explanatory power,” as Hinton put it flatly. A black box cannot investigate cause. Indeed, he said, “the more powerful the deep-learning system becomes, the more opaque it can become. As more features are extracted, the diagnosis becomes increasingly accurate. Why these features were extracted out of millions of other features, however, remains an unanswerable question.” The algorithm can solve a case. It cannot build a case.
Yet in my own field, oncology, I couldn’t help noticing how often advances were made by skilled practitioners who were also curious and penetrating researchers. Indeed, for the past few decades, ambitious doctors have strived to be at once baseball players and physicists: they’ve tried to use diagnostic acumen to understand the pathophysiology of disease. Why does an asymmetrical border of a skin lesion predict a melanoma? Why do some melanomas regress spontaneously, and why do patches of white skin appear in some of these cases? As it happens, this observation, made by diagnosticians in the clinic, was eventually linked to the creation of some of the most potent immunological medicines used clinically today. (The whitening skin, it turned out, was the result of an immune reaction that was also turning against the melanoma.) The chain of discovery can begin in the clinic. If more and more clinical practice were relegated to increasingly opaque learning machines, if the daily, spontaneous intimacy between implicit and explicit forms of knowledge—knowing how, knowing that, knowing why—began to fade, is it possible that we’d get better at doing what we do but less able to reconceive what we ought to be doing, to think outside the algorithmic black box?
I spoke to David Bickers, the chair of dermatology at Columbia, about our automated future. “Believe me, I’ve tried to understand all the ramifications of Thrun’s paper,” he said. “I don’t understand the math behind it, but I do know that such algorithms might change the practice of dermatology. Will dermatologists be out of jobs? I don’t think so, but I think we have to think hard about how to integrate these programs into our practice. How will we pay for them? What are the legal liabilities if the machine makes the wrong prediction? And will it diminish our practice, or our self-image as diagnosticians, to rely on such algorithms? Instead of doctors, will we end up training a generation of technicians?”
He checked the time. A patient was waiting to see him, and he got up to leave. “I’ve spent my life as a diagnostician and a scientist,” he said. “I know how much a patient relies on my capacity to tell a malignant lesion from a benign one. I also know that medical knowledge emerges from diagnosis.”
The word “diagnosis,” he reminded me, comes from the Greek for “knowing apart.” Machine-learning algorithms will only become better at such knowing apart—at partitioning, at distinguishing moles from melanomas. But knowing, in all its dimensions, transcends those task-focussed algorithms. In the realm of medicine, perhaps the ultimate rewards come from knowing together.
- More Here