In a 2013 study published in the Journal of Neurophysiology, scientists measured the responses of primary visual cortex neurons in macaque monkeys (one of the few animals that see color the way we do) as they presented them with chromatic contrasts. They were trying to confirm the existing biological model of color perception by isolating the function of the putative neural channels and testing neuronal responses to each. If the researchers could see the way the brain mixes colors, they could once and for all find red. But it didn’t work. Instead, “analysis of neuronal contrast-response functions and signal-to-noise ratios yielded no evidence for a special set of ‘cardinal color directions,’ for which visual cortex neurons are particularly sensitive.” Not only did the researchers fail to find the red-green and blue-yellow channels; nearly half of the color-sensitive neurons also fired in response to brightness changes, suggesting that the tripartite channel model is too simple to be accurate.
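For reference, the tripartite model the study was testing is conventionally schematized as three channels built from the L, M, and S cone signals: a luminance channel (L + M), a red-green channel (L - M), and a blue-yellow channel (S - (L + M)). The Python sketch below shows only that textbook form; the weightings and example values are invented for illustration and are not the study's actual parameters.

```python
# A toy rendering of the textbook cone-opponent ("tripartite channel")
# model: three channels combined from the L, M, S cone signals.
# Weightings and inputs are illustrative only, not the study's parameters.

def opponent_channels(L, M, S):
    luminance = L + M          # brightness channel
    red_green = L - M          # "red-green" opponent channel
    blue_yellow = S - (L + M)  # "blue-yellow" opponent channel
    return luminance, red_green, blue_yellow

# A stimulus exciting L cones more than M registers as positive ("reddish")
# on the red-green channel and negative ("yellowish") on blue-yellow.
print(opponent_channels(L=0.75, M=0.5, S=0.25))  # (1.25, 0.25, -1.0)
```

What the study found is that cortical neurons did not sort neatly into these three channels: many responded along other color directions, and many of the color-responsive cells also responded to the brightness changes that this schematic assigns solely to the luminance channel.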
Scientific attempts to reduce colors to wavelengths have been equally unsuccessful. If it were that simple, then colors as we observe them should match up with what’s called the “surface spectral reflectance” (SSR), which can be measured with a digital receptor. But if you try to replicate color vision this way, you get far too many colors, and objects lack internal constancy because of the effects of lighting conditions. Where we see shadows, a computer sees a different color. And even with highly sophisticated algorithms that model illumination in 3-D space, researchers have been unable to replicate human levels of color constancy. NYU professor of psychology and neural science Laurence T. Maloney writes that the failure of these SSR models thus far suggests “there are cues present in real scenes that we do not understand.” People use color to judge scenes better than computers and cameras do, and scientists aren’t sure how.
- Review of the new book Outside Color: Perceptual Science and the Puzzle of Color in Philosophy by M. Chirimuuta
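The review’s claim that “where we see shadows, a computer sees a different color” follows from the basic physics: the light reaching a sensor is the product of the illuminant spectrum and the surface’s spectral reflectance, so the same SSR produces different raw readings under different lights. Here is a minimal Python sketch of that effect; the spectra, illuminants, and channel sensitivities are all invented toy data, not Maloney’s models or anything from the study.

```python
import numpy as np

# Toy demonstration that raw sensor readings lack color constancy:
# sensor signal = illuminant spectrum x surface spectral reflectance (SSR),
# integrated against each channel's sensitivity. All numbers are invented.

wavelengths = np.linspace(400, 700, 31)  # visible range, nm

# Toy SSR: a reddish surface reflecting long wavelengths more strongly.
reflectance = 0.2 + 0.7 / (1 + np.exp(-(wavelengths - 580) / 20))

# Two toy illuminants: flat "daylight" vs. a warmer, red-heavy light.
daylight = np.ones_like(wavelengths)
warm_light = np.linspace(0.4, 1.6, wavelengths.size)

# Crude Gaussian R, G, B channel sensitivities (stand-ins for a camera's).
def channel(center_nm, width_nm=40.0):
    return np.exp(-0.5 * ((wavelengths - center_nm) / width_nm) ** 2)

sensitivities = np.stack([channel(600), channel(550), channel(450)])  # R, G, B

def sensor_rgb(illuminant):
    signal = illuminant * reflectance  # light actually reaching the sensor
    return sensitivities @ signal      # integrate against each channel

# Same surface, two lights, two different "colors" for an uncorrected sensor:
print("under daylight:  ", np.round(sensor_rgb(daylight), 2))
print("under warm light:", np.round(sensor_rgb(warm_light), 2))
```

A human observer would report the same reddish surface in both cases; closing that gap between raw readings and stable perceived color is exactly the constancy problem the review says remains unsolved.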