Google engineers decided that instead of asking the software to
generate a specific image, they would simply feed it an arbitrary image
and then ask it what it saw. Here’s how Google describes the experiment:
We then pick a layer and ask the network to enhance whatever it detected. Each layer of the network deals with features at a different level of abstraction, so the complexity of features we generate depends on which layer we choose to enhance. For example, lower layers tend to produce strokes or simple ornament-like patterns, because those layers are sensitive to basic features such as edges and their orientations.

When an image was fed into the first layer, the result was something akin to a familiar photo filter.
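To make the quoted procedure concrete, here is a minimal sketch of one "enhance whatever you detected" step, assuming PyTorch and a pretrained torchvision GoogLeNet as a stand-in for the Inception network Google used; the chosen layer (`inception4c`), step size, and normalization are illustrative assumptions, not Google's actual settings.

```python
# Minimal sketch: one gradient-ascent step that amplifies whatever a chosen
# layer already responds to. Assumes PyTorch + torchvision; the layer name
# and step size are illustrative, not Google's original configuration.
import torch
import torchvision.models as models

model = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT).eval()

activations = {}

def save_activation(module, inputs, output):
    # Capture the chosen layer's response during the forward pass.
    activations["target"] = output

# A lower layer tends to amplify strokes and ornament-like patterns;
# a higher, object-level layer tends to amplify whole-object shapes.
model.inception4c.register_forward_hook(save_activation)

def enhance_step(img, step_size=0.05):
    """Nudge the image so the chosen layer activates more strongly on
    whatever it already detects ("Whatever you see, I want more of it")."""
    img = img.clone().detach().requires_grad_(True)
    model(img)                            # forward pass fills `activations`
    loss = activations["target"].norm()   # strength of the layer's response
    loss.backward()                       # gradient w.r.t. the image itself
    with torch.no_grad():
        img = img + step_size * img.grad / (img.grad.abs().mean() + 1e-8)
    return img.detach()
```

Swapping `inception4c` for an earlier layer biases the output toward edges and simple textures, while a later layer biases it toward whole objects, which is exactly the layer-dependence the quote describes.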
Then things got really weird. Google started feeding images into the highest layer—the one that can detect whole objects within an image—and asked the network, “Whatever you see there, I want more of it!”
This creates a feedback loop: if a cloud looks a little bit like a bird, the network will make it look more like a bird. This in turn will make the network recognize the bird even more strongly on the next pass, and so forth, until a highly detailed bird appears, seemingly out of nowhere.

The result is somewhat akin to looking into the subconscious of an AI. When an image of clouds was fed to a network trained to identify animals, this is what happened:
- More Here
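The feedback loop Google describes is straightforward to sketch in code: repeat the same gradient-ascent step many times, each pass working on the previous pass's output. The snippet below is a rough illustration that reuses the hypothetical `enhance_step` from the earlier sketch; the iteration count is arbitrary, and the multi-scale "octave" processing and jitter used in real DeepDream runs are omitted.

```python
# Sketch of the feedback loop: each pass feeds the previous output back in,
# so a faint "bird" in the clouds gets sharpened a little more every time.
# Reuses the hypothetical enhance_step defined above; octaves/jitter omitted.
from PIL import Image
import torchvision.transforms.functional as TF

def dream(path, iterations=40):
    img = TF.to_tensor(Image.open(path).convert("RGB"))
    img = TF.normalize(img, [0.485, 0.456, 0.406],
                            [0.229, 0.224, 0.225]).unsqueeze(0)
    for _ in range(iterations):    # each pass sees its own previous output...
        img = enhance_step(img)    # ...and amplifies what it recognizes in it
    return img
```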