Feeding The Google Monster
This has been an experiment in feeding screenshots of Google Image Search results BACK INTO Google Image Search. I ran several experiments and used the Google.com screenshot experiment as the example here.
Obviously, what you end up with is a grid of images, because that’s what Google Image Search returns: a grid of images. But Google is intelligent enough to start seeing patterns in that grid.
How about best guesses? From start to finish, I got:
Best guess for this image: new google logo png
Best guess for this image: screenshot
Best guess for this image: share buttons
Best guess for this image: software
Best guess for this image: diagram
Best guess for this image: uml class diagram example
Best guess for this image: facebook database schema
Best guess for this image: crm database schema
It’s interesting to see how quickly this process leads to “Best Guesses” of DIAGRAM or PATTERN. I even got Human Settlement at one point. But diagram and pattern come up a lot, and they sometimes end up as best guesses for “collection”.
Naturally, you are feeding the machine with grids of images. I often distort the images, add noise, apply blend modes with various search results, and feed the result back into Google Image Search to find what is “visually similar”. Note: the results above all came from what was “visually similar”. It turns out that Google Image Search is not that intelligent. It should have realized what I was doing and jumped ahead. It could easily have done this with deep learning, but it turns out deep learning is not involved in my Google Image Searches. At least it has no obvious visual effect on the grids of results.
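The feedback loop I was running by hand can be sketched in code. This is a minimal sketch, not a working client: Google offers no public reverse-image-search API, so `reverse_image_search` here is a hypothetical stand-in (in practice I screenshotted the results manually). Images are simplified to flat lists of grayscale pixel values; the noise and blend steps are real, runnable simplifications of the distortions described above.

```python
import random

def add_noise(pixels, amount=30, seed=0):
    """Distort an image (flat list of 0-255 grayscale pixels) with random noise."""
    rng = random.Random(seed)
    return [min(255, max(0, p + rng.randint(-amount, amount))) for p in pixels]

def blend(pixels_a, pixels_b):
    """A crude 50/50 'blend mode': average two same-sized pixel grids."""
    return [(a + b) // 2 for a, b in zip(pixels_a, pixels_b)]

def reverse_image_search(pixels):
    """HYPOTHETICAL stand-in for Google's reverse image search.

    There is no public API for this; a real run means uploading the
    screenshot and reading the 'best guess' and 'visually similar'
    grid off the results page by hand.
    """
    return {"best_guess": "diagram", "visually_similar": [pixels]}

def feedback_loop(seed_image, iterations=3):
    """Feed each round's 'visually similar' grid back into the search."""
    image, guesses = seed_image, []
    for i in range(iterations):
        result = reverse_image_search(image)
        guesses.append(result["best_guess"])
        similar = result["visually_similar"][0]
        # distort + blend before the next round, as described above
        image = add_noise(blend(image, similar), seed=i)
    return guesses
```

Each pass records the “best guess” and then mutates the input, which is roughly how the sequence of guesses above (logo, screenshot, diagram, schema…) accumulated over successive rounds.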