Google’s artificial neural network has been making remarkable headway lately. You’ve probably seen a surge of these thought-provoking images on social networking sites and wondered, “what the hell is going on here?” A neural network is a dense, complex idea: essentially a computer brain that can learn, recognize patterns, and make decisions in a surprisingly humanlike way.
It all stemmed from Google buying DeepMind, a British artificial intelligence startup, at the beginning of the year for $400 million. That work has since grown into a very powerful artificial neural network trained by Google, one that processes images in a dreamlike fashion.
Basically, the computer brain is able to modify an image and stitch together thousands and thousands of individual images as it sees fit. It’s like a dreamy mechanical piece of art. As a prime example, here’s the first ‘piece’ I saw for myself several months ago.
This is not a digital acid trip; it’s what a highly engaged computer brain with remarkable image recognition capabilities can create. The original image was a normal squirrel. Notice all of the intricate individual images that seem to melt your thoughts the deeper you look. It’s been described as “the image containing a thousand other images.” Lurking images of dogs, humanoid figures, and buildings emerge in a fashion resembling what you’d expect from Inception. A swirly, psychedelic-looking background is both captivating and endless.
This is the product of a computer brain: essentially a form of non-human artwork.
Another example comes from a basic photo of the sky with some lingering clouds.
The photo is about as straightforward as possible, with very limited color, form, and shape. But what a computer brain sees is much different: far more tangled, with layers of detailed images.
To further illustrate what the creative ‘mind’ can do, here are some more trippy images it has constructed. The way this ‘brain’ makes connections during image recognition is unusual and logical at the same time.
Google has posted a couple of research pieces on its neural network technology. One of the first articles published gives the following explanation:
“Artificial Neural Networks have spurred remarkable recent progress in image classification and speech recognition. But even though these are very useful tools based on well-known mathematical methods, we actually understand surprisingly little of why certain models work and others don’t…We train an artificial neural network by showing it millions of training examples and gradually adjusting the network parameters until it gives the classifications we want. The network typically consists of 10-30 stacked layers of artificial neurons. Each image is fed into the input layer, which then talks to the next layer, until eventually the “output” layer is reached. The network’s “answer” comes from this final output layer.”
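Google’s description of stacked layers can be sketched in a few lines of Python. To be clear, this is a toy illustration, not Google’s actual model: the layer sizes and random weights below are made-up placeholders, standing in for the 10-30 trained layers the quote describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy stack of dense layers standing in for the 10-30 layers described above.
# Sizes are arbitrary: a 64-pixel "image" in, 10 class scores out.
layer_sizes = [64, 32, 16, 10]
weights = [rng.normal(0, 0.1, (m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(image):
    """Feed the image into the input layer; each layer 'talks to' the next."""
    a = image
    for w in weights[:-1]:
        a = relu(a @ w)
    return softmax(a @ weights[-1])  # the final "output" layer: the network's answer

probs = forward(rng.random(64))  # 10 probabilities, one per class
```

Training, as the quote says, means gradually adjusting those `weights` arrays over millions of examples until the output layer gives the classifications you want; this sketch only shows the forward pass.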
Basically, the process begins by exposing the manufactured brain to millions of images. The ‘brain’ proceeds from there on its own and generates a unique, complex understanding. In one example, the network is given an image that is nothing but random noise. Despite the lack of any form or reason in the grainy, chaotic image, a deep level of interpretation takes place: patterns and identifiable images appear and circulate.
Images of bananas surface from the dense noisy image.
Other images and patterns surface from the noise.
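The noise-to-bananas process can be sketched with a toy example. The real system runs gradient ascent on a deep convolutional network; here, purely for illustration, a single hand-made linear “feature detector” stands in for one neuron, and we nudge a noise image to make that neuron fire more strongly, which gradually pulls the detector’s pattern out of the noise.

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed "feature detector" standing in for one trained neuron.
# (Illustrative assumption: the real system uses deep conv layers, not this.)
feature = np.sin(np.linspace(0, 4 * np.pi, 64))

def activation(image):
    """How strongly the toy neuron responds to the image."""
    return float(feature @ image)

# Start from pure random noise, as in the example described above.
image = rng.normal(0, 0.1, 64)
before = activation(image)

# Gradient ascent: repeatedly nudge the image so the neuron fires harder.
# For a linear detector, the gradient of the activation is just `feature`.
for _ in range(50):
    image += 0.1 * feature

after = activation(image)
# `after` exceeds `before`: the detector's pattern has been amplified out of
# the noise, analogous to bananas emerging from a noisy image.
```

The same loop run against a neuron deep in a trained image classifier, with the gradient computed by backpropagation, is what produces the images shown here.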
The concept of a computer brain and the discipline of big data are deeply intertwined. The four V’s of big data (volume, variety, velocity, and veracity) are all present in the decision-making process of a digital mind. Volume shows up in the vast scope of images that exist on the internet, and variety follows suit. Velocity and veracity both relate to the synthetic brain’s decision making: veracity in the steps the artificial mind takes to infer something, and velocity in how quickly it analyzes and organizes.
Big data is usually employed with an eye to statistical analysis. In this case, the “big data” is every online image, and instead of graphs, Google’s computer brain invents new images rendered from its own understanding of the visual “data”.
After researching how these images work, I was left wanting more: I had to see what my own photos would look like if they were run through the artificial neural network. Fortunately, an app that makes this possible has been created. Dream Scope is a pioneer of this idea; it allows you to generate these types of images based on the patterns that Google’s neural brain processes and stitches together.
I was motivated to plug in a photo of my own. Here’s a promo shot of the band I play in, before and after.
Photo by Matthew Wordell