
Google’s DeepMind Plans to Create a Computer Able to Program Itself


British artificial intelligence startup DeepMind, which Google bought for $400 million in January this year, is working on a computer so intelligent that it will be capable of programming itself.

It seems that Google has been steadily accelerating its research efforts in artificial intelligence and machine learning. In September, the company teamed up with scientists at the University of California led by Professor John Martinis to develop super-fast quantum computer chips modeled on the human brain. The aim is to make machines more ‘human’, as the chips would enhance their intuitive decision-making and predictive abilities.

As for its ambitious plan to create a self-programming computer, Google recently started a partnership with two AI research teams at the University of Oxford. The research is aimed at helping machines better understand their users by improving visual recognition systems and natural language processing.

The computer, called the “Neural Turing Machine”, combines the way a conventional computer works with the way the human brain works. In particular, it mimics the short-term memory of the human brain, which allows it to learn as it stores new memories and then use them to perform logical tasks beyond those for which it was originally programmed.

“We have introduced the Neural Turing Machine, a neural network architecture that takes inspiration from both models of biological working memory and the design of digital computers,” the research team wrote.

According to the results of the first tests, the neural network computer can successfully create its own simple programming algorithms, such as copying, sorting, and recall, and then use them to make classifications and data correlations. “The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent,” the researchers wrote.
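To make the idea of a “differentiable” memory more concrete, here is a minimal Python sketch, not DeepMind’s actual code, of the kind of soft memory access the researchers describe: instead of reading one exact memory slot, the machine reads a weighted blend of all slots, so the whole operation stays smooth enough to be trained with gradient descent. The function names and the tiny toy memory below are illustrative assumptions, not part of the published model.

import numpy as np

def softmax(x):
    # Normalize raw scores into attention weights that sum to 1.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def content_addressing(memory, key, beta):
    # Compare a key vector against every memory row (cosine similarity),
    # sharpen with strength beta, and normalize into read/write weights.
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    similarity = memory @ key / norms
    return softmax(beta * similarity)

def read(memory, weights):
    # Blended (differentiable) read: a weighted sum over all memory rows.
    return weights @ memory

def write(memory, weights, erase, add):
    # Blended write: each row is partially erased and updated in
    # proportion to its attention weight.
    memory = memory * (1 - np.outer(weights, erase))
    return memory + np.outer(weights, add)

# Toy example (hypothetical sizes): a 4-slot memory of 3-dimensional vectors.
M = np.random.randn(4, 3)
key = np.array([1.0, 0.0, 0.0])

w = content_addressing(M, key, beta=5.0)           # where to look
r = read(M, w)                                     # what was read
M = write(M, w, erase=np.ones(3) * 0.5, add=key)   # store new content

print("read weights:", w)
print("read vector:", r)

In the real Neural Turing Machine, these attention weights are produced by a learned neural “controller” rather than supplied by hand, and an additional location-based addressing step is applied on top of the content-based one.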

However, some have expressed concerns about the accelerating progress in artificial intelligence. Elon Musk, the founder and CEO of SpaceX and Tesla Motors, believes that smart machines could become more dangerous than even nuclear weapons if they become too autonomous. He referred to Swedish philosopher Nick Bostrom, known for his theory that we are living in a computer simulation, who wrote about the potential threats that artificial intelligence may pose to humanity in his book “Superintelligence: Paths, Dangers, Strategies”.

Recognizing the potential dangers, Google says it has set up a special ethics board to oversee all of the company’s artificial intelligence research and to impose a series of restrictions on the use of this technology.

What do you think?

ABOUT THE AUTHOR

Anna LeMind is the owner and lead editor of the website Learning-mind.com, and a staff writer for The Mind Unleashed.

Featured image: Swide.com


