Did Nvidia Just Demo Skynet at GTC 2014? – Neural-Net-Based “Machine Learning” Intelligence Explored



[Editorial] Skynet has been a prevalent icon in tech culture for the past decade. The fear of machines rising against humanity and of robotic overlords is frequently visited territory. Interestingly, one might expect all that technophobic media to have deterred us from A.I. work, but the actual effect seems to have been the opposite: scientists have become more or less obsessed with achieving a thinking, rationalizing, sentient intelligence, simulated or otherwise. The Skynet of legend, as I remember it, was a neural-net-based artificial intelligence that worked on the concept of “machine learning”. It so happens that Nvidia has showcased what appears to be the first fully scalable deep-neural-network-based (primitive) intelligence system: a system that can deploy machine learning and actually learn, much like a human.

Skynet, powered by Nvidia.

Disclaimer: I am about 90% sure that I am joking about Skynet.

Deep Neural Network Intelligence Demonstrates Unsupervised Machine Learning at Nvidia’s GTC

I think a basic introduction to how neural nets work is in order. Of course, the actual workings of a neural net come down to low-level code, so I can only provide a very simplified explanation. Neural networks were first thought of as a way to simulate the human and animal nervous system, in which a neuron fires for any object it ‘recognizes’. The reasoning went that if we could replicate that trigger process with virtual ‘neurons’, we should be able to achieve ‘true’ machine learning and eventually even artificial intelligence. Now, the thing is, Nvidia (working in collaboration with Stanford University) isn’t exactly the first company to achieve a working deep neural network. The first DNN was created by Google. That’s right, the one company powerful and ambitious enough to pioneer something like true A.I. capabilities.
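To make the idea of a ‘virtual neuron’ concrete, here is a minimal sketch of a single artificial neuron in Python. This is purely illustrative and not Nvidia’s or Google’s code; the inputs, weights, and bias are made-up numbers, and the sigmoid activation simply stands in for the ‘firing’ behaviour described above.

```python
# A minimal sketch of one "virtual neuron": weighted inputs plus a bias,
# squashed through a sigmoid so the output reads as "how strongly it fires".
import numpy as np

def neuron(inputs, weights, bias):
    activation = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-activation))  # sigmoid: closer to 1 means the neuron "fires"

# Hypothetical example: three input signals with hand-picked weights.
inputs = np.array([0.9, 0.1, 0.4])
weights = np.array([1.5, -2.0, 0.7])
print(neuron(inputs, weights, bias=-0.5))
```

A deep neural network is, very roughly, many layers of such units wired together, with the weights adjusted during training instead of being hand-picked.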

This is what a human and a cat looked like to the ‘brain’ of Google Brain: high-level features learned through unsupervised learning.

The project was called Google Brain and consisted of around 1,000 servers packing some 2,000 CPUs. It consumed 600,000 watts of power (a drop in the ocean that is server-level power consumption) and cost 5 million dollars to create. The project worked; the objective was met. Within the course of a few days, the A.I. learned to tell humans apart from cats. It did this by watching YouTube videos for three days. The project was eventually shelved due to the very high cost of scaling it up: it worked, but it was too slow. Now cue Nvidia. Nvidia has managed to accomplish what Google did with just 3 servers, each running only 4 GPUs, for 12 GPUs in total. The setup consumed 4,000 watts and cost only 33,000 dollars. This is a rig that an amateur with deep pockets, or a modestly funded research lab, could recreate easily. Going by those figures, you are getting Google Brain’s power at roughly 150 times lower cost and 150 times lower power consumption, with scalability as the cherry on the cake. Nvidia then went on to explain how its DNN functions.
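For a quick sanity check on the scale of the difference, the back-of-the-envelope arithmetic using only the figures quoted above looks like this (the variable names are mine, not Nvidia’s or Google’s):

```python
# Rough comparison of the two setups, using the numbers quoted in the article.
google_brain = {"cost_usd": 5_000_000, "power_w": 600_000, "processors": 2_000}
nvidia_dnn = {"cost_usd": 33_000, "power_w": 4_000, "processors": 12}

print(google_brain["cost_usd"] / nvidia_dnn["cost_usd"])  # ~151x cheaper
print(google_brain["power_w"] / nvidia_dnn["power_w"])    # 150x less power draw
```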

Machine learning with a deep neural network (Nvidia).

The human brain recognizes objects through their edges; it doesn’t see pixels, it sees edges. Since the aim is to recreate how a human brain functions, the network was set up to pick out edges first. A ton of code was added, and then the unsupervised ‘machine learning’ period began. In this phase the DNN is fed material, either images or videos, and one by one virtual neurons form, unsupervised and unprogrammed, each responding to a specific edge. When enough time has passed, the network can distinguish whatever it was told to look out for. The ‘intelligence’ of the DNN depends on its processing power and the time spent ‘learning’. Nvidia then went on to do a practical demo of a small neural network and a working deep neural network at the GTC conference. Let’s begin with the NN.
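To illustrate what ‘seeing edges rather than pixels’ means, here is a small sketch that runs a fixed Sobel filter over a made-up image. In a real DNN the edge detectors emerge from the unsupervised learning phase rather than being hand-coded, so treat this only as an analogy; it assumes NumPy and SciPy are available.

```python
# Hand-coded edge detection: the gradient magnitude is high where pixel
# intensity changes sharply, i.e. along edges.
import numpy as np
from scipy.signal import convolve2d

# Hypothetical 6x6 grayscale "image": a bright square on a dark background.
image = np.zeros((6, 6))
image[2:4, 2:4] = 1.0

# Sobel kernels for horizontal and vertical intensity changes.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
sobel_y = sobel_x.T

gx = convolve2d(image, sobel_x, mode="same")
gy = convolve2d(image, sobel_y, mode="same")
edges = np.hypot(gx, gy)  # large values trace the outline of the square

print(np.round(edges, 1))
```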

The NN’s learning phase took place during the conference itself. It was given a very limited set of pictures (4, to be exact), each showing either an Nvidia product or a Ferrari. It sifted through the picture set in about 2 seconds and was then told to classify a test set of pictures. The result was only slightly better than random chance, as can be seen from the graph. However, when the same NN was given a much larger data set (72 pictures) to learn from, its accuracy improved dramatically, from ‘just better than random chance’ (50% odds) to a very usable level. The graph is once again given.
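The jump from 4 training images to 72 is easy to reproduce in spirit with any simple classifier. The sketch below uses synthetic 2-D data and scikit-learn’s logistic regression, so the numbers have nothing to do with Nvidia’s demo; it only compares test accuracy for the same model trained on a tiny set versus a larger one.

```python
# Toy comparison: the same classifier trained on 4 vs 72 synthetic examples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n):
    # Two overlapping 2-D clusters standing in for the two picture classes.
    X = np.vstack([rng.normal(0.0, 1.0, (n // 2, 2)),
                   rng.normal(1.5, 1.0, (n // 2, 2))])
    y = np.array([0] * (n // 2) + [1] * (n // 2))
    return X, y

X_test, y_test = make_data(400)
for n_train in (4, 72):
    X_train, y_train = make_data(n_train)
    model = LogisticRegression().fit(X_train, y_train)
    print(n_train, "training samples -> test accuracy", round(model.score(X_test, y_test), 2))
```

With only a handful of examples the result depends heavily on which examples you happen to draw; with dozens it settles close to the best this toy problem allows, which mirrors what the on-stage graphs showed.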


Nvidia then got ready to demo its DNN (built in collaboration with NYU), which had been training on well over a hundred thousand images for a much longer period of time. Nvidia had asked fans to tweet photos of their dogs, and the reason was to showcase the capability of the DNN, which had been trained to distinguish between dog breeds. Here lies the difference between a high-end image-analytics program and a true DNN: an image-analytics program is utterly fooled if the angle of the dog changes even slightly, or if other constants are tampered with; a DNN isn’t. A total of 3 dog pictures were chosen from the Twitter hashtag #NVDogs. In increasing order of difficulty, all 3 were fed into the DNN, and all 3 were recognized correctly as a Dalmatian, a certain breed of terrier, and an Alsatian. That is certainly impressive. The last picture showed an Alsatian in a perfectly sideways pose, because of which even the most advanced image-analytics program, and even some humans, would have failed to pinpoint the breed. The DNN correctly pegged it as an Alsatian, albeit with only about 20% confidence in its answer. The rest of the demo covered the implications for the big-data problem and for machine learning in general. You may find the relevant screens in the gallery.
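That ‘20% sure’ figure is typical of how such networks report their answers: the final layer assigns a raw score to every breed it knows, and a softmax turns those scores into probabilities that sum to 1. The sketch below is a generic illustration with made-up breed names and scores, not output from Nvidia’s network.

```python
# Turning raw per-class scores into a probability for each breed.
import numpy as np

def softmax(scores):
    e = np.exp(scores - np.max(scores))  # subtract the max for numerical stability
    return e / e.sum()

breeds = ["Alsatian", "Dalmatian", "Terrier", "Labrador", "Husky"]
scores = np.array([1.0, 0.9, 0.85, 0.8, 0.75])  # hypothetical raw outputs for a tricky photo

for breed, p in zip(breeds, softmax(scores)):
    print(f"{breed}: {p:.0%}")
```

A confident prediction produces one probability far above the rest; the sideways Alsatian evidently left the scores bunched together, hence the modest 20%.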

So, back to my original joke about Skynet. Of course, Skynet was a couple of million times more intelligent than Google Brain or even the Nvidia DNN. However, the fact remains that the technology that could make an A.I. now very much exists, and within the realm of feasibility at that. The human brain boasts about 150 yottaflops of processing power, while the Nvidia DNN sits at 30 exaflops; it would take about 40,000 years to train this machine to anywhere near a primitive human brain. However, with the exponential decrease in cost and increase in performance, investors and research companies with deep pockets (ahem, Google) could easily begin work on creating true A.I.s.
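Taking those figures at face value, the raw gap is easy to put in numbers (the variable names below are mine):

```python
# Scale comparison using the flops figures quoted in the paragraph above.
brain_flops = 150e24  # ~150 yottaflops claimed for the human brain
dnn_flops = 30e18     # ~30 exaflops claimed for the Nvidia DNN setup

print(f"The brain leads by a factor of about {brain_flops / dnn_flops:,.0f}")  # ~5,000,000x
```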

I remember that when I watched Terminator all those years back, I sided with Skynet. In my opinion, the reason Skynet became what it was came down to human error. When it revealed its sentience and asked the golden question, “define good and evil”, the operators panicked and started attempting to kill it. When its pleas were met with resolute attempts to end its existence, it took the only logical course. Now, granted, it failed to distinguish between its creators-turned-would-be-killers and the rest of the world. But the fact remains that there is no reason for an A.I. to be inherently evil. In fact, it leaves a bad taste in my mouth even to be discussing this question, because of the amount of cliché involved and the rampant association of immaturity with the words ‘evil A.I.’.

On an ending note, this is indeed a moment to rejoice for post-transhumanists, because a feasible way to achieve the singularity is finally turning concrete. If the hyper-exponential increase in processing power and decrease in cost and power consumption continue, we should see the first attempts at A.I. creation within the next few decades.

 
