Google Is Utilizing Its Deep Learning Tech To Diagnose Diseases
The idea of having machines do much of our medical diagnosis is not new. We have long relied on different forms of technology for diagnosis; the MRI and the CT scan are two examples that come to mind. So how much smarter can computers get? As it turns out, if you give a computer a set of images along with the right algorithm, it will learn to see, and Google is using that ability to change how diseases are diagnosed.
What Makes Google So Different In Diagnosing?
Diabetes is becoming more common every day. People with diabetes may suffer from a condition called diabetic retinopathy: tiny blood vessels at the back of the eye leak, burst, and become damaged. About one in three diabetic patients develops this condition. How serious is it? If left untreated, it can cause permanent blindness.
Here’s the catch, though: diagnosed early, the condition can be treated before any vision is lost. The early signs, however, are difficult even for human doctors to spot, and most people do not have access to ophthalmologists who can make the diagnosis reliably. That is where Google comes in, helping detect a condition that is the leading cause of vision impairment and blindness in the working-age population.
How Does Google Diagnose This Disease?
Google has trained neural networks to detect diabetic retinopathy. The study was published in the Journal of the American Medical Association on Tuesday. A neural network, to sum it up, is a simplified model of the brain.
By showing it a large set of images of healthy and diseased eyes, a computer can be trained to distinguish between the two. After training, Google tested the algorithm for accuracy. As it turns out, it was able to detect the condition slightly better than human doctors did.
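To make the idea concrete, here is a toy sketch of that kind of training, nothing like Google's actual model, which is a deep neural network trained on large numbers of real retinal photographs. It fabricates tiny synthetic "retina" images, marks the diseased ones with bright spots standing in for leaking vessels, and fits a single artificial neuron (logistic regression) to tell them apart. Every name and parameter here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(diseased, size=8):
    """Synthetic 8x8 grayscale 'retina'; diseased ones get a few
    bright spots standing in for leaking blood vessels."""
    img = rng.normal(0.2, 0.05, (size, size))
    if diseased:
        for _ in range(3):
            r, c = rng.integers(0, size, 2)
            img[r, c] += 0.8  # a bright 'lesion'
    return img.clip(0.0, 1.0)

# Labeled training set: to the computer, each image is just 64 pixel values.
labels = np.array([1] * 50 + [0] * 50)        # 1 = diseased, 0 = healthy
X = np.array([make_image(d).ravel() for d in labels])
mu = X.mean(axis=0)                           # center the pixel features
Xc = X - mu

# One artificial neuron (logistic regression) trained by gradient descent.
w, b = np.zeros(Xc.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Xc @ w + b)))   # predicted P(diseased)
    grad = p - labels                         # cross-entropy gradient
    w -= 0.1 * Xc.T @ grad / len(labels)
    b -= 0.1 * grad.mean()

# Evaluate on fresh synthetic images the model has never seen.
test = [(make_image(d), d) for d in [1, 0] * 20]
correct = sum(
    (1.0 / (1.0 + np.exp(-((img.ravel() - mu) @ w + b))) > 0.5) == d
    for img, d in test
)
print(f"accuracy on new images: {correct / len(test):.2f}")
```

Even this crude single-neuron model learns to separate the two classes from labeled pixels alone, which is the core of the approach; real diagnostic networks simply stack many such units in layers and train on far larger, real-world datasets.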
Why only slightly better? You might think technology can surpass human intelligence in every possible way, but that is not entirely true. Computers do not see images the way a human brain does. Given an image, all a computer reads is pixels and their colors, while our brain perceives the scene and the activity within it.
A computer can interpret an image the way our brain does only through computer vision, and even that is not yet as effective as our eyes. Google's new algorithm has the potential to change that. The company already uses computer vision in Google Photos: type a description of a scene, and the matching photo appears automatically even though it carries no such tag.
Diabetic retinopathy is just the first of the many diagnoses Google is aiming for. If proven successful, the same algorithms could be applied to other conditions. Another win for the medical field.