
by Sherif Gerges
figures by Olivia Foster

Most of a physician’s working hours involve pattern recognition and high-level problem solving. Throughout his or her professional tenure, a dermatologist will analyze over two hundred thousand skin lesions, while a radiologist will look at millions of medical images. Yet becoming sufficiently proficient at diagnosing these images is no cakewalk: physicians spend decades building a mental reference database which they continuously refine as their expertise grows. This is an extensive, methodical process and a hallmark of medicine – but what if we could hasten the process to take only weeks instead of years?

Machine Learning: The Art of Learning Deeply

Recently, scientists have begun using machine learning as a tool to deliver diagnoses with accuracy comparable to that of medical doctors, while acquiring these skills within substantially shorter timeframes. In particular, a subset of machine learning known as deep learning is showing potential as an automated diagnosis platform that could revolutionize medicine. Just as physicians draw on their training and experience for diagnosis and prognosis, deep learning algorithms "learn" from medical images in order to diagnose a pathology when presented with a new one.

This isn’t a novel phenomenon – computers have aided physicians in diagnoses for years. However, their usefulness has been limited by their failure to make adjustments after misdiagnosing multiple cases; that is, they do not “learn” from their mistakes. These first-generation tools followed a fixed series of diagnostic rules, making them incapable of improving beyond the set of instructions coded into them. For example, imagine teaching a child what a leopard is. We could teach her a set of rules – leopards have ears, whiskers, four legs, and so on – but this approach collapses once the child sees another similar-looking cat, like a jaguar. What actually happens is that once the child errs, she makes minor adjustments to recognize the unique features (jaguars are bulkier and have larger spots) that separate “jaguar” and “leopard” into two distinct categories.

This is akin to how deep learning algorithms work. In practice, they are fed thousands of labeled images (also known as training data) that carry the tags “jaguar” and “leopard.” When the algorithm makes mistakes, it makes internal adjustments so that it classifies those images correctly the next time. Conveniently, this has immediate applications to medical images. At their most fundamental level, MRI, CT or X-ray scans are bits of information that can be mined and fed into an algorithm. However, as with our big cat analogy, the variation within these images is extensive and small differences can be misleading. How do you discriminate between a harmless depression in the skin and skin cancer? Using the general-purpose learning procedure described above, deep learning algorithms can “learn” autonomously to distinguish a real skin malady from cosmetic discoloration.
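To make the “adjust after mistakes” loop concrete, here is a minimal sketch using a single artificial neuron (logistic regression) and synthetic feature vectors in place of real images – the two “prototype” classes and all numbers are invented for illustration, and real deep learning systems stack many such units. The core cycle is the same: predict, measure the error, and nudge the internal weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "images": two classes, each a noisy cloud around a prototype vector.
n, d = 200, 16
proto_a, proto_b = rng.normal(0, 1, d), rng.normal(0, 1, d)
X = np.vstack([proto_a + 0.5 * rng.normal(0, 1, (n, d)),
               proto_b + 0.5 * rng.normal(0, 1, (n, d))])
y = np.array([0] * n + [1] * n)  # labels: 0 = "leopard", 1 = "jaguar"

# A single logistic "neuron": predict, measure error, adjust weights.
w, b, lr = np.zeros(d), 0.0, 0.1
for epoch in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of class 1
    grad_w = X.T @ (p - y) / len(y)         # how each weight contributed to the error
    grad_b = np.mean(p - y)
    w -= lr * grad_w                        # the "internal adjustment" step
    b -= lr * grad_b

accuracy = np.mean(((X @ w + b) > 0) == (y == 1))
print(f"training accuracy: {accuracy:.2f}")
```

After a few hundred adjustment steps the neuron separates the two clouds almost perfectly; no rule like “jaguars have larger spots” was ever written down – the weights encode it.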

Figure 1. Cancerous and benign lesions appear similar to the naked eye. These images are from the ISIC 2016 melanoma diagnosis dataset.

Doctor Who?

An illustration of this ability comes from Sebastian Thrun’s group at Stanford. The group used over one hundred thousand images to train a deep learning algorithm to classify various forms of skin cancer with accuracy comparable to that of board-certified dermatologists. In fact, by some metrics it performed even better. For example, melanomas (a type of skin cancer) look very similar to moles (Figure 1), making them difficult to diagnose. Yet, compared to trained human professionals, Thrun’s algorithm was less likely to miss a melanoma (higher sensitivity) or to flag a benign lesion as melanoma (higher specificity). These metrics highlight just how powerful these algorithms are at picking up the faintest features of skin cancer.
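The comparison above boils down to two standard metrics: sensitivity (the fraction of true melanomas caught) and specificity (the fraction of benign lesions correctly cleared). A short sketch with made-up labels and predictions – not the Stanford study’s data – shows how both are computed:

```python
# Hypothetical labels and predictions (1 = melanoma, 0 = benign), for illustration only.
labels      = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predictions = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]

true_pos  = sum(1 for l, p in zip(labels, predictions) if l == 1 and p == 1)
false_neg = sum(1 for l, p in zip(labels, predictions) if l == 1 and p == 0)
true_neg  = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 0)
false_pos = sum(1 for l, p in zip(labels, predictions) if l == 0 and p == 1)

sensitivity = true_pos / (true_pos + false_neg)  # melanomas caught: 3 of 4
specificity = true_neg / (true_neg + false_pos)  # benign lesions cleared: 5 of 6

print(f"sensitivity: {sensitivity:.2f}, specificity: {specificity:.2f}")
```

A missed melanoma (a false negative) lowers sensitivity; a benign mole flagged as cancer (a false positive) lowers specificity – the two errors the Stanford algorithm made less often than the dermatologists.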

Another example of these high-level diagnostic capabilities was demonstrated by researchers from the University of North Carolina, Chapel Hill. This time, the training sets were MRI scans of children at a high risk of developing autism. Children on the autism spectrum often struggle with social interactions and friendships, meaning that early interventions are critical for improving a child’s cognitive development. Previously, researchers knew that autism is associated with excessive brain growth, but in scans this overgrowth is often imperceptible to the human eye. To track overgrowth more precisely, the researchers trained a deep learning neural network on MRI scans of children taken at 6 and 12 months of age. The network detected overgrowth already at 6 months, allowing for earlier identification of high-risk children and thus swifter interventions.

Doctors in Our Smartphones: Rethinking The Future Using AI

While these studies illustrate the power of AI, optimists believe we can imagine a future in which AI truly revolutionizes healthcare. In the US, healthcare is beleaguered by cost inefficiencies, a shortage of physicians and less-than-ideal patient outcomes. For example, if a patient suspects an unsightly mole might be melanoma, he or she books an appointment and visits a dermatologist for inspection. If this is inconclusive, a biopsy is conducted for further evaluation. From booking the appointment to receiving the biopsy result takes anywhere between three weeks and two months, making this pipeline particularly prohibitive for patients without adequate healthcare and distracting physicians from complex cases that demand more attention.

To ameliorate these inefficiencies, we could rethink the current doctor-patient pipeline. For example, before visiting a clinic, patients could connect with an intelligent personal assistant (such as Siri or Alexa) that acts as a diagnostic screening platform. In the particular case of the virtual dermatologist (Figure 2), patients could upload photos of their lesions via smartphones and receive a preliminary diagnosis within minutes. The advantage would be twofold: first, it would filter out low-risk patients, and second, it would shift physicians’ attention to more demanding cases. Furthermore, AI is highly scalable at only marginal cost, meaning a virtual diagnosis platform could reach anyone with a smartphone subscription – of which over 6.5 billion will exist by 2021.

Figure 2. The workflow of virtual dermatology. In Step One, a patient takes a picture of a wound/infection/lesion on their body. In Step Two, a computer/algorithm analyzes the image and outputs the data into a probability chart that computes the odds that the image shows something cancerous or benign. Finally, in Step Three, the data is sent to a hospital/physician (in the case of the melanoma) or a “non-urgent cases” list (in the case of the mole).
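The routing logic in Step Three of Figure 2 can be sketched in a few lines. Everything here is a hypothetical stand-in: `classify()` fakes a trained model’s probability output, and the 0.5 threshold is an illustrative assumption, not a validated clinical rule.

```python
# Hypothetical triage step from Figure 2: route a lesion image by its
# predicted melanoma probability. All names and numbers are illustrative.
def classify(image_path: str) -> float:
    """Stand-in for a trained model: returns P(melanoma) for an image."""
    fake_scores = {"mole.jpg": 0.04, "dark_lesion.jpg": 0.91}
    return fake_scores.get(image_path, 0.5)

def triage(image_path: str, threshold: float = 0.5) -> str:
    p_melanoma = classify(image_path)
    if p_melanoma >= threshold:
        return "refer to physician"   # Step 3a: high risk, send to hospital
    return "non-urgent cases list"    # Step 3b: low risk, monitor

print(triage("dark_lesion.jpg"))
print(triage("mole.jpg"))
```

In a deployed system the threshold would be tuned toward high sensitivity, since the cost of sending a benign mole to a doctor is far lower than the cost of missing a melanoma.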

There is no disputing that the abundance of extractable data is increasing, and AI – like the perfect medical resident – can mine the finest details invisible to the human eye, process the data, and interpret it swiftly. However, how soon we can expect people to be diagnosed by a “virtual MD” is not clear, as such a system would need to be evaluated in rigorous clinical trials. If we are entering a transformative era, we are only just learning how to walk. The underlying power of intelligent machines is already here; we just have to adapt it to our needs.

Sherif Gerges is a graduate student in the BBS Program at Harvard Medical School and a researcher at the Simches Research Center at Massachusetts General Hospital and the Broad Institute of MIT and Harvard.

This article is part of a Special Edition on Artificial Intelligence.

For more information:

  1. Deep Learning: With massive amounts of computational power, machines can now recognize objects and translate speech in real time. Artificial intelligence is finally getting smart. MIT Technology Review.
  2. Skin Cancer Classification With Deep Learning.
  3. This algorithm can spot signs of autism in children a year before they’re diagnosed. Wired.
  4. A soft introduction to Deep Learning.
  5. The Great A.I. Awakening. New York Times.
