According to the World Cancer Research Fund, breast cancer is one of the most common cancers worldwide: 12.3% of new cancer patients in 2018 had breast cancer. Early detection can significantly improve treatment outcomes; however, interpreting cancer images depends heavily on the experience of doctors and technicians. The cancer forms lesions of various shapes and sizes that show up on medical images. Because these lesions vary so widely, it is difficult to distinguish benign from malignant forms of the cancer.
To help solve this problem, SAS is working with a large hospital to train neural networks on the characteristics of breast cancer. Using medical images, the system is trained to recognize specific shapes and growth patterns of both malignant and benign forms of breast cancer. This helps the attending physician and the patient make a better determination on next steps and options for treatment.
How it works
The solution uses SAS Viya and NVIDIA graphics processing units (GPUs) plus a deep convolutional neural network (CNN). This network is composed of an input layer, an output layer, and any number of hidden layers. CNNs are ideal for image recognition workloads because their neurons are arranged in three dimensions (width, height, and depth), which allows CNNs to train on three-dimensional data such as images. The hidden layers are themselves complex, as they can contain convolutional layers, normalization functions, and pooling layers. In short, a lot of math happens within a CNN, and that's why it's necessary to harness the massively parallel processing power of SAS Viya and NVIDIA GPUs.
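To make the hidden-layer building blocks concrete, here is a minimal sketch of the two operations mentioned above, a convolution and a pooling step, written in plain Python with NumPy. This is an illustration of the math only, not the SAS Viya implementation; the image, kernel values, and layer sizes are made up for the example.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over the image, computing a weighted sum at each position."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Non-linearity: keep positive activations, zero out the rest."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Downsample by taking the maximum in each size-by-size window."""
    h, w = x.shape
    return (x[:h - h % size, :w - w % size]
            .reshape(h // size, size, w // size, size)
            .max(axis=(1, 3)))

# A toy 6x6 "image" and a 3x3 vertical-edge kernel (illustrative values)
image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.array([[-1., 0., 1.],
                   [-1., 0., 1.],
                   [-1., 0., 1.]])

features = max_pool(relu(conv2d(image, kernel)))
print(features.shape)  # (2, 2): a smaller map of detected features
```

Every output pixel of the convolution is an independent weighted sum, which is exactly why these layers map so well onto the thousands of parallel cores of a GPU.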
The power of parallel processing
From Bitcoin mining to data science tasks, GPUs are becoming a staple for workloads that require large amounts of parallel computing. In the past, GPUs were primarily used to accelerate graphics for the video gaming industry. As the gaming experience became more complex and realistic, it required far more calculations, and GPUs delivered the improved performance and reduced latency those workloads demanded. However, with the rise of machine learning, and particularly deep neural networks, GPUs found a new workload where they can shine.
All deep learning models require complex mathematical calculations; however, not all neural networks are the same. For example, convolutional neural networks are ideal for image-related tasks such as object detection, facial recognition, and image classification. Others, such as recurrent neural networks (RNNs), excel at tasks related to speech and text processing. Beyond CNNs and RNNs, there are recursive neural networks, multilayer perceptrons, and long short-term memory networks, among others. By choosing the right deep neural network, the data scientist can improve the speed and effectiveness of the overall detection process.
Each of these neural networks performs complex computations. Even a simple task, such as taking a selfie and recognizing the people in it, can require millions of calculations. GPUs are ideal for this type of workload, with thousands of cores capable of solving millions of math problems in parallel (meaning all at once). For example, the NVIDIA Volta GPU can perform at 125 teraFLOPS, meaning a single GPU can execute 125 trillion floating-point operations per second.
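A quick back-of-envelope calculation shows why that throughput matters. The layer dimensions below are assumptions chosen for illustration (a common 224x224 RGB input and a 3x3 convolution with 64 filters), not figures from the SAS solution.

```python
# Back-of-envelope: how long would one convolutional layer take at 125 teraFLOPS?
# All layer sizes here are illustrative assumptions, not measurements.

image_h, image_w = 224, 224       # assumed input resolution
in_channels, out_channels = 3, 64 # RGB in, 64 feature maps out
kernel = 3                        # 3x3 convolution

# Each output pixel needs kernel * kernel * in_channels multiply-adds,
# and each multiply-add counts as 2 floating-point operations.
flops_per_pixel = kernel * kernel * in_channels * 2
total_flops = image_h * image_w * out_channels * flops_per_pixel

peak = 125e12  # 125 teraFLOPS = 125 trillion floating-point operations per second
seconds = total_flops / peak

print(f"{total_flops:,} FLOPs -> {seconds * 1e6:.2f} microseconds at peak")
```

Even this single layer needs over 170 million operations, and a deep network stacks dozens of such layers and processes thousands of images per training pass, which is why massively parallel hardware is essential.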
So where can you use all this compute power? SAS and NVIDIA use the combined power of an advanced analytics engine and GPU performance for a number of real-world use cases, like the one mentioned above. We'll have plenty more to come, so stay tuned. In the meantime, learn more by reading this blog post: Advancing AI with deep learning and GPUs.