
Artificial Intelligence and the Implications for Medical Imaging

Siemens
Analytics & Modeling - Machine Learning
Healthcare & Hospitals
Automated Disease Diagnosis

There are several factors simultaneously driving the integration of AI in radiology. Firstly, in many countries around the world there is a discrepancy between the number of doctors trained in radiology and the rising demand for diagnostic imaging. This leads to greater demands for work efficiency and productivity. For example, the number of radiology specialists (consultant workforce) in England went up 5% between 2012 and 2015, while in the same period the number of CT and MR scans increased by 29 and 26 percent respectively. In Scotland, the gap widened even further (The Royal College of Radiologists 2016). Today, the average radiologist is interpreting an image every three to four seconds, eight hours a day (Choi et al. 2016).

Secondly, the image resolution of today’s scanners is continuously improving – resulting in an ever greater volume of data. Indeed, the estimated overall medical data volume doubles every three years, making it harder and harder for radiologists to make good use of the available information without extra help from computerized digital processing. It is desirable, both in radiological research and in clinical diagnostics, to be able to quantitatively analyze this largely unexploited wealth of data and, for example, utilize new measurable imaging biomarkers to assess disease progression and prognosis (O’Connor et al. 2017). Experts see considerable future potential in the transformation of radiology from a discipline of qualitative interpretation to one of quantitative analysis, which derives clinically relevant information from extensive data sets (“radiomics”). “Images are more than pictures, they are data,” American radiologist Robert Gillies and his colleagues write (Gillies et al. 2016). Of course, this direction for radiology will require powerful, automated procedures, some of which at least will come under the field of artificial intelligence.

Large corporations involved in all types of medical imaging, attracted by the accuracy and precision that AI promises.
Undisclosed

The use of machine learning in medical imaging is not new – algorithms today are, however, much more powerful than traditional applications (van Ginneken 2017). The ANNs on which deep learning is based always have multiple functional layers, sometimes even exceeding a hundred, which can encompass thousands of neurons with millions of connections. (Simple ANNs with, say, only one interim layer are described in contrast as “shallow” networks.) All of these connections are adjusted during an ANN’s training by gradual changes of their respective parameters – in mathematical terms: their weights. In this way, deep networks feature a virtually unimaginable number of possible combinations for processing information, and can even model highly complex, non-linear contexts. During the training procedure, the different layers of an ANN increasingly structure the input data with each consecutive layer, developing a more abstract “understanding” of the information. Of course, such deep ANNs were only made possible by advanced mathematical methods and the availability of higher computational power and faster graphics processing units (GPUs) to compute the innumerable steps during the learning process. In 2013, the MIT Technology Review identified deep learning as one of the 10 Breakthrough Technologies of the Year (Hof 2013).
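
As a rough sketch of the mechanics described above, the following toy example (PyTorch, with made-up data, not taken from any Siemens product) builds a small multi-layer network and nudges all of its weights step by step during training:

```python
# A minimal sketch: a small "deep" feed-forward network whose connection weights
# are gradually adjusted by gradient descent, as described in the text.
import torch
import torch.nn as nn

# Toy data: 256 samples with 32 input features and a binary label.
x = torch.randn(256, 32)
y = (x.sum(dim=1, keepdim=True) > 0).float()

# Several stacked layers; real "deep" networks may use dozens or hundreds.
model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()      # compute gradients for every weight
    optimizer.step()     # nudge each weight slightly against its gradient

print(f"final training loss: {loss.item():.4f}")
```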

For image recognition, “deep convolutional neural networks” (a specific type of ANN) have proven to be especially efficient. Similar to the visual cortex in the brain, these networks first extract fundamental image characteristics from the input data, like corners, edges and shading. In multiple abstraction steps, they then settle independently on more complex image patterns and objects. When the best of these kinds of networks are tested on non-medical image databases, their error rate is now down to just a few percent (He et al. 2015). Moreover, different network architectures and methods may be combined (e.g. deep learning with “reinforcement learning”) to achieve an optimal result depending on the problem posed.
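
As an illustration of this layered feature extraction, here is a minimal, hypothetical convolutional network in PyTorch; the layer sizes and input shape are arbitrary choices and do not reproduce any network cited above:

```python
# A minimal sketch of a deep convolutional network: early layers respond to
# low-level structure (edges, corners, shading), later layers combine them
# into more abstract patterns before classification.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # low-level features
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # mid-level patterns
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),  # abstract features
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        return self.classifier(h.flatten(1))

# One grayscale 128x128 "image" as a stand-in for a radiological slice.
logits = TinyConvNet()(torch.randn(1, 1, 128, 128))
print(logits.shape)  # torch.Size([1, 2])
```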

Given this development, experts anticipate significant changes in medical imaging (Lee et al. 2017). Unlike earlier AI methods, which were introduced in the US beginning in the late 1990s, especially for mammography screening, and which suffered from numerous shortcomings (Morton et al. 2006; Fenton et al. 2007; Lehman et al. 2015), today’s algorithms will likely prove to be transformative technologies for clinical diagnostics.


The promise of AI in medical imaging lies not only in higher automation, productivity and standardization, but also in an unprecedented use of quantitative data beyond the limits of human cognition. This will support better, and more personalized, diagnostics and therapies. Today, artificial intelligence already plays an important role in the everyday practice of image acquisition, processing and interpretation. Siemens Healthineers, for example, has developed a pattern recognition algorithm (Automatic Landmarking and Parsing of Human Anatomy, ALPHA) for its 3D diagnostic software “syngo.via,” which automatically detects anatomical structures, independently numbers vertebrae and ribs, and also aids in precisely overlaying different examination dates or even different modalities (AI-based landmark detection and image registration). This considerably helps simplify workflows in diagnostic imaging. The same is true for award-winning algorithms like “CT Bone Reading” for virtual unfolding (2D reformatting) of the rib cage or “eSie Valve” for simultaneous 3D visualization of heart valve anatomy and blood flow (R&D Magazine 2014; R&D 100 Conference 2015). AI applications like these are already an established part of available imaging software.
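
The landmark-based registration mentioned above can be sketched in simplified form: once corresponding anatomical landmarks have been detected in two examinations, the rigid transformation aligning them can be estimated with a standard Kabsch (Procrustes) fit. The snippet below uses hypothetical landmark coordinates and is not the Siemens ALPHA algorithm:

```python
# A minimal sketch of rigid, landmark-based registration: estimate the rotation
# and translation that best align corresponding anatomical landmarks from two scans.
import numpy as np

def rigid_landmark_registration(src: np.ndarray, dst: np.ndarray):
    """Return rotation R and translation t so that R @ src[i] + t ~= dst[i]."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    P, Q = src - src_c, dst - dst_c          # centered landmark clouds
    H = P.T @ Q                              # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical landmark positions (mm) in a baseline and a follow-up scan.
baseline = np.array([[10.0, 20.0, 30.0], [40.0, 25.0, 31.0],
                     [15.0, 60.0, 28.0], [35.0, 55.0, 75.0]])
followup = baseline + np.array([2.0, -3.0, 1.5])   # shifted copy for illustration

R, t = rigid_landmark_registration(baseline, followup)
print(np.allclose(baseline @ R.T + t, followup))   # True
```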

Numerous other applications are in development (Comaniciu et al. 2016) and can be expected from a range of companies in coming years (Signify Research 2017). The corporate research of Siemens Healthineers alone includes 400 patents and patent applications in the field of machine learning, 75 of them in deep learning.

No less significant is the availability of comprehensive open-source tools for developing AI applications (Erickson et al. 2017). And many academic research groups are moving clinical implementations of machine learning methods forward through pilot studies. Realistic scenarios for routine clinical use of AI might include, for example, improved assessments of chest ultrasound images and detection of pulmonary nodules in CT (Cheng et al. 2016), or quantitative analyses of neurological diseases through precise segmentation of brain structures (Akkus et al. 2017).
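
For such quantitative segmentation analyses, overlap with a reference segmentation is commonly summarized with the Dice coefficient; the short sketch below uses hypothetical binary masks:

```python
# A minimal sketch: the Dice coefficient, a standard measure of how well an automatic
# segmentation (e.g. of a brain structure) overlaps with a reference segmentation.
import numpy as np

def dice_coefficient(pred: np.ndarray, reference: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, reference = pred.astype(bool), reference.astype(bool)
    intersection = np.logical_and(pred, reference).sum()
    denom = pred.sum() + reference.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Two hypothetical 3D binary masks (1 = voxel belongs to the structure).
pred = np.zeros((64, 64, 64), dtype=np.uint8); pred[20:40, 20:40, 20:40] = 1
ref  = np.zeros((64, 64, 64), dtype=np.uint8); ref[22:42, 20:40, 20:40] = 1
print(f"Dice: {dice_coefficient(pred, ref):.3f}")  # 0.900
```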

Intelligent algorithms would also benefit cardiac patients undergoing a coronary CT angiogram, since deep learning methods can be used to calculate the calcium score of their vessels at the same time. Until now, an additional CT scan has often been performed for this purpose, with added radiation exposure (Wolterink et al. 2016).
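
For orientation, the calcium (Agatston) score referred to above is computed per calcified lesion as its area weighted by its peak attenuation. The simplified sketch below assumes a lesion mask is already available, whereas the deep learning methods cited would also have to detect the lesions themselves:

```python
# A simplified sketch of Agatston-style calcium scoring on one CT slice, assuming a
# binary lesion mask and pixel values in Hounsfield units (HU) are already given.
import numpy as np

def agatston_lesion_score(hu_slice: np.ndarray, lesion_mask: np.ndarray, pixel_area_mm2: float) -> float:
    """Score one lesion on one slice: area (mm^2) times a weight based on peak density."""
    lesion_hu = hu_slice[lesion_mask.astype(bool)]
    peak = lesion_hu.max()
    if peak < 130:           # below the conventional calcium threshold
        return 0.0
    weight = 1 if peak < 200 else 2 if peak < 300 else 3 if peak < 400 else 4
    area_mm2 = lesion_mask.sum() * pixel_area_mm2
    return area_mm2 * weight

# Hypothetical 5x5 patch of HU values with a small calcified lesion.
hu = np.full((5, 5), 40.0)
hu[1:3, 1:3] = [[220.0, 310.0], [180.0, 250.0]]
mask = np.zeros((5, 5)); mask[1:3, 1:3] = 1
print(agatston_lesion_score(hu, mask, pixel_area_mm2=0.25))  # 4 px * 0.25 mm^2 * weight 3 = 3.0
```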

Last but not least, the use of artificial intelligence offers remarkable prospects for countries with fewer medical resources. A recent study has shown that tuberculosis of the lung can be detected on chest x-rays with 97% sensitivity and 100% specificity, if the images are analyzed by two different deep ANNs, and only those cases in which the algorithms do not concur are then evaluated by a doctor trained in radiology. Such a workflow could have great practical relevance in regions with widespread tuberculosis, but few radiologists on hand (Lakhani & Sundaram 2017).
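
The triage logic of such a workflow can be sketched in a few lines; the model functions below are hypothetical stand-ins for the two deep networks used in the study:

```python
# A minimal sketch of the described triage workflow: two independent classifiers read
# each chest x-ray, and only disagreements are forwarded to a radiologist.
from typing import Callable, List

def triage(images: List[str],
           model_a: Callable[[str], bool],
           model_b: Callable[[str], bool]) -> dict:
    """Return automatic decisions plus the subset needing human review."""
    decided, needs_radiologist = {}, []
    for image in images:
        a, b = model_a(image), model_b(image)
        if a == b:
            decided[image] = a               # both networks agree -> accept the call
        else:
            needs_radiologist.append(image)  # disagreement -> human arbitration
    return {"automatic": decided, "for_review": needs_radiologist}

# Toy stand-ins for the two networks.
model_a = lambda img: "tb" in img
model_b = lambda img: img.endswith("_suspicious.png")
result = triage(["case1_tb_suspicious.png", "case2_normal.png", "case3_tb.png"], model_a, model_b)
print(result["for_review"])  # ['case3_tb.png']
```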

The acceleration of certain work steps in diagnostic imaging with artificial intelligence is already a reality today. For example, AI algorithms enable automated detection of anatomical structures, intelligent image registration and reformatting. These kinds of efficiency gains will become increasingly important given the growing demand for diagnostic imaging and rising cost pressure.

In the longer term, AI-based image analyses with reproducible characteristic measurements (imaging biomarkers), indices and “lab-like” results will likely prevail, particularly in areas like cardiac imaging that are already quantitatively oriented. This will also favor the (semi-)automated drafting of radiology reports and the transformation of radiology to a data-driven research discipline (“radiomics”). Meanwhile, radiological data sets can not only be analyzed graphically by artificial intelligence, but they can also be annotated textually (Shin et al. 2016).

The implementation of AI offers particularly fascinating prospects for personalized diagnostics and treatment. Through interoperable information systems, for example, clinical patient data could be linked with imaging algorithms to pursue individualized scanning strategies. In addition, AI applications would likely enable more precise diagnoses and more meaningful risk scores by gathering together large quantities of information. As a large multicenter study recently showed, for example, the long-term risk of mortality of patients with suspected cardiovascular diseases can be estimated with much greater precision if manifold clinical and CT angiogram parameters are integrated into a personalized prognosis model using machine learning procedures (Motwani et al. 2017). Such AI-based approaches could in future better identify high-risk patients, but also help prevent unnecessary treatments, and thus involve diagnostic radiology more closely in outcome-oriented clinical decisions.
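
The general idea of such a prognosis model can be sketched with synthetic tabular data; the features, outcome and classifier below are illustrative assumptions and not the model from Motwani et al. (2017):

```python
# A minimal sketch: combine many clinical and CT-angiography-derived parameters into
# a single learned risk model instead of relying on any one score. Data is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
# Hypothetical features: age, diabetes flag, stenosed segments on CT angiography, plaque burden.
X = np.column_stack([
    rng.normal(60, 10, n),        # age (years)
    rng.integers(0, 2, n),        # diabetes (0/1)
    rng.integers(0, 17, n),       # number of coronary segments with stenosis
    rng.normal(0.3, 0.1, n),      # plaque burden (arbitrary units)
])
# Synthetic outcome loosely depending on all features (for illustration only).
risk = 0.03 * (X[:, 0] - 60) + 0.8 * X[:, 1] + 0.15 * X[:, 2] + 2.0 * X[:, 3]
y = (risk + rng.normal(0, 1, n) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"AUC on held-out patients: {roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]):.2f}")
```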
