Olympus Deep Learning Technology Used to Help Create AI-Based
Pathology Diagnostic Tool

September 3, 2018

Olympus Corporation (President and Representative Director: Hiroyuki Sasa) today announced that its proprietary deep learning technology was used in a joint research program to develop a new approach to computer-aided diagnosis (CAD) using artificial intelligence (AI) for gastric biopsy specimens, an approach with the potential to help streamline the workload of clinical pathologists. To develop the application, Olympus worked with the Department of Diagnostic Pathology and the Institute for Clinical Research at the National Hospital Organization Kure Medical Center and Chugoku Cancer Center.

Demand for pathology as a diagnostic tool has increased as new testing systems have made the early detection of cancer easier. In the face of this rising demand, however, many hospitals are short of pathologists, leading to increasingly high workloads. One approach to streamlining these workloads is to support pathologists with an AI-based computer-aided diagnostic tool.

Since 2017, Olympus has participated in this joint research program, "A New Approach to Develop Computer-Aided Diagnosis Using AI for Gastric Biopsy Specimens," with Dr. Kiyomi Taniyama, President of the Kure Medical Center and Chugoku Cancer Center. The research paired Dr. Taniyama's knowledge and experience in gastric pathology diagnosis and digital pathology with Olympus' imaging system technology and proficiency in AI development. Olympus, which holds a leading market share in microscopes, will continue to develop an AI-based CAD solution for pathology diagnosis built on its proprietary deep learning technology.

Details of the Joint Research Program

Research

Olympus used gastric biopsy specimens collected for diagnosis at the Kure Medical Center and Chugoku Cancer Center between 2015 and 2018 to develop its deep learning technology, which is built around a multiresolution convolutional neural network[1] (CNN). The CNN was developed by Olympus and is designed to analyze the features of pathology sample images. Using the CNN, the deep learning technology identifies areas of adenocarcinoma (ADC) tissue in the images, and based on that result each image is classified as ADC or non-adenocarcinoma (NADC). The research involved two stages: a learning stage, in which the AI trains the CNN model on digital pathology images and their associated information, and a prediction stage, in which the AI classifies images as ADC or NADC using the trained model (Figure 1).


Figure 1: Deep learning study design
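The release does not describe the network itself, so the following is only a minimal sketch, in PyTorch, of one way a multiresolution patch classifier could be structured: the same tissue region is viewed at a high and a low magnification, each view passes through its own convolutional branch, and the fused features predict an ADC probability. The class name, layer sizes, and two-branch design are illustrative assumptions, not Olympus' actual architecture.

```python
# Hedged sketch of a two-resolution patch classifier (NOT the published network).
import torch
import torch.nn as nn

class MultiResolutionCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # One small convolutional branch per resolution (sizes are illustrative).
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),  # global average pooling -> (B, 32, 1, 1)
            )
        self.high_res_branch = branch()
        self.low_res_branch = branch()
        # Fused features from both resolutions feed a small classifier head.
        self.classifier = nn.Linear(32 * 2, num_classes)

    def forward(self, patch_high: torch.Tensor, patch_low: torch.Tensor) -> torch.Tensor:
        f_high = self.high_res_branch(patch_high).flatten(1)
        f_low = self.low_res_branch(patch_low).flatten(1)
        return self.classifier(torch.cat([f_high, f_low], dim=1))  # logits: ADC vs NADC

if __name__ == "__main__":
    model = MultiResolutionCNN()
    # Dummy patches standing in for tiles cut from a whole slide image.
    high = torch.randn(4, 3, 128, 128)  # high-magnification view
    low = torch.randn(4, 3, 128, 128)   # low-magnification (wider-context) view
    logits = model(high, low)
    adc_probability = torch.softmax(logits, dim=1)[:, 1]
    print(adc_probability.shape)  # per-patch ADC probability, shape (4,)
```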

During the learning stage, the AI was trained using 368 whole slide pathology images[2] and their associated data. In the prediction stage, test 1 was conducted to determine the threshold for assessing a sample as ADC, based on the ADC probability output by the CNN, under conditions intended to reflect actual clinical settings. In test 2, the threshold from test 1 was used to evaluate performance by classifying new sample images as either ADC or NADC.
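As a rough illustration of the prediction stage, the sketch below classifies a whole image from its patch-level ADC probabilities: if any area exceeds the chosen threshold, the image is called ADC, otherwise NADC. The "any patch above threshold" decision rule and the function name classify_slide are assumptions made for illustration only, not the published decision rule.

```python
# Hedged sketch: image-level ADC/NADC call from patch-level ADC probabilities.
from typing import Sequence

def classify_slide(patch_probabilities: Sequence[float], threshold: float) -> str:
    """Return 'ADC' if any patch-level ADC probability exceeds the threshold."""
    return "ADC" if any(p > threshold for p in patch_probabilities) else "NADC"

# Example: a slide whose highest patch probability is 0.92 is flagged as ADC.
print(classify_slide([0.05, 0.12, 0.92, 0.40], threshold=0.8))  # -> ADC
print(classify_slide([0.05, 0.12, 0.30], threshold=0.8))        # -> NADC
```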

Test 1 used the CNN to estimate the ADC probability for 786 sample images (297 ADC and 489 NADC) that had not been used for learning. ADC tissue areas with probabilities exceeding the threshold were identified, and these results were used to classify images as ADC or NADC (Figure 2). As the threshold was varied, the images were reclassified as ADC or NADC, and the sensitivity[3] and specificity[4] were calculated to generate a receiver operating characteristic (ROC) curve (Figure 3).


Figure 2: Categorical image segmentation and image categorization


Figure 3: ROC curve
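The sketch below shows, with synthetic probabilities and scikit-learn's roc_curve, how a threshold could be selected from the ROC curve so that all ADC samples remain positive (100% sensitivity) while specificity is maximized, which is the operating point described for test 1. The scores and the exact selection rule are assumptions; only the sample counts (297 ADC, 489 NADC) come from the release.

```python
# Hedged sketch: pick the ROC operating point with 100% sensitivity and the
# highest specificity, using synthetic ADC probabilities.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
# Synthetic slide-level ADC probabilities: 1 = ADC (positive), 0 = NADC (negative).
y_true = np.concatenate([np.ones(297), np.zeros(489)])
y_score = np.concatenate([rng.uniform(0.4, 1.0, 297), rng.uniform(0.0, 0.9, 489)])

fpr, tpr, thresholds = roc_curve(y_true, y_score)

# Keep only operating points with 100% sensitivity, then take the one with the
# lowest false positive rate (i.e. the highest specificity).
full_sensitivity = tpr >= 1.0
best = np.argmin(fpr[full_sensitivity])
chosen_threshold = thresholds[full_sensitivity][best]
specificity = 1.0 - fpr[full_sensitivity][best]
print(f"threshold={chosen_threshold:.3f}, sensitivity=100%, specificity={specificity:.1%}")
```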

Results

When the threshold was set to assess all 297 ADC samples as positive (100% sensitivity), 225 out of the 489 NADC samples were assessed as negative. This represents 100% sensitivity (297 out of 297) and 46% specificity (225 out of 489) (Table 1).

Table 1: Results of Test 1

                               Predicted positive          Predicted negative
Ground truth positive (297)    297 (Sensitivity: 100%)     0 (FNR: 0.0%)
Ground truth negative (489)    264 (FPR: 54.0%)            225 (Specificity: 46.0%)
Total                          561                         225

In test 2, using the threshold set in test 1, a final evaluation was made on 140 new sample images (67 samples of ADC and 73 samples of NADC). As a result, all 67 ADC samples were assessed as positive, and 37 out of the 73 NADC samples were assessed as negative. This represents 100% sensitivity (67 out of 67) and 50.7% specificity (37 out of 73) (Table 2).

Table 2: Results of Test 2

                               Predicted positive          Predicted negative
Ground truth positive (67)     67 (Sensitivity: 100%)      0 (FNR: 0.0%)
Ground truth negative (73)     36 (FPR: 49.3%)             37 (Specificity: 50.7%)
Total                          103                         37
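For readers who want to reproduce the percentages in Tables 1 and 2, the short sketch below derives sensitivity, specificity, false positive rate, and false negative rate from the confusion-matrix counts reported above. The helper function summarize is hypothetical; the counts are those stated in the tables.

```python
# Recompute the reported rates from the confusion-matrix counts in Tables 1 and 2.
def summarize(tp: int, fn: int, fp: int, tn: int) -> dict:
    return {
        "sensitivity": tp / (tp + fn),          # positives correctly called positive
        "specificity": tn / (tn + fp),          # negatives correctly called negative
        "false_positive_rate": fp / (tn + fp),
        "false_negative_rate": fn / (tp + fn),
    }

print(summarize(tp=297, fn=0, fp=264, tn=225))  # Test 1: 100% sensitivity, 46.0% specificity
print(summarize(tp=67, fn=0, fp=36, tn=37))     # Test 2: 100% sensitivity, 50.7% specificity
```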

Future Applications

AI-based CAD software with a low false negative rate[5] can help pathologists detect positive samples. Such software has the potential to reduce duplicated effort in pathologists' workloads and to further improve the accuracy of pathology diagnosis of gastric biopsies, of which over four million are performed annually in Japan, by screening out negative samples and helping prevent positive samples from being overlooked.

Joint Research Program Summary

  • Olympus developed proprietary deep learning technology suited to analyzing pathology images.
  • 368 whole slide images obtained from gastric biopsies were used to train the AI software to assist in diagnosis. The specimens are kept at the Kure Medical Center and Chugoku Cancer Center and are accompanied by accurate, detailed diagnostic information.
  • In test 1, 786 samples (297 samples of adenocarcinoma (ADC) and 489 samples of non-adenocarcinoma (NADC)) were examined. The threshold was set to assess all ADC samples as positive (100% sensitivity). Using this threshold, 225 out of the 489 NADC samples were assessed as negative.
  • Based on the test 1 threshold, an additional 140 samples (67 of ADC and 73 of NADC) were examined in test 2. As a result, all 67 ADC samples were assessed positive, and 37 out of 73 NADC samples were assessed negative [100% sensitivity (67/67), 50.7% specificity (37/73)].
  • The CAD solution using AI demonstrated a low false negative rate. These results show the software’s potential to screen out negative samples and flag positive samples, helping reduce duplication in the workload of clinical pathologists.

Additional Background

In addition to conventional morphological diagnosis using an H&E stain, molecular pathology and functional diagnosis have become increasingly important for determining a patient's optimal cancer treatment plan, including which drugs will be most effective. According to the Ministry of Health, Labour and Welfare, the number of pathological diagnoses in Japan increased by a factor of approximately 2.2 between 2005 and 2015, from 2,143,452 to 4,762,188[6]. The number of times immunostaining was used to help determine a patient's cancer treatment plan also increased significantly, by a factor of approximately 2.8, from 151,248 to 426,276. Meanwhile, a chronic shortage of pathologists persists, caused in part by an aging workforce. These factors prompted Olympus to participate in the joint research program described above.

[1] A network structure widely used in deep learning for image analysis. This structure learns the features of input data effectively.

[2] A technique whereby pathology specimens on glass slides are digitally imaged in their entirety and displayed for viewing on a monitor.

[3] The proportion of positive samples that are correctly diagnosed as positive.

[4] The proportion of negative samples that are correctly diagnosed as negative.

[5] The proportion of positive samples that are incorrectly assessed as negative.

[6] Source: The Statistics of Medical Care Activities in Public Health Insurance, Director-General for Statistics and Information Policy, Ministry of Health, Labour and Welfare.
