Study finds Google system could improve breast cancer detection
Last updated on: 02 January 2020, 08:35 am
CHICAGO (Reuters) – A Google artificial intelligence system proved as good as expert radiologists at detecting which women had breast cancer based on screening mammograms and showed promise at reducing errors, researchers in the United States and Britain reported.
The study, published in the journal Nature on Wednesday, is the latest to show that artificial intelligence (AI) has the potential to improve the accuracy of screening for breast cancer, which affects one in eight women globally.
Radiologists miss about 20% of breast cancers in mammograms, the American Cancer Society says, and half of all women who get the screenings over a 10-year period have a false positive result.
The findings, from a study developed with Alphabet Inc’s DeepMind AI unit, which merged with Google Health in September, represent a major advance in the potential for the early detection of breast cancer, said Mozziyar Etemadi, one of the study’s co-authors, from Northwestern Medicine in Chicago.
The team, which included researchers at Imperial College London and Britain’s National Health Service, trained the system to identify breast cancers on tens of thousands of mammograms.
They then compared the system’s performance with the actual results from a set of 25,856 mammograms in the United Kingdom and 3,097 from the United States.
The study showed the AI system could identify cancers with a similar degree of accuracy to expert radiologists, while reducing the number of false positive results by 5.7% in the U.S.-based group and by 1.2% in the British-based group.
It also cut the number of false negatives, where tests are wrongly classified as normal, by 9.4% in the U.S. group and by 2.7% in the British group.
These differences reflect the ways in which mammograms are read. In the United States, only one radiologist reads the results and the tests are done every one to two years. In Britain, the tests are done every three years, and each is read by two radiologists. When they disagree, a third is consulted.
‘SUBTLE CUES’
In a separate test, the group pitted the AI system against six radiologists and found it outperformed them at accurately detecting breast cancers.
Connie Lehman, chief of the breast imaging department at Harvard’s Massachusetts General Hospital, said the results are in line with findings from several groups using AI to improve cancer detection in mammograms, including her own work.
The notion of using computers to improve cancer diagnostics is decades old, and computer-aided detection (CAD) systems are commonplace in mammography clinics, yet CAD programs have not improved performance in clinical practice.
The issue, Lehman said, is that current CAD programs were trained to identify things human radiologists can see, whereas with AI, computers learn to spot cancers based on the actual results of thousands of mammograms.
This has the potential to “exceed human capacity to identify subtle cues that the human eye and brain aren’t able to perceive,” Lehman added.
Although computers have not been “super helpful” so far, “what we’ve shown at least in tens of thousands of mammograms is the tool can actually make a very well-informed decision,” Etemadi said.
The study has some limitations. Most of the tests were done using the same type of imaging equipment, and the U.S. group contained a high proportion of patients with confirmed breast cancers.
Crucially, the team has yet to show that the tool improves patient care, said Dr. Lisa Watanabe, chief medical officer of CureMetrix, whose AI mammogram program won U.S. approval last year.
“AI software is only helpful if it actually moves the dial for the radiologist,” she said.
Etemadi agreed that those studies are needed, as is regulatory approval, a process that could take several years.