Artificial intelligence (AI) models that evaluate medical images have the potential to speed up cancer diagnoses and improve their accuracy, but they may also be vulnerable to cyberattacks. In a new study, researchers simulated an attack that falsified mammogram images, fooling both an AI breast cancer diagnosis model and experienced breast imaging radiologists.
The study, published in Nature Communications, brings attention to a potential safety issue for medical AI known as “adversarial attacks,” which seek to alter images or other inputs to make models arrive at incorrect conclusions.
“What we want to show with this study is that this type of attack is possible, and it could lead AI models to make the wrong diagnosis — which is a big patient safety issue,” said the senior author. “By understanding how AI models behave under adversarial attacks in medical contexts, we can start thinking about ways to make these models safer and more robust.”
AI-based image recognition technology for cancer detection has advanced rapidly in recent years, and several breast cancer models have U.S. Food and Drug Administration (FDA) approval. According to the author, these tools can rapidly screen mammogram images and identify those most likely to be cancerous, helping radiologists be more efficient and accurate.
But such technologies are also at risk from cyberthreats, such as adversarial attacks. Potential motivations for such attacks include insurance fraud by health care providers looking to boost revenue, or companies trying to adjust clinical trial outcomes in their favor. Adversarial attacks on medical images range from tiny manipulations that change the AI's decision but are imperceptible to the human eye, to more sophisticated versions that target sensitive contents of the image, such as cancerous regions, making them more likely to fool a human as well.
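The "tiny, imperceptible manipulation" end of that spectrum is often illustrated with a fast-gradient-sign (FGSM) step. Below is a minimal sketch of the idea using a toy logistic-regression "classifier" over flattened pixels; the random weights and 64-pixel "image" are stand-ins, not the study's actual model, and the real paper's attack is GAN-based rather than gradient-based.

```python
import numpy as np

# Toy stand-in for a mammogram classifier: logistic regression over
# flattened pixels. Weights are random and purely illustrative.
rng = np.random.default_rng(0)
w = rng.normal(size=64)

def predict_prob(x):
    """Probability that the image is 'positive' (cancerous)."""
    return 1.0 / (1.0 + np.exp(-(x @ w)))

def fgsm_perturb(x, y_true, eps=0.05):
    """Fast-gradient-sign step: a bounded, near-imperceptible change
    that pushes the model's score away from the true label."""
    p = predict_prob(x)
    grad = (p - y_true) * w          # gradient of log-loss w.r.t. the input
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

x = rng.uniform(size=64)             # a toy "image" with pixels in [0, 1]
x_adv = fgsm_perturb(x, y_true=1.0)  # ground truth: positive

# Each pixel moves by at most eps, yet the positive score drops.
print(predict_prob(x), predict_prob(x_adv))
```

Because every pixel changes by at most `eps`, the perturbed image looks essentially identical to a human while the model's confidence shifts in the attacker's chosen direction.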
To understand how AI would behave under this more complex type of adversarial attack, the team used mammogram images to develop a model for detecting breast cancer. First, the researchers trained a deep learning algorithm to distinguish cancerous and benign cases with more than 80% accuracy. Next, they developed a so-called “generative adversarial network” (GAN) — a computer program that generates false images by inserting or removing cancerous regions from negative or positive images, respectively, and then they tested how the model classified these adversarial images.
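Conceptually, the GAN learns to paint a plausible lesion into a negative image (or erase one from a positive image). A hand-coded sketch of that insertion operation on a toy image is below; the real method is a learned generative model, so this circular bright patch is only a crude stand-in for what the network produces.

```python
import numpy as np

def insert_lesion(img, center, radius=3, intensity=0.9):
    """Crude stand-in for the GAN's learned edit: paint a bright circular
    'lesion' into a negative image. A real GAN synthesizes realistic
    tissue texture instead of a flat disc."""
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2
    out = img.copy()
    out[mask] = intensity
    return out

neg = np.zeros((16, 16))                  # toy "negative" image
fake_pos = insert_lesion(neg, center=(8, 8))
print(neg.max(), fake_pos.max())          # 0.0 0.9
```

The removal direction works the same way in reverse: the generator inpaints healthy-looking tissue over the cancerous region of a positive image.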
Of 44 positive images made to look negative by the GAN, 42 were classified as negative by the model, and of 319 negative images made to look positive, 209 were classified as positive. In all, the model was fooled by 69.1% of the fake images.
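The headline fooling rate follows directly from the two counts quoted above:

```python
# Recompute the study's overall fooling rate from the reported counts.
fooled = 42 + 209      # fakes misclassified (positive->negative, negative->positive)
total = 44 + 319       # all GAN-generated fake images
rate = fooled / total
print(f"{rate:.1%}")   # 69.1%
```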
In the second part of the experiment, the researchers asked five human radiologists to distinguish whether mammogram images were real or fake. The experts identified the images' authenticity with accuracies ranging from 29% to 71%, depending on the individual.
“Certain fake images that fool AI may be easily spotted by radiologists. However, many of the adversarial images in this study not only fooled the model, but they also fooled experienced human readers,” said the author. “Such attacks could potentially be very harmful to patients if they lead to an incorrect cancer diagnosis.”
According to the author, the next step is developing ways to make AI models more robust to adversarial attacks.
“One direction that we are exploring is ‘adversarial training’ for the AI model,” the author explained. “This involves pre-generating adversarial images and teaching the model that these images are manipulated.”
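The adversarial-training idea the author describes amounts to building a labeled dataset of pre-generated manipulated images alongside clean ones. A minimal sketch is below; the 8-pixel toy "images" and the bounded random tweak standing in for a real gradient- or GAN-based attack are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: 20 clean 8-pixel "images" and adversarial copies made by
# a bounded random tweak (a real attack would be gradient- or GAN-based).
X_clean = rng.uniform(size=(20, 8))
X_adv = np.clip(
    X_clean + 0.05 * rng.choice([-1.0, 1.0], size=X_clean.shape), 0.0, 1.0
)

# Adversarial training data as the author describes it: pre-generated
# manipulated images, labeled so the model can learn to flag tampering.
X_train = np.vstack([X_clean, X_adv])
is_manipulated = np.concatenate([np.zeros(len(X_clean)), np.ones(len(X_adv))])

print(X_train.shape, int(is_manipulated.sum()))   # (40, 8) 20
```

A model trained on `(X_train, is_manipulated)` learns to detect tampering; alternatively, the adversarial copies can keep their original diagnostic labels so the classifier itself becomes robust to the perturbations.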
With the prospect of AI being introduced into medical infrastructure, the author said that cybersecurity education is also important, to ensure that hospital technology systems and personnel are aware of potential threats and have technical solutions in place to protect patient data and block malware.
“We hope that this research gets people thinking about medical AI model safety and what we can do to defend against potential attacks, ensuring AI systems function safely to improve patient care,” the author added.
https://www.nature.com/articles/s41467-021-27577-x
Cancer-spotting AI and human experts can be fooled by image-tampering attacks