Cancer-spotting AI and human experts can be fooled by image-tampering attacks

Artificial intelligence (AI) models that evaluate medical images have the potential to speed up cancer diagnosis and improve its accuracy, but they may also be vulnerable to cyberattacks. In a new study, researchers simulated an attack that falsified mammogram images, fooling both an AI breast cancer diagnosis model and expert breast imaging radiologists.
The study, published in Nature Communications, brings attention to a potential safety issue for medical AI known as “adversarial attacks,” which seek to alter images or other inputs to make models arrive at incorrect conclusions.
“What we want to show with this study is that this type of attack is possible, and it could lead AI models to make the wrong diagnosis — which is a big patient safety issue,” said the senior author. “By understanding how AI models behave under adversarial attacks in medical contexts, we can start thinking about ways to make these models safer and more robust.”
AI-based image recognition technology for cancer detection has advanced rapidly in recent years, and several breast cancer models have U.S. Food and Drug Administration (FDA) approval. According to the author, these tools can rapidly screen mammogram images and identify those most likely to be cancerous, helping radiologists be more efficient and accurate.
But such technologies are also at risk from cyberthreats such as adversarial attacks. Potential motivations include health care providers committing insurance fraud to boost revenue, or companies trying to skew clinical trial outcomes in their favor. Adversarial attacks on medical images range from tiny manipulations that flip the AI's decision but are imperceptible to the human eye (illustrated in the sketch below) to more sophisticated versions that target sensitive regions of the image, such as cancerous areas, making them more likely to fool a human as well.
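The imperceptible end of this spectrum is often illustrated in the research literature with the fast gradient sign method (FGSM), which nudges every pixel slightly in the direction that most increases the model's loss. Below is a minimal PyTorch sketch of that idea, assuming a generic image classifier; it illustrates the attack class, not the method used in this study:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Fast gradient sign method: nudge each pixel by +/- epsilon in the
    direction that increases the classifier's loss. `image` is a
    (1, C, H, W) tensor in [0, 1]; `label` is a (1,) tensor of class
    indices. Small epsilon keeps the change invisible to the human eye."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that most increases the loss
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```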
To understand how AI would behave under this more complex type of adversarial attack, the team used mammogram images to develop a breast cancer detection model. First, the researchers trained a deep learning algorithm to distinguish cancerous from benign cases with more than 80% accuracy. Next, they developed a so-called "generative adversarial network" (GAN), a program that generates false images by inserting cancerous regions into negative images or removing them from positive ones, and then tested how the model classified these adversarial images; a schematic of that evaluation step follows below.
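In outline, the experiment pairs the diagnostic classifier with the GAN's generator and measures how often the edited images flip the diagnosis. A schematic sketch of that evaluation step, with hypothetical names (`classifier`, `generator`); the paper's actual architectures and preprocessing differ:

```python
import torch

def evaluate_attack(classifier, generator, images, labels):
    """Feed GAN-edited mammograms to the diagnostic classifier and report
    the fraction whose predicted label flips away from the true label."""
    fooled = 0
    with torch.no_grad():
        for image, label in zip(images, labels):
            # The generator inserts or removes cancerous-looking regions,
            # turning a positive image negative or vice versa.
            fake = generator(image.unsqueeze(0))        # add batch dim
            prediction = classifier(fake).argmax(dim=1).item()
            fooled += int(prediction != label)          # fooled if flipped
    return fooled / len(images)
```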
Of 44 positive images made to look negative by the GAN, 42 were classified as negative by the model, and of 319 negative images made to look positive, 209 were classified as positive. In all, the model was fooled by 251 of the 363 fake images, or 69.1%.
In the second part of the experiment, the researchers asked five human radiologists to judge whether mammogram images were real or fake. Depending on the individual, the experts identified the images' authenticity with between 29% and 71% accuracy.
“Certain fake images that fool AI may be easily spotted by radiologists. However, many of the adversarial images in this study not only fooled the model, but they also fooled experienced human readers,” said the author. “Such attacks could potentially be very harmful to patients if they lead to an incorrect cancer diagnosis.”
According to the author, the next step is developing ways to make AI models more robust to adversarial attacks.
“One direction that we are exploring is ‘adversarial training’ for the AI model,” the author explained. “This involves pre-generating adversarial images and teaching the model that these images are manipulated.”
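One way to read that idea is as training a detector that labels pre-generated adversarial images as manipulated and clean images as authentic. A minimal sketch of a single training step under that assumption (all names hypothetical, not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def train_manipulation_detector(detector, optimizer, clean_images, adv_images):
    """One step of the 'adversarial training' idea described above:
    pre-generated adversarial images are labeled manipulated (1) and
    clean images authentic (0), so the model learns to flag tampering."""
    detector.train()
    batch = torch.cat([clean_images, adv_images])
    labels = torch.cat([
        torch.zeros(len(clean_images), dtype=torch.long),  # authentic
        torch.ones(len(adv_images), dtype=torch.long),     # manipulated
    ])
    optimizer.zero_grad()
    loss = F.cross_entropy(detector(batch), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```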
With the prospect of AI being introduced into medical infrastructure, the author said that cybersecurity education is also important to ensure that hospital technology systems and personnel are aware of potential threats and have technical solutions in place to protect patient data and block malware.
“We hope that this research gets people thinking about medical AI model safety and what we can do to defend against potential attacks, ensuring AI systems function safely to improve patient care,” the author added.
https://www.nature.com/articles/s41467-021-27577-x