One Pixel Can Change the Diagnosis: Adversarial and Non-Adversarial Robustness and Uncertainty in Breast Ultrasound Classification Models

Noorul Sama Sahel

Co-Presenters: Individual Presentation

College: Hennings College of Science, Mathematics and Technology

Major: BS.COMPSCI/DATSCI

Faculty Research Mentor: Huang, Kuan  

Abstract:

Deep learning models are vulnerable to small, often imperceptible input perturbations that can lead to drastic misclassifications. In natural image domains, changing even a single pixel can fool strong classifiers, as demonstrated by the One-Pixel Attack and other adversarial example studies [1]. In the medical imaging domain, research has shown that adversarial attacks can compromise diagnostic systems across modalities, including chest X-ray, CT, and other clinical applications. While adversarial robustness is an active research area in general computer vision, its implications in medical imaging, and particularly in breast ultrasound (BUS) classification, remain underexplored. Furthermore, non-adversarial disturbances such as device-induced noise or data corruption may also degrade performance, yet their effect on model confidence has not been systematically studied in BUS. In this work, we present the first systematic investigation of pixel-level perturbations in BUS models, considering both adversarial perturbations (the One-Pixel Attack) and non-adversarial perturbations (One-Pixel Blackout), and examine their combined effects on classification performance and predictive uncertainty.
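As a concrete illustration of the two perturbation types named above, the sketch below applies a non-adversarial one-pixel blackout and a differential-evolution one-pixel attack (the standard search strategy behind the One-Pixel Attack) to a grayscale image, and scores predictions with a simple entropy-based uncertainty proxy. This is a minimal sketch, not the study's actual code: the classifier `predict_proba`, the image size, and the class count are all hypothetical stand-ins.

```python
# Minimal sketch of pixel-level perturbations for a BUS-style classifier.
# Hypothetical assumptions: a grayscale image in [0, 1], a model exposed as
# predict_proba(img) -> class probabilities, 128x128 inputs, 3 classes.
import numpy as np
from scipy.optimize import differential_evolution

H, W = 128, 128      # placeholder image size
N_CLASSES = 3        # e.g. normal / benign / malignant (placeholder)

def predict_proba(img: np.ndarray) -> np.ndarray:
    """Deterministic stand-in for a trained BUS classifier (hypothetical)."""
    rng = np.random.default_rng(int(img.sum() * 1e6) % (2**32))
    p = rng.random(N_CLASSES)
    return p / p.sum()

def predictive_entropy(p: np.ndarray) -> float:
    """Simple uncertainty proxy: Shannon entropy of the predicted distribution."""
    return float(-(p * np.log(p + 1e-12)).sum())

def one_pixel_blackout(img: np.ndarray, y: int, x: int) -> np.ndarray:
    """Non-adversarial perturbation: zero out a single pixel."""
    out = img.copy()
    out[y, x] = 0.0
    return out

def one_pixel_attack(img: np.ndarray, true_label: int, max_iter: int = 30):
    """Adversarial perturbation: search (row, col, value) with differential
    evolution to minimize the true-class probability (untargeted attack)."""
    def loss(z):
        y, x, v = int(z[0]), int(z[1]), z[2]
        perturbed = img.copy()
        perturbed[y, x] = v
        return predict_proba(perturbed)[true_label]

    bounds = [(0, H - 1), (0, W - 1), (0.0, 1.0)]
    res = differential_evolution(loss, bounds, maxiter=max_iter,
                                 popsize=10, tol=1e-5, seed=0)
    adv = img.copy()
    adv[int(res.x[0]), int(res.x[1])] = res.x[2]
    return adv, res.fun

if __name__ == "__main__":
    img = np.full((H, W), 0.5)
    base = predict_proba(img)
    label = int(base.argmax())
    blk = one_pixel_blackout(img, H // 2, W // 2)
    adv, p_true = one_pixel_attack(img, label)
    print("clean conf:", base[label], "entropy:", predictive_entropy(base))
    print("blackout conf:", predict_proba(blk)[label])
    print("attacked true-class conf:", p_true)
```

Comparing the true-class confidence and entropy across the clean, blacked-out, and attacked images is one way to measure the combined effect on performance and predictive uncertainty; the integer cast inside `loss` is the usual trick for running a continuous optimizer over discrete pixel coordinates.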
