Multimodal Breast Ultrasound Segmentation: Combining Visual and Clinical Data

Armando Mendez

Co-Presenters: Individual Presentation

College: The Dorothy and George Hennings College of Science, Mathematics and Technology

Major: Information Technology

Faculty Research Mentor: Kuan Huang

Abstract:

Breast cancer remains one of the leading causes of cancer-related deaths among women worldwide, underscoring the urgent need for early detection and precise diagnosis. Breast ultrasound (BUS) imaging is a vital technique for the early identification of breast cancer. In this study, we introduced a hybrid U-shaped network for the automated segmentation of breast lesions in BUS images. We also adopted a multimodal strategy that merges visual features from ultrasound images with contextual textual data, enhancing the model's understanding of tumor characteristics. We assessed several configurations, including different selections of clinical features such as tumor classification and BI-RADS scores. Our results showed that the proposed multimodal framework outperforms several transformer-based models on a public BUS image dataset, achieving notable improvements in segmentation accuracy. This research highlights the power of multimodal learning in medical image analysis and demonstrates the potential of transformer-based language models to enhance diagnostic tools for the early detection of breast cancer.
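
The abstract does not specify how the visual and clinical modalities are fused. As a purely illustrative sketch, the Python/PyTorch code below shows one common way such a fusion could work: clinical attributes (here, a categorical BI-RADS score) are embedded, projected, and concatenated with the bottleneck features of a small U-shaped segmentation network before decoding. All module names, layer sizes, and the concatenation-based fusion are assumptions for illustration, not the architecture described in this work.

# Illustrative sketch only: a tiny U-shaped segmentation network that fuses
# image features with an embedded clinical attribute (BI-RADS category) at the
# bottleneck. Names, sizes, and the fusion choice are hypothetical.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class MultimodalUNet(nn.Module):
    def __init__(self, n_birads_classes=7, clin_dim=32):
        super().__init__()
        # Image encoder (two levels, kept small for brevity).
        self.enc1 = conv_block(1, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)

        # Clinical branch: embed the BI-RADS category (0-6) into a vector.
        self.birads_embed = nn.Embedding(n_birads_classes, clin_dim)
        self.clin_proj = nn.Linear(clin_dim, 128)

        # Decoder: bottleneck (128) + projected clinical features (128) = 256.
        self.up2 = nn.ConvTranspose2d(256, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(128, 64)               # 64 (skip) + 64 (up)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(64, 32)                # 32 (skip) + 32 (up)
        self.head = nn.Conv2d(32, 1, kernel_size=1)   # binary lesion mask

    def forward(self, image, birads):
        # Encoder path with skip connections.
        s1 = self.enc1(image)                         # (B, 32, H, W)
        s2 = self.enc2(self.pool(s1))                 # (B, 64, H/2, W/2)
        b = self.bottleneck(self.pool(s2))            # (B, 128, H/4, W/4)

        # Broadcast the projected clinical vector over the spatial grid and
        # concatenate it with the bottleneck feature map.
        clin = self.clin_proj(self.birads_embed(birads))        # (B, 128)
        clin = clin[:, :, None, None].expand(-1, -1, b.shape[2], b.shape[3])
        fused = torch.cat([b, clin], dim=1)           # (B, 256, H/4, W/4)

        # Decoder path.
        d2 = self.dec2(torch.cat([self.up2(fused), s2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), s1], dim=1))
        return self.head(d1)                          # per-pixel lesion logits


if __name__ == "__main__":
    model = MultimodalUNet()
    img = torch.randn(2, 1, 128, 128)                 # grayscale BUS images
    birads = torch.tensor([3, 5])                     # BI-RADS category per image
    print(model(img, birads).shape)                   # torch.Size([2, 1, 128, 128])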
