Breast cancer remains a significant disease worldwide and is among the most dangerous cancers affecting women. Early detection is paramount to improving prognosis and treatment, and ultrasonography has emerged as a valuable diagnostic tool for breast cancer; however, accurate interpretation of ultrasound images requires expertise. To address these challenges, recent advances in computer vision, such as convolutional neural networks (CNNs) and vision transformers (ViTs) for medical image classification, promise to increase the accuracy and efficiency of breast cancer detection. In particular, transfer learning and fine-tuning techniques leverage pre-trained CNN models, while the self-attention mechanism of ViTs enables effective feature extraction and learning from limited annotated medical images. In this study, the Breast Ultrasound Images Dataset (BUSI), with three classes (normal, benign, and malignant), was used to classify breast cancer images. Additionally, Deep Convolutional Generative Adversarial Networks (DCGAN), together with several preprocessing and augmentation techniques, were applied to increase robustness and address data imbalance. The proposed AttentiveEfficientGANB3 (AEGANB3) framework combines a customized EfficientNetB3 model with a self-attention mechanism and achieved an impressive test accuracy of 98.01%. Finally, Gradient-weighted Class Activation Mapping (Grad-CAM) was used to visualize the model's decisions.
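The self-attention mechanism the abstract attaches to the CNN backbone can be sketched as scaled dot-product attention over the spatial positions of a feature map. This is a minimal NumPy illustration, not the paper's implementation: random matrices stand in for the learned projection weights, and the 10×10×1536 feature-map shape assumes EfficientNetB3 at its default 300×300 input resolution.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(feat, d_k=32, seed=0):
    """Scaled dot-product self-attention over the H*W positions of a
    CNN feature map of shape (H, W, C). The projection matrices wq, wk,
    wv are random stand-ins for weights that would be learned."""
    h, w, c = feat.shape
    x = feat.reshape(h * w, c)              # flatten positions into a sequence
    rng = np.random.default_rng(seed)
    wq, wk, wv = (rng.standard_normal((c, d_k)) * 0.02 for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    attn = softmax(q @ k.T / np.sqrt(d_k))  # (HW, HW) weights, rows sum to 1
    return (attn @ v).reshape(h, w, d_k)    # re-attend, restore spatial layout

# Hypothetical backbone output: EfficientNetB3's final feature map is
# roughly 10x10x1536 for a 300x300 input.
feat = np.random.default_rng(1).standard_normal((10, 10, 1536))
out = self_attention(feat)
print(out.shape)  # (10, 10, 32)
```

In the proposed framework, a block of this kind lets every spatial position of the ultrasound feature map weigh information from every other position before classification; here it is shown standalone only to make the mechanism concrete.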
Tạp chí khoa học Trường Đại học Cần Thơ (Can Tho University Journal of Science)