Enhancing Skin Cancer Detection Through Category Representation and Fusion of Pre-Trained Models

26 Pages Posted: 9 Feb 2025


Lingping Kong

affiliation not provided to SSRN

Juan D. Velasquez

University of Chile

Vaclav Snasel

VSB - Technical University of Ostrava

Millie Pant

Indian Institute of Technology (IIT), Roorkee

Jeng-Shyang Pan

Shandong University of Science and Technology

Jana Nowakova

affiliation not provided to SSRN

Abstract

The use of pre-trained models in medical image classification has gained significant attention due to their ability to handle complex datasets and improve accuracy. However, challenges such as domain-specific customization, interpretability, and computational efficiency remain critical, especially in high-stakes applications such as skin cancer detection. In this paper, we introduce a novel interpretability-assisted fine-tuning framework that leverages category representation to enhance both model accuracy and transparency. Using the widely known HAM10000 dataset, which includes seven imbalanced categories of skin cancer, we demonstrate that our method improves classification accuracy by 2.6% compared to standard pre-trained models. In addition to accuracy, we also achieve significant improvements in interpretability, with our category representation framework providing more understandable insights into the model's decision-making process. Key metrics, such as precision and recall, show enhanced performance, particularly for underrepresented skin cancer types such as Melanocytic Nevi (F1 score of 0.94) and Actinic Keratosis (F1 score of 0.66). Furthermore, the top-3 prediction accuracy of the proposed model reaches 98.21%, which is highly significant for medical decision making. This observation underscores the value of top-n predictions, especially in challenging cases, to support more informed and accurate decisions. The proposed fusion framework not only enhances predictive accuracy but also offers an interpretable model output that can assist clinicians in making informed decisions. This makes our approach particularly relevant in medical applications, where both accuracy and transparency are crucial. Our results highlight the potential of fusing pre-trained models with category representation techniques to bridge the gap between performance and interpretability in AI-driven healthcare solutions.
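The top-3 accuracy reported above (98.21%) counts a prediction as correct when the true class appears among the model's three highest-scoring classes. A minimal sketch of that metric, assuming per-sample class scores are available as a NumPy array (the paper's actual model and pipeline are not shown here):

```python
import numpy as np

def top_n_accuracy(scores: np.ndarray, labels: np.ndarray, n: int = 3) -> float:
    """Fraction of samples whose true label is among the n highest-scoring classes.

    scores: (num_samples, num_classes) array of class scores or probabilities.
    labels: (num_samples,) array of integer class labels.
    """
    # Indices of the n largest scores per row (order within the n is irrelevant).
    top_n = np.argpartition(scores, -n, axis=1)[:, -n:]
    hits = (top_n == labels[:, None]).any(axis=1)
    return float(hits.mean())

# Toy example with 2 samples over 3 classes (HAM10000 itself has 7 classes).
scores = np.array([
    [0.1, 0.7, 0.2],   # true class 1 is ranked first
    [0.6, 0.3, 0.1],   # true class 2 is ranked last, outside the top 2
])
labels = np.array([1, 2])
acc2 = top_n_accuracy(scores, labels, n=2)  # 0.5: one hit, one miss
```

For clinical use, a high top-n accuracy means a clinician reviewing a short ranked shortlist will almost always find the correct diagnosis in it, even when the single top prediction is wrong.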

Keywords: Sparse representation; Skin cancer data; Model interpretability; Pre-trained model

Suggested Citation

Kong, Lingping and Velasquez, Juan D. and Snasel, Vaclav and Pant, Millie and Pan, Jeng-Shyang and Nowakova, Jana, Enhancing Skin Cancer Detection Through Category Representation and Fusion of Pre-Trained Models. Available at SSRN: https://ssrn.com/abstract=5119282 or http://dx.doi.org/10.2139/ssrn.5119282

Lingping Kong

affiliation not provided to SSRN ( email )

No Address Available

Juan D. Velasquez

University of Chile ( email )

Vaclav Snasel (Contact Author)

VSB - Technical University of Ostrava ( email )

17. listopadu 2172/15
Ostrava, 708 00
Czech Republic

Millie Pant

Indian Institute of Technology (IIT), Roorkee ( email )

DOMS
Indian Institute of Technology Roorkee
Roorkee
India

Jeng-Shyang Pan

Shandong University of Science and Technology ( email )

Qingdao
China

Jana Nowakova

affiliation not provided to SSRN ( email )

No Address Available

