Artificial intelligence (AI) can augment human expertise by processing vast datasets to uncover patterns that are otherwise unobservable [1, 2, 3]. Explainable artificial intelligence (XAI) is a critical advancement in the field of AI, particularly in applications such as medicine. Its value lies in its ability to enhance trust, improve outcomes, and facilitate better integration of AI into medical workflows.

Deep learning algorithms based on artificial neural networks often operate as “black boxes” whose decision-making processes are opaque. In medicine, this lack of transparency can undermine clinicians’ trust in AI recommendations. XAI provides explanations for AI-driven decisions, allowing clinicians to understand the “why” and “how” behind predictions [4]. Furthermore, XAI allows clinicians to verify and validate AI-generated results: by understanding the reasoning behind a diagnosis or recommendation, physicians can identify potential errors or confirm findings. This process aligns with medical standards, under which every decision requires justification. For example, in radiology, an XAI tool detecting lung nodules might flag specific image regions and provide reasoning such as “based on shape, irregularity, and texture contrast”.
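To make the radiology example concrete, the sketch below shows one common way such a “flagged area” can be produced: a Grad-CAM-style saliency map computed from a CNN’s feature maps. The tiny PyTorch detector, its two classes, and the random input are hypothetical stand-ins, not the tool described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal CNN standing in for a nodule detector (hypothetical architecture).
class TinyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16, 2)  # e.g., benign vs. suspicious

    def forward(self, x):
        fmap = self.features(x)         # (B, 16, H, W) feature maps
        pooled = fmap.mean(dim=(2, 3))  # global average pooling
        return self.head(pooled), fmap

def grad_cam(model, image, target_class):
    """Return a heatmap highlighting regions that drive the prediction."""
    logits, fmap = model(image)
    fmap.retain_grad()                   # keep gradients of the feature maps
    logits[0, target_class].backward()
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)    # channel importance
    cam = F.relu((weights * fmap).sum(dim=1)).squeeze(0)  # weighted sum
    return cam / (cam.max() + 1e-8)                       # normalize to [0, 1]

model = TinyDetector().eval()
scan = torch.randn(1, 1, 64, 64)  # placeholder for a scan patch
logits, _ = model(scan)
cls = logits.argmax(dim=1).item()
heatmap = grad_cam(model, scan, cls)
print(f"predicted class {cls}; most salient pixel at "
      f"{divmod(heatmap.argmax().item(), heatmap.shape[1])}")
```

Overlaying the normalized heatmap on the original scan yields the highlighted region a clinician can inspect, turning a bare prediction into a reviewable claim.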

In addition, some regulatory frameworks, such as the General Data Protection Regulation (GDPR) in the European Union (EU), require a degree of explainability from automated decision-making systems, especially in critical domains like medicine. XAI helps meet these requirements by providing clear, interpretable outputs.

Recently, a research group comprising professors and researchers from Nazarbayev University, the University of Thessaly, and Nanyang Technological University demonstrated the potential of an XAI methodology applied to breast cancer cases. In particular, the analysis concerned thermography: the detection of breast tumors through non-invasive, cost-effective infrared imaging, which reduces dependency on radiation-intensive procedures [5, 6, 7]. Thermography works by capturing temperature variations on the skin’s surface, which can indicate underlying tumors. However, interpreting these images without AI requires a high level of expertise, making results variable and prone to human error.
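As a toy illustration of the kind of quantity a human reader (or a downstream model) extracts from a thermogram, the following sketch computes a simple left–right temperature asymmetry on synthetic data. Real pipelines use calibrated infrared cameras and proper breast segmentation; the warm-region offset below is a made-up value.

```python
import numpy as np

# Illustrative only: a thermogram as a 2D array of skin temperatures (°C).
rng = np.random.default_rng(0)
thermogram = rng.normal(loc=33.0, scale=0.3, size=(240, 320))
thermogram[100:140, 220:260] += 1.2  # hypothetical warm region over a tumor

# Split into left/right halves and compare: a persistent contralateral
# temperature difference is one cue that an underlying tumor may be present.
left, right = thermogram[:, :160], thermogram[:, 160:]
asymmetry = abs(left.mean() - right.mean())
hotspot = thermogram.max() - np.median(thermogram)

print(f"mean asymmetry: {asymmetry:.2f} °C, hotspot excess: {hotspot:.2f} °C")
# Without AI, a clinician judges such patterns visually, which is where
# reader expertise and inter-observer variability enter.
```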

To address this, the proposed XAI method employs a combination of Bayesian networks (BN) and convolutional neural networks (CNN) to make the diagnosis both accurate and interpretable [5]. This hybrid XAI approach allows for reliable detection while also introducing explainability into the process. First, a CNN together with an XAI algorithm identifies the critical parts of each image; factors (computational and statistical quantities) are then estimated from these critical parts. Finally, these factors are included in the final BN together with medical-record data. After training the BN, the result is a fully interpretable diagnostic model [6].
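A minimal sketch of the final stage, assuming the pgmpy library: discretized image-derived factors and one medical-record variable feed a small Bayesian network whose conditional probability tables can be read off directly, which is what makes the resulting diagnosis inspectable. The variable names, states, network structure, and six-row dataset are illustrative assumptions, not the ones reported in [5, 6].

```python
import pandas as pd
from pgmpy.models import BayesianNetwork
from pgmpy.estimators import MaximumLikelihoodEstimator
from pgmpy.inference import VariableElimination

# Hypothetical discretized inputs: two image-derived factors estimated from
# the CNN-highlighted regions, plus one medical-record variable.
data = pd.DataFrame({
    "temp_asymmetry": ["high", "high", "low", "low", "high", "low"],
    "texture_contrast": ["high", "low", "low", "high", "high", "low"],
    "family_history": ["yes", "no", "no", "yes", "yes", "no"],
    "diagnosis": ["tumor", "tumor", "healthy", "tumor", "tumor", "healthy"],
})

# Each factor is a parent of the diagnosis node; this structure is
# illustrative, not the one reported in the cited work.
bn = BayesianNetwork([
    ("temp_asymmetry", "diagnosis"),
    ("texture_contrast", "diagnosis"),
    ("family_history", "diagnosis"),
])
bn.fit(data, estimator=MaximumLikelihoodEstimator)

# Every conditional probability table in the trained BN is explicit and
# inspectable, so the posterior below can be traced back to its inputs.
posterior = VariableElimination(bn).query(
    variables=["diagnosis"],
    evidence={"temp_asymmetry": "high", "family_history": "yes"},
)
print(posterior)
```

Because every edge and table in the BN is explicit, a clinician can trace exactly how each factor shifted the posterior, in contrast to the raw CNN output.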

This innovation not only reduces the chance of misdiagnosis but also aligns with the World Health Organization’s (WHO) sustainable health goals by making diagnostic methods more accessible and safer for all women, regardless of age or geographic location. XAI is indispensable in medicine, where decisions affect human lives. By making AI transparent, trustworthy, and actionable, XAI enables safer, more reliable, and more equitable healthcare. Its integration into medical workflows represents a pivotal step toward truly intelligent healthcare solutions. Challenges remain, however, such as balancing explainability with accuracy and developing universally accepted methods of interpretation.

Overall, this letter offers a valuable advancement in the field of medical AI by combining interpretability with accuracy.

Author Contributions

Conceptualization, YZ, VZ and EYKN; methodology, VZ, AM, YZ, EYKN; formal analysis, VZ, YZ; investigation, AM; writing original draft preparation, AM, VZ, EYKN, YZ; writing review and editing, AM, YZ, VZ and EYKN; funding acquisition, AM, YZ. All authors read and approved the final manuscript. All authors have participated sufficiently in the work and agreed to be accountable for all aspects of the work.

Ethics Approval and Consent to Participate

Not applicable.

Acknowledgment

The authors wish to acknowledge their respective institutions for their support in literature research and data collection. The authors would like to further thank Nurduman Aidossov and Anna Midlenko (from Nazarbayev University) for their strong commitment as a team to this AI-based breast cancer diagnostic tool project.

Funding

This research is supported by the Ministry of Science and Higher Education of the Republic of Kazakhstan, grant AP19678197 (integrating physics-informed neural network, Bayesian and convolutional neural networks for early breast cancer detection using thermography).

Conflict of Interest

The authors declare no conflict of interest. Eddie Yin Kwee Ng is the Executive Editor of IMR Press. We declare that Eddie Yin Kwee Ng had no involvement in the review of this article and had no access to information regarding its review. Full responsibility for the editorial process for this article was delegated to Michael H. Dahan.

References

Publisher’s Note: IMR Press stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.