Abstract: White blood cells (WBCs) are blood cells that fight infections and diseases as part of the immune system; they are also known as “defender cells.” However, an imbalance in the number of WBCs in the blood can be hazardous. Leukemia, the most common blood cancer, is caused by an overabundance of WBCs in the immune system. Acute lymphoblastic leukemia (ALL) usually occurs when the bone marrow produces many immature WBCs that destroy healthy cells. People of all ages, including children and adolescents, can be affected by ALL. The rapid proliferation of atypical lymphocyte cells reduces the production of new blood cells and increases patients’ risk of death. Early and precise detection therefore enables better therapy and a higher survival probability in the case of leukemia. However, diagnosing ALL is time-consuming and complicated, and manual analysis is expensive, subjective, and error-prone. Reliable and accurate discrimination between normal and malignant cells is thus crucial, and automatic detection using computer-aided diagnostic models can help doctors detect leukemia early. The entire procedure can be automated with image processing techniques, reducing physicians’ workload and increasing diagnostic accuracy. Deep learning (DL) has recently proven highly beneficial to medical research, opening new avenues for diagnostic techniques in the healthcare domain. Before this potential can be realized, however, the community must overcome DL’s limited explainability: because artificial intelligence (AI) models operate as black boxes, their decisions lack accountability and trust. Explainable artificial intelligence (XAI) addresses this problem by interpreting the predictions of AI systems. This study focuses on leukemia, specifically ALL. The proposed strategy automates the recognition of ALL by applying different transfer learning models to classify blood cell images, and it uses local interpretable model-agnostic explanations (LIME) to explain the basis of each classification, ensuring validity and reliability. The proposed method achieved 98.38% accuracy with the InceptionV3 model. Experimental comparisons were made against other transfer learning models, including ResNet101V2, VGG19, and InceptionResNetV2, and the predictions were verified with the LIME algorithm for XAI; the proposed method performed best. The results and their reliability demonstrate that the method is well suited to identifying ALL and can assist medical examiners.
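As context for the pipeline the abstract describes, the following is a minimal sketch, not the authors' code, of the two stages named above: transfer learning with a pretrained InceptionV3 backbone to classify cell images, followed by a LIME explanation of one prediction. The class count, image size, classifier head, sampling budget, and the placeholder input image are all assumptions for illustration; the training datasets are omitted.

```python
# Minimal sketch (assumed setup, not the authors' code): InceptionV3
# transfer learning + a LIME explanation of a single prediction.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models
from lime import lime_image

NUM_CLASSES = 2        # assumption: ALL vs. normal cells
IMG_SIZE = (299, 299)  # InceptionV3's native input resolution

# Transfer learning: freeze the ImageNet backbone, train only a new head.
base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=IMG_SIZE + (3,))
base.trainable = False
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),  # assumed head size
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets assumed

# XAI step: LIME perturbs superpixels of one image and fits a local
# surrogate, highlighting the regions that drove the prediction.
def predict_fn(images):
    return model.predict(np.asarray(images), verbose=0)

# Placeholder standing in for one preprocessed blood-smear image.
image = np.random.rand(*IMG_SIZE, 3)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=1, num_samples=100)  # small budget for demo
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5)
# `mask` marks the superpixels supporting the predicted class.
```

In practice the same head-swapping pattern applies to the other backbones compared in the study (ResNet101V2, VGG19, InceptionResNetV2), so the classifiers differ only in the frozen feature extractor while the LIME step stays unchanged.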