R.P. Raghupathi

Bio:
Dr. R.P. Raghupathi, drawing on a background in law, is collaborating with Prof. Saharia and Prof. Ren on a trilateral model of responsible AI encompassing design, ethics, and regulation. In addition, Dr. Raghupathi conducts empirical work on the reproducibility of AI and on explainability in deep learning in the healthcare and law domains. Dr. Raghupathi’s long-term research has focused on the application of AI and deep learning to legal reasoning. Dr. Raghupathi also applies machine-learning text analytics to legal cases in AI litigation to gain insight into the key issues surrounding responsible AI.

Abstract:
Explainability in deep learning for healthcare is often portrayed as a cure-all for the “black-box” problem. However, universal transparency can create confusion, bias, and cognitive overload. This paper asks: Is explainability required for all AI in healthcare? Integrating systemic concepts, the analysis argues that explainability is a context-dependent systemic property, essential where AI intersects with clinical reasoning, ethics, or accountability, but unnecessary for routine or axiomatic applications such as scheduling, signal normalization, or resource optimization. Through the metaphors of the panacea and Pandora’s box, the paper shows that explainability becomes a panacea when it is proportionate and embedded within socio-technical feedback loops, but a Pandora’s box when it is imposed universally or superficially. Limitations of current explainable-AI techniques (saliency maps, LIME, and SHAP) are examined. Explainability is reframed as a risk-proportionate systemic capability: adaptive, contextual, and required only where human judgment and patient safety converge.
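
To make one such limitation concrete, the following minimal sketch (illustrative only, not code from the paper; the toy data, model, and feature names are assumptions) shows how LIME, which fits a local surrogate model on randomly perturbed samples, can assign different feature weights to the same prediction across runs:

    # Minimal sketch: LIME's sampling-based instability on a toy classifier.
    # Assumptions: scikit-learn and lime are installed; data are synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from lime.lime_tabular import LimeTabularExplainer

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))                  # toy tabular data
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # simple ground truth
    model = RandomForestClassifier(random_state=0).fit(X, y)

    explainer = LimeTabularExplainer(
        X, feature_names=["f0", "f1", "f2", "f3"], mode="classification"
    )

    # Explaining the same instance twice: because LIME fits its local
    # surrogate on freshly drawn random perturbations, the reported
    # feature weights can differ between runs for an identical prediction.
    for run in range(2):
        exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
        print(f"run {run}:", exp.as_list())

Analogous stability and faithfulness concerns have been documented for saliency maps and for sampling-based SHAP estimates; on the paper’s account, such limitations matter most where explanations inform clinical or legal judgment.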