Artificial intelligence (AI) has demonstrated significant potential in precision medicine by integrating diverse data sources, such as genomic data and electronic health records (EHR), to generate risk predictions and aid in drug and therapy screening. However, the opaque nature of state-of-the-art AI technologies (e.g., deep learning) poses substantial challenges to the transparency of these prediction models. In response, explainable AI (XAI), which aims to elucidate AI decisions in ways comprehensible to humans, has gained traction in the technical domain as an ethical imperative. While practitioners offer technical solutions to "open the black box," the social and power dynamics that drive or hinder explainability are rarely examined. Specifically, the power dynamics at play during the development and deployment of XAI must be understood within the sociotechnical network: how and by whom evaluation metrics and criteria are determined, to whom XAI is accountable, and how much power various practitioners (e.g., developers and clinicians) hold. In this paper, we build upon existing ethical and sociological scholarship on XAI and precision medicine, focusing on the technical embeddedness of power dynamics in current XAI techniques. We first propose an analytical framework for inspecting power dynamics during XAI development. We then examine two empirical cases of XAI use in the medical domain, dissecting the power dynamics and sociopolitical assumptions embedded in existing XAI techniques. Our aim is to ground XAI techniques in sociotechnical analysis so as to inform technical development in AI- and data-driven genomic medicine, empowering and enabling the underpowered to achieve equitable clinical outcomes.
Authors: Elise Li Zheng, Columbia University; Weina Jin, Simon Fraser University; Ghassan Hamarneh, Simon Fraser University; Sandra Soo-Jin Lee, Columbia University