As health care evolves rapidly thanks to remarkable progress in digital technologies and artificial intelligence, we must apply the lessons we’ve learned about health equity.
The health care industry has historically been fraught with inequities and barriers for minority populations. To ensure that AI integration in health care meets the needs of underserved communities, thought leaders are meeting regularly at Johns Hopkins to take a proactive stance and generate vibrant conversations on the topic.
This cross-disciplinary effort from the Center for Digital Health and Artificial Intelligence (CDHAI) at the Johns Hopkins Carey Business School, the Bloomberg School of Public Health, the Berman Institute of Bioethics, the Whiting School of Engineering, and the Johns Hopkins School of Medicine navigates crucial issues of trust, governance, and accessibility in AI’s role in health care. At the inaugural event, “The Promises and Perils of AI in Medicine,” CDHAI Co-Directors Ritu Agarwal and Gordon Gao stressed the need for equitable AI application, advocating for “equal outcomes across populations, consistent algorithmic performance, and equitable resource allocation” and emphasizing AI’s potential as a tool for real-world equity, not merely a catalyst for innovation.
The following session, “Who Regulates? The Role of Stakeholders in Ensuring Health Equity in AI Solutions,” explored the regulatory landscape. Agarwal framed the series’ focus as translating research “from the bench to bedside” and shaping policy and practice. “Every sector touches health care in diverse ways, catalyzing cross-industry thinking for health equity,” she noted.
Debra Mathews of the Berman Institute of Bioethics emphasized the importance of expanding the conversation around AI regulation, stating, “AI is shaped by the values and goals of its designers, a small subset of human population and experience, thus necessitating broader stakeholder input and trust.” She discussed the need for incentives to shape the behavior of AI developers, moving the goal from mere trust to trustworthiness.
Risa Wolf, Director of the Pediatric Diabetes Center at the Johns Hopkins School of Medicine, detailed the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, outlining its guiding principles, including privacy, equity, civil rights, and consumer protection.
Tamra Moore, VP of Corporate Counsel, Global Data, Privacy, and AI at Prudential Financial, highlighted the role of the Blueprint for an AI Bill of Rights in balancing AI advancement with harm mitigation. She addressed the need for equity principles in AI technologies and cautioned against perpetuating biases in clinical tools, such as the use of race as a multiplier. “Our approach must not exacerbate trust issues but actively involve these communities,” Moore stated.