The National Center for Advancing Translational Sciences (NCATS) within the National Institutes of Health (NIH) is launching the Minimizing Bias and Maximizing Long-term Accuracy, Utility, and Generalizability of Predictive Algorithms in Healthcare Challenge (Challenge). Although artificial intelligence (AI) and machine learning (ML) algorithms offer promise for clinical decision support (CDS), that potential has yet to be fully realized in the clinic. Even well-designed AI/ML algorithms and models can become inaccurate or unreliable over time: changes in data distribution, real-world interactions, user behavior, and shifts in data capture and management practices can all affect model performance. These subtle shifts can gradually degrade an algorithm's predictive capability, effectively negating the benefits of such systems in the clinic.
How can these shifts be detected on a continual basis to prevent erosion of prediction quality? Monitoring an algorithm's behavior and flagging any material drift in performance may enable timely adjustments that keep the model's predictions accurate, fair, and unbiased over time, preventing degradation of its predictive capability when the model is applied in the real world.
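As one illustration of what such monitoring can look like in practice, the sketch below compares a model's recent prediction scores against a reference window using a two-sample Kolmogorov-Smirnov test. The window contents, alert threshold, and score distributions are illustrative assumptions, not Challenge requirements.

```python
# Minimal drift-monitoring sketch (illustrative, not a Challenge requirement):
# compare recent prediction scores against a reference window with a
# two-sample Kolmogorov-Smirnov test and flag material distribution shifts.
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(reference_scores, current_scores, alpha=0.01):
    """Return the KS statistic, p-value, and a drift flag."""
    statistic, p_value = ks_2samp(reference_scores, current_scores)
    return {"statistic": statistic, "p_value": p_value, "drift": p_value < alpha}

# Hypothetical example: scores captured at deployment (reference) versus
# scores from a later monitoring window whose distribution has shifted.
rng = np.random.default_rng(seed=0)
reference = rng.beta(2.0, 5.0, size=5_000)   # stable early behavior
current = rng.beta(2.0, 4.0, size=5_000)     # subtle, gradual shift
print(drift_alert(reference, current))        # flags drift at alpha = 0.01
```

A production tool would run checks like this on a schedule, over features as well as scores, and route alerts to the model's maintainers for review.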
As AI/ML algorithms are increasingly used in healthcare systems, accuracy, generalizability, and the avoidance of bias and drift rightly come to the forefront. Bias primarily surfaces in two forms: predictive bias, in which an algorithm produces estimates that differ significantly from the underlying truth, and social bias, in which systemic inequities in care delivery lead to suboptimal health outcomes for certain populations.
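To make these two notions concrete, the sketch below computes one common proxy for each: a per-subgroup calibration gap (predictive bias) and a per-subgroup false-negative rate (one way social bias can surface as an unequal error burden). The DataFrame layout, column names, and decision threshold are hypothetical.

```python
# Illustrative sketch only: one proxy metric for each form of bias.
# The DataFrame layout, column names, and 0.5 threshold are assumptions.
import pandas as pd

def subgroup_bias_report(df, group_col, label_col="y_true",
                         score_col="y_score", threshold=0.5):
    """Per-subgroup calibration gap and false-negative rate."""
    rows = []
    for group, g in df.groupby(group_col):
        predicted_pos = g[score_col] >= threshold
        actual_pos = g[label_col] == 1
        rows.append({
            group_col: group,
            # Predictive bias proxy: mean score vs. observed event rate.
            "calibration_gap": g[score_col].mean() - g[label_col].mean(),
            # Social bias proxy: missed true positives within the subgroup.
            "false_negative_rate": (~predicted_pos & actual_pos).sum()
                                   / max(int(actual_pos.sum()), 1),
        })
    return pd.DataFrame(rows)

# Hypothetical usage with a toy frame:
toy = pd.DataFrame({
    "site": ["A", "A", "B", "B"],
    "y_true": [1, 0, 1, 0],
    "y_score": [0.9, 0.2, 0.3, 0.4],
})
print(subgroup_bias_report(toy, "site"))
```

Large between-group differences in either column would warrant the kind of source-of-bias investigation this Challenge calls for.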
To address these issues and improve clinician/patient trust in AI/ML-based CDS tools, this Challenge invites groups to develop bias-detection and bias-correction tools that foster “good algorithmic practice” and mitigate the risk of unwitting bias in CDS algorithms.
The goal of this Challenge is to develop predictive and social bias detection and correction tools that identify and minimize the inadvertent amplification or perpetuation of systemic biases in AI/ML algorithms used as CDS tools.
To win this Challenge, participants must submit (1) the tool and its source code to GitHub, specifying that the tool will be distributed under the BSD 3-Clause license; (2) a README and standard operating procedures (SOPs), including a sustainability plan, a generalizability plan, and implementation requirements for ensuring the tool can be used across locations and environments; (3) documentation describing the background, architectural design, and functionality of the tool via a use case on an ML-based clinical care system; and (4) a URL to a video demonstrating the tool executing on CDS models and CDS data.
NCATS will award prizes to the participants who are most successful at generating a methodology that addresses all four of the following: (1) detect predictive and social biases, (2) identify the source(s) of these biases, (3) consistently evaluate and assess CDS algorithm(s) in use to promote model accuracy and generalizability and to mitigate bias, unfairness, and drift, and (4) provide a proof-of-concept tool that supports broad use of AI algorithms. In addition to awarding prizes, NCATS will host a “demo day” where winners will be invited to showcase their tools.
Note: The Government will not provide any data for use in this Challenge. Participants are expected to use their own datasets and CDS algorithms with the tools they develop.
Statutory Authority to Conduct the Challenge
NCATS is conducting this Challenge under the America Creating Opportunities to Meaningfully Promote Excellence in Technology, Education, and Science (COMPETES) Reauthorization Act of 2010, as amended [15 U.S.C. § 3719].
NCATS was established to coordinate and develop resources that leverage basic research in support of translational science and to develop partnerships and work cooperatively to foster synergy in ways that do not create duplication, redundancy, or competition with industry activities. This Challenge will further the mission of NCATS by spurring innovation in AI bias mitigation, specifically the identification and minimization of the inadvertent amplification or perpetuation of systemic biases in AI/ML algorithms used as CDS tools in the healthcare setting. Through this Challenge, innovators will create tools that foster and promote predictive and social bias detection and correction, increasing the accuracy of CDS algorithms in healthcare settings.
NCATS reserves the right, in its sole discretion, to (a) cancel, suspend, or modify the Challenge, or any part of it, for any reason, and/or (b) not award any prizes if no submissions are deemed worthy.