Testing Milliman Advanced Risk Adjuster models for racial bias: Medicare model results
Recently there has been growing concern that algorithms used for care management may exhibit racial bias. Where it exists, such bias arises from the way certain algorithms use past experience to predict future costs, and those cost predictions may in turn be used to identify individuals for targeted care management interventions. We take this concern seriously. Mindful of the role the Milliman Advanced Risk Adjuster (MARA) plays in assessing health risks, we are investigating our models for any indication of this bias. This report discusses our analysis of data on Medicare fee-for-service (FFS) beneficiaries from the 5% sample of the Enrollment DataBase (EDB) and the National Claims History Standard Analytical Files (SAFs) released by the Centers for Medicare and Medicaid Services (CMS). We tested two prospective diagnosis-based MARA models intended for use with Medicare populations and found no indication of racial bias.
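The full report, not this summary, documents the test procedure used for the MARA models. As context only, one common way to screen a cost-prediction model for this kind of bias is to compare average predicted costs with average actual costs within each racial group; a predictive ratio near 1.0 for every group suggests the model is not systematically over- or under-predicting for any of them. The sketch below illustrates that generic check; the data frame and column names (race, predicted_cost, actual_cost) are hypothetical and are not drawn from the MARA analysis.

```python
import pandas as pd

def predictive_ratios_by_group(df: pd.DataFrame,
                               group_col: str = "race",
                               predicted_col: str = "predicted_cost",
                               actual_col: str = "actual_cost") -> pd.DataFrame:
    """Ratio of mean predicted cost to mean actual cost within each group.

    A ratio close to 1.0 for every group indicates the model is not
    systematically over- or under-predicting costs for that group.
    """
    summary = (
        df.groupby(group_col)[[predicted_col, actual_col]]
          .mean()
          .rename(columns={predicted_col: "mean_predicted",
                           actual_col: "mean_actual"})
    )
    summary["predictive_ratio"] = summary["mean_predicted"] / summary["mean_actual"]
    return summary

# Hypothetical usage with a beneficiary-level data frame:
# beneficiaries = pd.DataFrame({
#     "race": [...], "predicted_cost": [...], "actual_cost": [...]
# })
# print(predictive_ratios_by_group(beneficiaries))
```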