AI lending bias
In Re: Earnest Operations LLC
Date: July 10, 2025
Issue: Earnest Operations LLC’s settlement with the Commonwealth of Massachusetts over allegations that it used artificial intelligence models that disproportionately harmed Black and Hispanic applicants.
Case Summary: Student loan company Earnest Operations LLC agreed to pay $2.5 million to the Commonwealth of Massachusetts to resolve allegations it used artificial intelligence models that disproportionately harmed Black and Hispanic applicants.
The Massachusetts Attorney General (MAG) alleged that Earnest violated the Consumer Protection Act by applying “knockout rules” in its lending process. Beginning in 2014, Earnest used AI-based underwriting models to issue personal and student loans through a three-stage algorithmic process in which knockout rules rejected applicants early. Although Earnest’s policies required senior oversight for exceptions, MAG alleged that underwriters frequently bypassed the models without clear standards or documentation and often favored applicants based on assumptions about future income tied to their careers or education.
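The Assurance of Discontinuance does not disclose Earnest’s actual rules or code, but a “knockout rule” in this kind of pipeline is essentially a hard pre-screening filter that rejects an application before any model scoring occurs. A minimal sketch in Python, with every field name and threshold invented purely for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Application:
    # Hypothetical inputs; the variables Earnest actually used are not public.
    credit_score: int
    monthly_free_cash_flow: float

def apply_knockout_rules(app: Application) -> Optional[str]:
    """Return a rejection reason if any hard rule fires, otherwise None.

    Illustrative only: these rules and thresholds are invented, not Earnest's.
    """
    if app.credit_score < 650:
        return "credit_score_below_minimum"
    if app.monthly_free_cash_flow < 0:
        return "negative_free_cash_flow"
    return None

def underwrite(app: Application) -> str:
    # Stage 1: knockout rules reject applicants early, before any model runs.
    reason = apply_knockout_rules(app)
    if reason is not None:
        return f"denied ({reason})"
    # Stages 2 and 3 (model scoring, pricing) would run only for applications
    # that survive the pre-screen.
    return "passed pre-screen; continue to model scoring"
```

Because such rules are absolute, a single rule keyed to a variable correlated with a protected characteristic can screen out a group entirely, regardless of how the downstream models behave.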
MAG also alleged that Earnest violated the Consumer Protection Act by using a Cohort Default Rate (CDR) variable in its student loan refinancing model. More specifically, Earnest assigned each applicant a weighted subscore based on the CDR, which reflects the rate at which past borrowers from the applicant’s college defaulted on their loans. This practice allegedly caused a disparate impact, with Black and Hispanic applicants more likely than White applicants to receive worse loan terms or to be denied. MAG claimed that using the CDR variable was discriminatory, violated the Equal Credit Opportunity Act (ECOA), and constituted an unfair and deceptive business practice.
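The filing describes the CDR input only at a high level. The sketch below shows, under assumed scaling and weighting that are not Earnest’s, how a school-level CDR could be turned into a weighted subscore and blended into an applicant’s overall score; the function names, mapping, and weight are all hypothetical:

```python
def cdr_subscore(cohort_default_rate: float) -> float:
    """Map a school's Cohort Default Rate (CDR) to a 0-1 subscore.

    Illustrative only: this scaling is invented, not Earnest's. A higher CDR
    (more past borrowers from the school defaulted) produces a lower subscore.
    """
    return max(0.0, 1.0 - cohort_default_rate / 0.15)  # a CDR of 15%+ scores 0

def blended_score(applicant_subscore: float,
                  cohort_default_rate: float,
                  cdr_weight: float = 0.2) -> float:
    """Blend an applicant-level subscore with the school-level CDR subscore."""
    return ((1.0 - cdr_weight) * applicant_subscore
            + cdr_weight * cdr_subscore(cohort_default_rate))

# Two applicants with identical individual finances but different schools:
print(blended_score(0.80, cohort_default_rate=0.02))  # low-CDR school -> higher score
print(blended_score(0.80, cohort_default_rate=0.12))  # high-CDR school -> lower score
```

Because the CDR is an attribute of the school rather than of the applicant, two otherwise identical applicants can receive different scores based solely on where they studied; that school-level channel is the mechanism behind MAG’s disparate-impact theory.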
Finally, MAG alleged that Earnest violated the Consumer Protection Act by using a knockout rule to automatically deny applicants who did not hold at least a green card (lawful permanent resident status), creating an ECOA disparate-impact risk. In addition, MAG alleged that Earnest issued inaccurate adverse-action notices that failed to give specific reasons for credit denials, and that it neglected to implement or follow fair lending policies to mitigate discrimination risks in its AI underwriting models, resulting in unfair and deceptive practices.
As part of the settlement, Earnest agreed to pay $2.5 million to the Commonwealth of Massachusetts. Earnest will also develop and maintain a written corporate governance system that includes fair lending testing, internal controls, and risk assessments for its use of AI models.
Bottom Line: Earnest denies MAG’s allegations and further denies that it has violated Massachusetts or federal law.
Document: Assurance of Discontinuance