How the financial industry is using AI to make fairer lending decisions

  • AI-based lending decisions are under intense scrutiny in the financial industry.
  • Lenders need to understand bias testing and human feedback loops to hold AI platforms accountable.
  • This article is part of the “Innovation at Work” series exploring trends and barriers to workplace transformation.

The financial industry has a long history of unfair lending decisions.

Redlining, a discriminatory practice that began in the 1930s, involves denying a loan to a customer based on their ZIP code. Lenders would physically draw a red line around low-income neighborhoods on a map, cutting those residents off from any possibility of borrowing money.

Redlining disproportionately affects Black Americans and immigrant communities, depriving them of opportunities such as home ownership, starting a small business, and obtaining a post-secondary education.

Although the Equal Credit Opportunity Act made it illegal in 1974 for lenders to refuse loans based on race, sex, or age, studies have found the law did little to reduce lending disparities.

The rise of machine learning and big data means lending decisions can now be scrutinized for human bias. But simply adopting the technology is not enough to root out discriminatory lending decisions.

An analysis of 2019 US Home Mortgage Disclosure Act data by The Markup, a nonprofit dedicated to data-driven journalism, found that lenders nationwide were almost twice as likely to reject Black applicants as similarly qualified white applicants, despite the adoption of machine learning and big-data technology. Latino, Asian, and Native American applicants were also denied mortgages at higher rates than white Americans with the same financial background.

Governments around the world have signaled that there will be a crackdown on “digital redlining”, where algorithms discriminate against marginalized groups.

Rohit Chopra, the head of the US Consumer Financial Protection Bureau, has said such biases should be penalized more harshly. “Lending algorithms can reinforce bias,” he told The Philadelphia Inquirer. “There is discrimination built into the computer code.”

Meanwhile, European Union lawmakers plan to introduce the Artificial Intelligence Act, which would impose tougher rules on using AI to screen everything from job and college applicants to loan applicants.

Highlighting bias

It’s easy to blame technology for discriminatory lending practices, Sian Townson, director of digital practice at Oliver Wyman, told Insider. But the technology itself doesn’t deserve the blame.

“Recent discussions have given the impression that AI has invented a bias in lending,” she said. “But all computer modeling has done is quantify the bias and make us more aware of it.”

Although identifiers such as race, gender, religion, and marital status are prohibited from factoring into credit-score calculations, algorithms can still disadvantage groups of people.

Some applicants may have shorter credit histories because of their religious beliefs, for example. In Islam, paying interest is considered a sin. This can count as a mark against Muslim applicants, even though other factors may indicate they would be good borrowers.

Other data points, like mobile payments, aren’t a traditional form of credit history, Townson said, but they can show a pattern of regular payments. “The purpose of AI was never to repeat history. It was to make useful predictions about the future,” she added.

Test and correct for bias

Software developers like FairPlay in the United States — which recently raised $10 million in Series A funding — have products that detect and help reduce algorithmic bias for people of color, women, and other historically disadvantaged groups.

FairPlay’s clients include financial institution Figure Technologies in San Francisco, online personal loan provider Happy Money and Octane Lending.

One of its application programming interface products, Second Look, re-evaluates rejected loan applicants for signs of discrimination. It pulls data from the US Census and the Consumer Financial Protection Bureau to help identify borrowers in protected classes, since financial institutions are prohibited from directly collecting information on race, age, and sex.
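The article doesn’t describe Second Look’s methodology, but a widely used proxy technique in fair-lending analysis is Bayesian Improved Surname Geocoding (BISG), which combines surname statistics with neighborhood demographics to estimate the likelihood that an applicant belongs to a protected class. The sketch below illustrates the idea with made-up probability tables; it is not FairPlay’s implementation.

```python
# Sketch of a BISG-style proxy (Bayesian Improved Surname Geocoding).
# All probability tables below are toy placeholders, not real Census figures.

# P(group | surname), e.g. derived from the Census surname list
P_GROUP_GIVEN_SURNAME = {
    "garcia": {"hispanic": 0.92, "white": 0.05, "black": 0.01, "other": 0.02},
    "smith":  {"hispanic": 0.02, "white": 0.73, "black": 0.22, "other": 0.03},
}

# P(group | ZIP code), e.g. derived from neighborhood demographics
P_GROUP_GIVEN_ZIP = {
    "19104": {"hispanic": 0.10, "white": 0.35, "black": 0.45, "other": 0.10},
}

# P(group) nationally, used so the prior isn't counted twice
P_GROUP = {"hispanic": 0.19, "white": 0.58, "black": 0.13, "other": 0.10}


def proxy_probabilities(surname: str, zip_code: str) -> dict[str, float]:
    """Combine surname- and geography-based estimates with a naive-Bayes step."""
    by_surname = P_GROUP_GIVEN_SURNAME[surname.lower()]
    by_zip = P_GROUP_GIVEN_ZIP[zip_code]
    unnormalized = {g: by_surname[g] * by_zip[g] / P_GROUP[g] for g in P_GROUP}
    total = sum(unnormalized.values())
    return {g: p / total for g, p in unnormalized.items()}


print(proxy_probabilities("Garcia", "19104"))
```

The resulting probabilities are estimates, not labels; fair-lending analysts typically use them only in aggregate, to measure outcome gaps across groups rather than to make decisions about individuals.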

Rajesh Iyer, global head of AI and machine learning for financial services at Capgemini USA, said lenders could minimize discrimination by subjecting their AI solutions to around 23 bias tests. This can be done in-house or by a third-party company.

One such bias test analyzes “disparate impact”: whether one group of consumers is affected more by the AI’s decisions than other groups are – and, more importantly, why.
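As a concrete illustration of that check, the “adverse impact ratio” compares approval rates across groups, and regulators often treat a ratio below 80% (the four-fifths rule) as a red flag. The numbers below are invented, and this is only a sketch of the ratio itself, not of any vendor’s full test suite.

```python
# Sketch of a disparate-impact check using the adverse impact ratio:
# the approval rate of a protected group divided by that of the reference group.
# Counts are made up for illustration.

def approval_rate(approved: int, applicants: int) -> float:
    return approved / applicants

def adverse_impact_ratio(protected_rate: float, reference_rate: float) -> float:
    return protected_rate / reference_rate

reference = approval_rate(approved=720, applicants=1000)  # e.g. reference group
protected = approval_rate(approved=540, applicants=1000)  # e.g. protected group

ratio = adverse_impact_ratio(protected, reference)
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:  # common regulatory rule of thumb
    print("Potential disparate impact -- investigate which features drive the gap.")
```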

Fannie Mae and Freddie Mac, which back the majority of US mortgages, recently found that people of color were more likely to list income from the “gig economy.” That disproportionately prevented them from getting mortgages, because gig income is considered unstable, even when an applicant has a strong history of paying rent.

In seeking to make its loan decisions fairer, Fannie Mae announced that it would start considering rental-payment history in credit evaluations. By feeding in new data, humans are essentially teaching the AI to eliminate bias.

Human feedback to keep AI learning

AI can only learn from the data it receives. That makes a feedback loop with human input important for AI lending platforms, because it is what enables institutions to keep making fairer lending decisions.

While it’s good practice for humans to step in when decisions are too close for machines to call, it’s also essential that people review a proportion of clear-cut decisions, Iyer told Insider.

“This ensures that the solutions adjust on their own as they receive input from human reviews, through incremental or reinforcement learning,” Iyer said.
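The article doesn’t spell out how such a loop is wired up, but a minimal sketch, assuming a scikit-learn-style model that supports incremental updates via `partial_fit`, might look like the following; all features and labels here are placeholders, not real lending data.

```python
# Minimal sketch of a human-in-the-loop feedback cycle using incremental learning.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)

# Initial training on historical decisions (placeholder data).
X_hist = np.random.rand(500, 5)          # applicant features
y_hist = np.random.randint(0, 2, 500)    # approve/deny labels
model.partial_fit(X_hist, y_hist, classes=np.array([0, 1]))

# Periodic review: humans re-examine a sample of the model's decisions,
# including some clear-cut ones, and supply corrected labels.
X_reviewed = np.random.rand(50, 5)
y_corrected = np.random.randint(0, 2, 50)

# Feed the corrected labels back so the model adjusts incrementally.
model.partial_fit(X_reviewed, y_corrected)
```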
