
How mortgage fintechs use AI to make fair lending decisions

With fair lending likely to be a priority for the Biden administration, lenders can turn to artificial intelligence to ensure both regulatory compliance and fair decision-making.

This technology has been touted as a way to remove human prejudice from the process. It can reveal some of the everyday payment patterns of consumers both with and without bank accounts, information that is not available in credit repository data.

"The idea is that not everything can be properly captured by a single document, but possibly by a different set of data fields and documents that could ultimately lead you to a very skilled borrower," said Sipho Simela, Head of Mortgage Strategy at Mortgage Fintech Ocrolus. "For example, not everything is immediately transparent in a credit report."


Multiple sources of data beyond a traditional credit score could help increase approvals for Black, Indigenous and other people of color, groups more likely than others to be denied a mortgage. In 2019, conventional home loan denial rates were highest for Black applicants at 16%, compared with 10.8% for Hispanic, 8.6% for Asian and 6.1% for white applicants, according to a report released in June by the Consumer Financial Protection Bureau.

However, AI is not a panacea, especially when the underlying data carries some of the unconscious biases of past decisions. Left unchecked, AI can perpetuate those disparities instead.

So in creating models that remove bias, companies need to "make sure that when comparing Borrower A to Borrower B, it's actually an apples-to-apples comparison. Are they actually playing on the same playing field? If not, there needs to be some modularization built into the AI," said Simela.

Other fintechs like Capacity, which provides AI-driven help desk platforms for employees and customers, are trying to stay one step ahead of the racial bias problem by "teaching" their AI products how to make fair decisions.

Capacity primarily uses machine learning for natural language understanding. Its chatbot interfaces can handle differences in phrasing, regional dialects and other kinds of variation.
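
In rough terms, that kind of phrasing tolerance means mapping differently worded questions to the same underlying intent. The sketch below is a deliberately simplified illustration of the idea, not Capacity's actual system; the canned intents, answers and token-overlap matching are all assumptions.

```python
import re

# Toy intent matcher: maps differently phrased questions to one canonical
# answer. Purely illustrative; a production NLU system is far more sophisticated.
CANONICAL = {
    "check loan status": "Your loan status is available under 'My Loans'.",
    "reset password": "Use the 'Forgot password' link on the sign-in page.",
}

def match(question: str) -> str:
    """Pick the canonical intent with the largest word overlap."""
    tokens = set(re.findall(r"[a-z']+", question.lower()))
    best = max(CANONICAL, key=lambda intent: len(tokens & set(intent.split())))
    return CANONICAL[best]

# Different phrasings of the same question resolve to the same answer.
print(match("Where do I check my loan status?"))
print(match("what's the status of my loan"))
```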

Other use cases are more operational. For example, the technology can flag potential anomalies or risks in an automatically processed form or in a user profile. In these cases, it may not make the final decision on whether to approve or deny a loan, but it can ensure there is a human in the loop to review cases that fall below a certain confidence threshold. The technology's actions can then be analyzed to help users understand which specific factors influenced a particular model decision.
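
That human-in-the-loop pattern can be sketched in a few lines. The threshold value, field names and routing labels below are illustrative assumptions, not any vendor's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical confidence-threshold router, sketching the human-in-the-loop
# pattern described above. The 0.90 cutoff is an assumption; real systems
# tune this value empirically.
CONFIDENCE_THRESHOLD = 0.90

@dataclass
class Prediction:
    application_id: str
    flagged_anomalies: list[str]
    confidence: float  # model's confidence in its own assessment, 0.0-1.0

def route(prediction: Prediction) -> str:
    """Send low-confidence or anomalous cases to a human reviewer."""
    if prediction.flagged_anomalies or prediction.confidence < CONFIDENCE_THRESHOLD:
        return "human_review"   # a person makes the final call
    return "auto_process"       # safe to continue automated processing

# An anomalous form lands in the human queue even at high confidence.
print(route(Prediction("app-001", ["income mismatch"], 0.97)))  # human_review
print(route(Prediction("app-002", [], 0.95)))                   # auto_process
```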

"The layman's definition of AI is that AI is software that learns," said CEO David Karandish. "And it's software that learns by essentially taking patterns into data and testing those patterns over and over again."


Mortgage lenders need to ensure that the initial training data fed into the AI software is representative of the customers they are trying to serve, with characteristics that lead to biased results weeded out at the training stage, he said.

"If you've given all training data low credit scores or data points from one region of the country or data points from a gender, etc., those variables will start to be recorded," Karandish said. "The system may match the pattern you gave him [for his decisions], but the dataset doesn't represent the entirety of your customer base."

The fintech company Finastra is developing its own technology to combat the bias problem in origination underwriting.

The FinEqual software looks for potential bias in credit decisions. It is a component of Finastra's analytics strategy "that identifies potential anomalies or areas of inequality and uses the data to suggest to our customers where they might be and potentially suggest ways to address them," said Chris Zingo, executive vice president of Americas.

FinEqual is still in an exploratory phase to ensure the application works as intended, Zingo added. Once that is established, it could be brought to market quickly. The company was unable to disclose which firms were working with FinEqual at this time.

When things go sideways despite proper fairness training, errors in AI can often be spotted faster than human errors, said David Smitkof, vice president of analytics at Ocrolus. He's optimistic about AI in part because the decision path is traceable and verifiable in a consistent, explainable way. Ocrolus uses optical character recognition technology to pull information from forms, which originators and servicers can then use in decision-making. For the latter, it helps in determining loss mitigation options.

"You can simulate using the model for different data sets and measure different impacts," said Smitkof.

"And this is where you get into that notion of ethical responsibility when building an AI system that simulates the various outcomes and actually measures the effects," he continued. "Because given the correlation between different elements of data in the world, if you can't put data into the model, you need to understand what comes out on the other end and what effect it will have."

The traceable nature of these technologies will be important to lenders, as regulators in a Biden administration could be more aggressive in enforcing fair housing rules.

"Creating a level of transparency about the why from which certain decisions are made is good policy, regardless of who is in the White House," Karandish said. "Every time you make a human decision, you have the possibility of a bias. The only way to correct that bias is to examine it, replay it and understand what we could or might do differently next time should."

Digitizing the workflow is the first step, so that users can show, from both a regulatory and an ethical perspective, why they made the decision they did in each scenario, Karandish continued.

There will likely be more government scrutiny of tech companies like Facebook, but Simela said that doesn't have to be a bad thing.

"I think technology has its underbelly, but [but] good technology helps people. If we are able to work with the in-depth administration – we as technologists as a whole – I think the end product will be much better. With the renewing strength of the CFPB … they will make better use of the technology, "said Simela, who added that the end result will be a better product for consumers.
