At Clear Capital, our primary goal is to improve people's lives and enable accurate financial decisions for real estate through the use of advanced analytics, technology and valuation products. We believe that property valuation is key to the mortgage finance industry and to the health of neighborhoods and communities. Putting people first means that anything preventing homeowners and families from moving forward with confidence is worth our time to understand and act on. This is especially true where there is potential for racial bias.
When it comes to industry bias, we each have an individual responsibility to confront our conscious and unconscious tendencies and determine whether we perpetuate inequality. Everyone has the power to conduct their own research and self-assessment.
Many communities are still feeling the effects of years of unequal policies and worry that those effects will persist. We must take responsibility for identifying warning signs of bias and our own blind spots.
Amid the ongoing discussions about how to make better use of analytics and digitize the home inspection process, there are some clear steps we can take to reduce bias in the mortgage industry, starting with increasing diversity in the valuation profession.
As part of an effort to increase diversity among appraisers, the Appraisal Institute (AI) trade organization has partnered with Fannie Mae and the National Urban League to strengthen the Appraiser Diversity Initiative, a pipeline program.
This program connects minority communities with the appraisal profession and leverages grants from the AI Education and Relief Foundation. Fannie Mae and the other sponsors announced the "first class of 2021 prospective real estate appraisers to receive grants through the Appraiser Diversity Initiative." AI President Rodman Schley noted that "there have been reports of bias in real estate values, and while it is painful to read about it in the press, it has enabled the valuation profession to be a leader in this area and identify any issues that may exist."
A 2015 study of Harris County Tax Appraisal District data highlighted the risk of bias in home valuations stemming from the prevailing makeup of the industry itself – and its inherent lack of diversity.
In 2020, the dialogue within the appraisal industry intensified. The focus on social unrest and racial justice helped raise awareness of the issue, and social media amplified experiences of discrimination, touching the hearts and minds of millions. For the first time, we have tools that can objectively deconstruct bias – using structured data, machine learning, and cloud-based computing to test valuation models for potential bias.
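The kind of test described above can be sketched with structured data alone: group a model's signed valuation errors by neighborhood segment and flag large gaps between segments. This is a minimal, hypothetical illustration – the segment labels, sample figures, and the 5-point flagging threshold are assumptions for demonstration, not an industry standard or Clear Capital's method:

```python
from statistics import median

def pct_error(predicted, actual):
    """Signed percentage error of a model estimate vs. the eventual sale price."""
    return (predicted - actual) / actual * 100.0

def bias_gap(records, threshold=5.0):
    """Group signed errors by neighborhood segment and flag large gaps.

    records: iterable of (segment, predicted_value, sale_price) tuples.
    Returns (median error per segment, True if the spread between the
    best- and worst-treated segments exceeds the threshold).
    """
    by_segment = {}
    for segment, predicted, actual in records:
        by_segment.setdefault(segment, []).append(pct_error(predicted, actual))
    medians = {seg: median(errs) for seg, errs in by_segment.items()}
    gap = max(medians.values()) - min(medians.values())
    return medians, gap > threshold

# Hypothetical sample: tract_b homes are systematically undervalued.
sample = [
    ("tract_a", 310_000, 300_000),
    ("tract_a", 295_000, 300_000),
    ("tract_b", 270_000, 300_000),
    ("tract_b", 285_000, 300_000),
]
medians, flagged = bias_gap(sample)
```

A consistent negative median error in one segment, as in the sample above, is exactly the warning sign such a test is meant to surface for human review.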
Congress is considering legislation, the Real Estate Valuation Fairness and Improvement Act of 2021 (H.R. 2553), that would establish a task force to examine valuation guidelines, identify the root causes of valuation disparities, and examine any barriers to entry that limit diversity in the valuation profession. We support the legislation's intention to open a discussion about racial disparities in appraisal.
Valuation bias should be examined in three forms: systemic, implicit, and explicit. Systemic bias may be built into the market through historical guidelines and practices, leaving it deeply embedded in the data we all rely on for both automated models and human appraisals. Implicit bias could arise if appraisers' application of best practices further isolates neighborhoods and homeowners. And explicit bias occurs when the valuation process is influenced by the skin color of the property owner.
The way forward
We must all work together to address these risks. We need to increase diversity among appraisers and industry participants – something that the National Association of Minority Mortgage Bankers of America (NAMMBA) has set out to do.
First, valuation firms should employ more people who reflect the homeowners and communities they serve. Second, real estate analytics providers need to consider the risk that systemic biases are entrenched in market data. Data science teams can increase neutrality within their automated valuation models (AVMs) and reduce the risk of "algoracism." AVMs should be assessed to determine whether they are discerning in how they handle metrics such as comparable selection, data quality, and data availability. Checks and balances are important, but they should be part of a holistic approach to valuation accuracy.
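One way to make such checks and balances concrete is to score how well an individual AVM estimate is supported by its inputs – for example, the count and recency of the comparable sales the model selected. The sketch below is a simplified illustration; the `min_comps` and `max_age_days` thresholds and the sample comparables are assumptions for demonstration, not industry standards:

```python
from datetime import date

def avm_support(comps, as_of, min_comps=3, max_age_days=180):
    """Score how well an AVM estimate is supported by its inputs.

    comps: list of (sale_date, sale_price) comparables the model selected.
    Returns simple checks – comparable count, how many are recent, and a
    pass/fail flag an analyst can review before trusting the estimate.
    """
    recent = [c for c in comps if (as_of - c[0]).days <= max_age_days]
    return {
        "comp_count": len(comps),
        "recent_count": len(recent),
        "supported": len(recent) >= min_comps,
    }

# Hypothetical case: three comparables, but one is nearly a year old,
# so the estimate fails the data-availability check.
checks = avm_support(
    [(date(2021, 3, 1), 250_000),
     (date(2021, 4, 10), 262_000),
     (date(2020, 6, 5), 240_000)],
    as_of=date(2021, 5, 1),
)
```

Estimates that fail such a check in data-sparse neighborhoods are precisely the ones that warrant human review rather than automated acceptance.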
Finally, the inspection process can be digitized to increase the standardization, accuracy, and objectivity of data collection. Not only will this support alternative valuation methods, including hybrid appraisals, but it will also reduce the likelihood of unconscious bias by increasing data fidelity and reducing assumptions.