Automated valuations tested on 50 ZIP codes in Chicago yielded no signs of racial bias, even on undervalued properties where disparities most frequently surface, according to a new Veros Real Estate Solutions study, which presents a contrast to other recent findings related to appraisal models.
“We looked at whether the size of the errors is dependent on the racial composition of an area, and we found that it is not,” Research Economist Reena Agrawal said in an interview about the report she co-authored with the company’s Chief Economist Eric Fox.
Differences in median absolute error between Veros’ AVM estimates and purchase prices were statistically insignificant, at one basis point or less, in areas where the population was composed primarily of Black, Hispanic or Asian households. The study also considered housing stock characteristics such as median age, number of rooms and the share of homes sold in a region.
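The error metric at the center of both studies can be illustrated with a short sketch. The data and function name here are hypothetical, assembled for illustration only; this is not Veros’ actual methodology or data:

```python
import statistics

def median_absolute_error_pct(avm_values, sale_prices):
    """Median absolute AVM error, expressed as a percentage of sale price."""
    errors = [abs(avm - price) / price
              for avm, price in zip(avm_values, sale_prices)]
    return statistics.median(errors) * 100

# Hypothetical ZIP-code sample: AVM estimates vs. recorded purchase prices.
avm = [310_000, 198_000, 452_000, 275_000]
sold = [300_000, 205_000, 460_000, 270_000]

print(round(median_absolute_error_pct(avm, sold), 2))  # prints 2.59
```

Both studies then compare this figure across neighborhoods grouped by racial composition; Veros reports the gap between groups, not the raw error, as the sign of bias.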
The research contrasts with a study by the Urban Institute that found “a greater AVM error as a percentage of the property’s sale price in neighborhoods where Black residents make up the majority than in neighborhoods where white residents do.”
The error rate in the Urban Institute study was as much as 5 percentage points higher for areas where the population consisted primarily of Black households, even after controlling for poorer property conditions, something historically absent from many AVM inputs, according to the institute.
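The group-level comparison both studies perform can be sketched as follows. The error figures below are invented for illustration and are not drawn from either study:

```python
import statistics

# Hypothetical per-property absolute errors as a share of sale price,
# keyed by the neighborhood's majority racial composition.
errors_by_group = {
    "majority_black": [0.072, 0.081, 0.065, 0.090],
    "majority_white": [0.031, 0.028, 0.044, 0.036],
}

# Median error rate per group, in percentage points.
medians = {g: statistics.median(e) * 100 for g, e in errors_by_group.items()}

# The disparity the Urban Institute flags is this between-group gap.
gap = medians["majority_black"] - medians["majority_white"]
print(f"gap: {gap:.1f} percentage points")  # prints "gap: 4.3 percentage points"
```

A finding of no bias, as in the Veros study, corresponds to a gap that is statistically indistinguishable from zero.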
The conflicting reports highlight a challenge mortgage companies could face given proposed rules from the Consumer Financial Protection Bureau, which would require them to ensure AVM compliance with nondiscrimination laws. Whether the technology is more of a help or a hindrance to equitable lending efforts remains an unresolved question.
Veros, which played a key role in the development of an appraisal database used by large government-sponsored mortgage buyers Fannie Mae and Freddie Mac, sees its most recent study as compelling evidence that AVM technology could encourage greater fairness.
“If you see that appraisals are coming in much higher or lower than expected, you can check it and see if something significant in terms of an overvaluation or undervaluation is going on,” said Agrawal.
However, the Urban Institute authors — while acknowledging other research indicating AVMs “tend to produce smaller biases than appraisals” — come away from their study more cautious about endorsing automated valuation models’ use for this purpose.
“AVMs may not be a surefire solution to fully closing racial inequities in the home appraisal process,” Michael Neal, Linna Zhu, Judah Axelrod and Caitlin Young wrote in the report. Neal and Zhu are research associates, Axelrod is a data scientist, and Young is a policy analyst.
The institute’s analysis, which examines Atlanta and Memphis using data from Cape Analytics, the American Community Survey and an unnamed “major property records provider,” suggests the AVM and the areas involved are among the factors that could account for differences in the two studies.
“I certainly can’t comment on their study, but I think the reason we could see different results is because we are using different models to determine our outcomes,” said Agrawal.
Veros also acknowledged mixed outcomes in the collective research done to date on appraisal bias and AVMs, and it noted the geographic limits of its most recent study, saying it chose Chicago for its broad ethnic diversity.
“We plan to look at other metros in the near future to verify that our study does hold,” Agrawal said.
For its part, the Urban Institute plans to look further into why the machine-learning-driven models it examined are improving in accuracy but still produce disparities.
“Continued exploration of new techniques in data and modeling will be necessary to further identify the underlying causes,” the authors of the institute’s study said.