With banks facing tougher new rules modernizing fair-lending standards, and recent negative press about fair-lending disparities, Predictive Analytics Group has seen a spike in interest from financial institutions wondering how they can better position themselves before regulators come to visit.
Banks have until 2026 to integrate the new rules implementing the 1977 Community Reinvestment Act (CRA) into their operations. Historically, they've been graded on how well they serve low-income communities where they have branches. Now, however, they'll also be graded on how well they serve low-income communities in areas where they provide large numbers of credit products through online and mobile lending.
Financial institutions that want to use marketing technology and artificial intelligence without introducing bias and discrimination should be looking to implement a compliance analytics solution that enables them to anticipate how examiners will assess their lending performance.
That may require them to invest in software that reduces the burden of data preparation, eliminates hours of manual clean-up, and provides insight into the training their staff will need. Leveraging fair-lending technology can help automate the collection, verification, and certification of data.
Our SOC 2- and PCI-certified GOBLIN enterprise data and analytics platform can help in this area by storing and cleansing your data, delivering 20 years of accelerators across file sharing, data transformation, and proprietary BI solutions, and performing advanced analytical exercises.
We can help you avoid the minefields that others have stumbled into. In mid-October, the Justice Department said its two-year-old Combating Redlining Initiative has secured more than $107 million in relief for communities of color.
The Justice Department also said in October that it has “more than two dozen active investigations into redlining, spanning neighborhoods across the country.”
Redlining is an illegal practice in which lenders avoid providing credit services to individuals living in or seeking to live in communities of color because of the race, color, or national origin of the residents in those communities.
Have you used proxy methods such as Bayesian Improved Surname Geocoding (BISG), which combines geography- and surname-based information into a single proxy probability for race and ethnicity, or built profiles of your marketing targets? If so, you've probably incorporated demographics, targeted advertising, and social media platforms whose built-in filters inevitably create a bias that either focuses on or excludes certain population groups. (A minimal sketch of the BISG calculation follows below.)
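For readers curious about the mechanics, here is a minimal sketch of how a BISG-style proxy combines the two information sources. Every probability below is made up for illustration; real implementations draw P(race | surname) from the Census surname list and P(race | tract) from tract-level demographics, and the conditional-independence assumption behind the formula is exactly that, an assumption.

```python
# Minimal sketch of a BISG-style proxy calculation. All probabilities are
# hypothetical; real implementations draw these from the Census surname
# list and tract-level demographic data.

RACES = ["white", "black", "hispanic", "asian", "other"]

# Hypothetical P(race | surname) for one applicant's surname.
p_race_given_surname = {
    "white": 0.60, "black": 0.25, "hispanic": 0.05, "asian": 0.05, "other": 0.05,
}

# Hypothetical P(race | census tract) for the applicant's address.
p_race_given_tract = {
    "white": 0.30, "black": 0.50, "hispanic": 0.10, "asian": 0.05, "other": 0.05,
}

# Hypothetical national base rates P(race), divided out so the shared
# prior is not double-counted when the two conditionals are combined.
p_race = {
    "white": 0.60, "black": 0.13, "hispanic": 0.18, "asian": 0.06, "other": 0.03,
}

def bisg_posterior(p_r_s, p_r_g, p_r):
    """Combine surname- and geography-based probabilities via Bayes' rule,
    assuming surname and geography are independent given race:
        P(r | s, g) is proportional to P(r | s) * P(r | g) / P(r)
    """
    unnormalized = {r: p_r_s[r] * p_r_g[r] / p_r[r] for r in RACES}
    total = sum(unnormalized.values())
    return {r: round(v / total, 3) for r, v in unnormalized.items()}

print(bisg_posterior(p_race_given_surname, p_race_given_tract, p_race))
```

Note how the geography term pulls the posterior away from the surname-only estimate: with these illustrative numbers, the tract's demographics dominate the final probability, which is precisely why proxy-based targeting can encode neighborhood-level bias.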
And if you're considering using AI, bias can be even harder to detect, particularly when fragmented back-end data systems are never consolidated or harmonized. Data from those systems can easily end up being used to build or train AI models, planting minefields of future compliance issues if the resulting system discriminates against marginalized communities through algorithmic bias.
Who among us doesn't mine archived lending data to make better approval decisions? But AI systems built on that data could easily, and unintentionally, create a variety of fair-lending compliance issues; a basic screening check is sketched below.
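As a starting point, one widely used screening heuristic (not PAG's full methodology) is the four-fifths rule for disparate impact: flag any group whose approval rate falls below 80 percent of the highest group's rate. The sketch below uses hypothetical decision records; in practice the group labels might come from self-reported HMDA data or a proxy like the BISG sketch above.

```python
# Minimal sketch of a four-fifths (80%) rule check for disparate impact.
# The records are hypothetical; a flag is a signal for further review,
# not proof of discrimination on its own.

from collections import defaultdict

# Hypothetical decision records: (group label, approved?).
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", True),
]

def approval_rates(records):
    """Compute approval rate per group from (group, approved) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][1] += 1
        if approved:
            counts[group][0] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

def four_fifths_flags(rates, threshold=0.8):
    """Flag groups whose approval rate is below `threshold` times the
    highest group's rate -- the classic four-fifths screening heuristic."""
    best = max(rates.values())
    return {g: (r / best) < threshold for g, r in rates.items()}

rates = approval_rates(decisions)
print(rates)                      # {'group_a': 0.75, 'group_b': 0.5}
print(four_fifths_flags(rates))   # group_b flagged: 0.5 / 0.75 < 0.8
```

A real program would run this check across products, time periods, and decision stages (marketing, underwriting, pricing), and pair it with regression-based analyses that control for legitimate credit factors.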
Regulators say the final rules make higher grades more achievable and the grading process more consistent. For example, they will publish a list of the types of activities for which banks can receive credit under the grading system and will provide feedback on whether other activities would count.
We thought at one point about rolling out a list of the questions we’re asked most often on this topic, but it really comes down to just one: How do I check and how do I prove that I don’t have discrimination within my lending practices?
The answer is simple: Give PAG (or me) a call.
Dave LaRoche is the managing partner for US Operations at Predictive Analytics Group.