On November 10, 2021, Avi Gesser and Anna Gressel from Debevoise’s Data Strategy and Security Group shared their insights as part of a World Bank panel on FinTech and Racial Equity, moderated by Kiril Nejkov of the International Finance Corporation. Avi and Anna, along with co-panelists Kareem Saleh of Fairplay AI and Tatiana Campello of Demarest, highlighted how artificial intelligence is transforming the financial sector on a global basis. With AI likely to become ubiquitous in FinTech applications in the future, the panel discussed the value in identifying, understanding, and mitigating the challenges AI poses to racial equity.
To view a recording of the panel discussion, please click here. The panel offered several key takeaways of note:
- AI can drive racial inequity in subtle ways:
  - Deficiencies in the data collection process, unrepresentative data samples, and errors in the data can create bias risk. Certain data inputs can also act as proxies for protected classes, which can lead to inequity in FinTech applications.
  - AI's focus on achieving its assigned task can also create risk. For example, if an AI system is programmed to identify safe lending opportunities, it may consider only applicants with rich credit histories, perpetuating inequality for communities to which banks have historically denied credit.
- AI regulatory trends are emerging globally:
  - Regulatory efforts in the U.S. financial sector and the European Union's Draft Artificial Intelligence Act are driving scrutiny of AI globally.
  - On the U.S. side, regulators acknowledge that there is room to develop new, AI-specific regulations, but they already possess tools to enforce antidiscrimination laws. For example, the Department of Justice, along with the Office of the Comptroller of the Currency and the Consumer Financial Protection Bureau, announced an initiative to combat digital redlining that will intersect heavily with AI.
- There are some key steps companies can take to reduce risk:
  - Create corporate governance structures to ensure a coherent, responsible AI strategy.
  - Diverse teams build better AI systems by spotting different kinds of bias and by ensuring broader thinking. Where sourcing the necessary talent is difficult, companies can partner with third-party vendors.
  - AI vendor management will become increasingly important. Companies cannot outsource their AI risks, and vendor diligence should be commensurate with the potential financial and reputational risks to the company.
- Reasons to be optimistic:
  - Companies are trying to get this right, and stakeholders are incentivized to learn from the mistakes made in cybersecurity over the last decade.
  - If AI is implemented properly, companies will not only avoid introducing bias but can also use AI to make systems operate more fairly than traditional decision making.
  - Companies are increasingly thinking about AI not just as a tool but as an extension of their corporate purpose and values, a perspective that may encourage them to align AI initiatives more closely with their mission statements, to the benefit of their customers.
To see our previous AI-related webcasts, please click here.
To subscribe to our Data Blog, please click here.