
Deploying a multidisciplinary strategy with embedded responsible AI

The finance sector is among the keenest adopters of machine learning (ML) and artificial intelligence (AI), whose predictive powers have been demonstrated everywhere from back-office process automation to customer-facing applications. AI models excel in domains requiring pattern recognition based on well-labeled data, such as fraud detection models trained on past transaction behavior. ML can enhance the customer experience as well as support staff, for example through conversational AI chatbots that assist consumers and decision-support tools for employees. Financial services companies have also used ML for scenario modeling and to help traders respond quickly to fast-moving, turbulent markets. The industry is spearheading these and dozens of other uses of AI.
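To make the pattern-recognition point concrete, here is a minimal sketch of a fraud detector trained on labeled historical behavior. The data is synthetic and the features (transaction amount, hour of day, transaction velocity) are hypothetical stand-ins; production systems rely on far richer signals and far more rigorous validation.

```python
# Minimal sketch of fraud detection as supervised pattern recognition on
# labeled history. Synthetic data and hypothetical features only; real
# systems use far richer signals and far more rigorous validation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 10_000
amount = rng.exponential(80, n)      # transaction amount in dollars
hour = rng.integers(0, 24, n)        # hour of day
velocity = rng.poisson(2, n)         # transactions in the past hour

# Synthetic labels: fraud is more likely for large, late-night, rapid-fire activity
logit = 0.01 * amount + 1.2 * (hour < 6) + 0.5 * velocity - 6
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([amount, hour, velocity])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```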

In a highly regulated, systemically important sector like finance, companies must also proceed carefully with these powerful capabilities, both to ensure compliance with existing and emerging regulations and to keep stakeholder trust by mitigating harm, protecting data, and leveraging AI to help customers, clients, and communities. “Machine learning can improve everything we do here, so we want to do it responsibly,” says Drew Cukor, firmwide head of AI/ML transformation and engagement at JPMorgan Chase. “We view responsible AI (RAI) as a critical component of our AI strategy.”

Understanding the risks and rewards

The risk landscape of AI is broad and evolving. For instance, ML models, which are often developed using vast, complex, and continuously updated datasets, require a high level of digitization and connectivity across software and engineering pipelines. Yet breaking down IT silos, both within the enterprise and potentially with external partners, increases the attack surface for cybercriminals and hackers. Cybersecurity and resilience are therefore essential components of the digital transformation agenda on which AI depends.

A second established risk is bias. Because historical social inequities are baked into raw data, they can be codified, and even magnified, in automated decisions, leading, for instance, to unfair credit, loan, and insurance decisions. A well-documented example is zip-code bias, in which location acts as a proxy for protected characteristics such as race. Lenders are already subject to rules that aim to minimize adverse impacts of bias and to promote transparency, but when decisions are produced by black-box algorithms, transgressions can occur without intent or knowledge. Laws like the EU’s General Data Protection Regulation and the U.S. Equal Credit Opportunity Act require that the subjects of certain decisions be given explanations for them, which means financial firms must be able to understand how the relevant AI models reach their results. AI must be understood by internal audiences too: an AI-driven business-planning recommendation should be intelligible to a chief financial officer, and model operations should be reviewable by an internal auditor. Yet the field of explainable AI is nascent, and the global computer science and regulatory community has not determined precisely which techniques are appropriate or reliable for different types of AI models and use cases.
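As one illustration of how such probing can work, the hedged sketch below trains a toy credit model on synthetic data and uses permutation importance to surface which inputs drive its decisions. The feature names, including the “zip_risk_score” proxy variable, are hypothetical and not drawn from any real lender.

```python
# Hedged sketch: probing a toy credit model for reliance on a geographic
# proxy. Feature names (including "zip_risk_score") are hypothetical;
# this is not any firm's actual model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 5_000
credit_score = rng.normal(650, 80, n)
debt_to_income = rng.normal(0.35, 0.10, n)
zip_risk_score = rng.random(n)  # stand-in for a location-based proxy variable

# Synthetic approvals correlated with all three inputs, proxy included
signal = 0.01 * credit_score - 8 * debt_to_income - 3 * zip_risk_score - 2.2
y = signal + rng.normal(0, 1, n) > 0

X = np.column_stack([credit_score, debt_to_income, zip_risk_score])
model = LogisticRegression(max_iter=1000).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in zip(["credit_score", "debt_to_income", "zip_risk_score"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")  # heavy weight on the proxy is a red flag
```

Permutation importance is only one of many attribution techniques, and, as noted above, which of them are reliable for a given model class and use case remains an open question.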

There are also macro risks to the health of the economic system. Financial companies applying data-driven AI tools at scale could create market instability, including incidents such as flash crashes, through automated herd behavior if algorithms implicitly follow similar trading strategies. AI systems could even functionally collude with one another across organizations, for example by bidding in concert to drive a stock’s price up or down, creating new forms of anticompetitive behavior.
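A stylized toy simulation can illustrate the herding mechanism. In the sketch below (an illustrative assumption, not a calibrated market model), agents sharing an identical stop-loss rule turn a small exogenous dip into a self-reinforcing cascade once enough of them act in lockstep.

```python
# Stylized toy simulation (illustrative assumption, not a real market model):
# agents sharing an identical stop-loss rule can turn a small shock into a
# self-reinforcing price cascade once enough of them act in lockstep.
import numpy as np

def simulate(herd_fraction, n_agents=200, steps=100, impact=2e-4, seed=0):
    rng = np.random.default_rng(seed)
    prices = [100.0, 100.0]
    herd = int(herd_fraction * n_agents)
    for t in range(steps):
        last_ret = prices[-1] / prices[-2] - 1
        # Independent traders place uncorrelated +1/0/-1 orders
        noise = int(rng.integers(-1, 2, n_agents - herd).sum())
        # Herd: identical stop-loss rule, everyone sells after a >0.5% drop
        herd_orders = -herd if last_ret < -0.005 else 0
        shock = -0.01 if t == 50 else 0.0  # one-off 1% exogenous dip
        prices.append(prices[-1] * (1 + impact * (noise + herd_orders) + shock))
    return min(prices)

for f in (0.0, 0.1, 0.4):
    print(f"herd fraction {f:.1f}: minimum price {simulate(f):.2f}")
```

With few herding agents the dip is absorbed; past a threshold, the shared rule makes selling beget selling, which is the nonlinearity that concerns regulators.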

Toward responsible AI

Most AI risks are not, however, unique to financial services. Companies from media and entertainment to health care and transportation are grappling with this Promethean technology. But because financial services are highly regulated and systemically important to economies, firms in the sector have to be at the frontier of good AI governance, proactively preparing for and avoiding known and unknown risks. Banks are already familiar with governance tools like model risk management and data impact assessments, but how these existing processes should be modified in light of AI’s impacts remains an open conversation.

Enter responsible AI.

————

By: MIT Technology Review Insights
Title: Deploying a multidisciplinary strategy with embedded responsible AI
Sourced From: www.technologyreview.com/2023/02/14/1066582/deploying-a-multidisciplinary-strategy-with-embedded-responsible-ai/
Published Date: Tue, 14 Feb 2023 18:00:00 +0000
