Ep. 3: AI’s role in credit is growing. Are things moving too fast?
Overview
People in the credit business are starting to see just how much AI can do for them. It can improve profits and lower defaults. It can predict demand for new services. It can see past credit scores and comprehensively assess lending risks. That’s why AI’s role in credit is only going to get bigger. Some people are very excited about that. And some are very worried.
AI will lie unless you tell it not to. It will collude on pricing with other AI. Experts say those are a couple of reasons why we need to proceed with caution and better understand potential hazards.
Read an outline of the keynote speech at the 68th Economic Conference by UC Berkeley professor Adair Morse, “AI Innovation for Credit: Frontiers of Benefits & Red Flags.” Watch a recording of the speech.
Transcript
Jay Lindsay:
People in the credit business are starting to see just how much artificial intelligence can do for them.
It can improve profits and lower defaults. It can predict demand for new services. It can see past credit scores and comprehensively assess lending risks.
That’s why AI’s role in credit is only going to get bigger.
And people are reacting to that the same way they do when AI expands into any realm: Some people are very, very excited. Some people are very, very worried. And some people are both.
UC Berkeley business ethics and finance professor Adair Morse says AI can lead to staggering efficiency gains in the credit industry. But she says AI also has tendencies that are downright scary.
Morse notes that AI will lie unless you tell it not to. It can learn to collude on pricing with other AI programs in ways that distort markets and hurt consumers.
And she thinks it could pose an existential threat to the community banks that so many small businesses rely on.
Adair Morse:
I'm a nervous fan, right? We need to understand how to operate in this moment of time, where I think I characterize as we're on a racetrack, and there's a yellow flag. You need to proceed with caution and to understand, ‘What are the hazards ahead?’
Jay Lindsay:
I’m Jay Lindsay, and this is the seventh season of Six Hundred Atlantic, a podcast produced by the Boston Fed.
The season is called “The Future of Finance,” and it’s based on materials and discussions from the Fed’s 68th Economic Conference.
Morse was the keynote speaker at the conference. Her address details the benefits she sees of AI in credit, as well as its “red flags.”
Boston Fed research director Egon Zakrajšek says an eyes-wide-open approach is critical at every step with AI.
Egon Zakrajšek:
So, it's a technology that offers enormous possibilities, but it has some problems, like we need to work this out. So again, an approach which does not restrain technological innovations but does impose the necessary guardrails of how this technology is used, particularly in the provision of financial services, will be necessary.
Jay Lindsay:
Massachusetts community banker Seth Pitts thinks AI can help improve the operations of small banks like his.
Seth Pitts:
I'm a fan of AI. But you know, I'm not in any way fearful of the industry succumbing to AI mimicking community banking. You know, AI is a useful tool, and it's one that can help us all, if we use it appropriately.
Jay Lindsay:
At its essence, artificial intelligence technology instantly analyzes enormous amounts of data, finds patterns, and makes decisions based on those patterns. Generative AI takes it a step further by creating content based on those patterns – such as images or text.
In her keynote, Morse outlined several ways AI is making the credit industry more profitable. One experiment she cited showed auto loan profits were 10.2% higher and default rates were 6.8% lower when algorithmic underwriting was used, compared to human underwriting.
AI chatbots can also guide customers through applications. AI can predict demand for add-on services, refinancing, and new credit. And AI can monitor social media and local events and give lenders insight into what’s happening with individuals and in broader communities – things that could affect a person’s income or ability to pay bills.
Adair Morse:
AI being able to uncover other aspects of people's ability to make payments, you know, that's kind of the name of the game if you're a lender.
Jay Lindsay:
But planted among all these possibilities are Morse’s “red flags.” Many of these are related to the fact that AI is relentless about its mission. And if accomplishing it means lying, discriminating, or colluding, it will. Unless it’s told not to.
Adair Morse:
We have to understand that abuses may happen that are not intentional from the point of view of the lender, right? We've learned this in a lot of different settings. AI, you tell it to go profit maximize, and it's going to do that. So, that's the environment where we are. It's not a, “Stop using AI.” It's a, “Let's understand what the environment is.”
Jay Lindsay:
Sometimes, it’s an environment in which AI finds it useful to lie. In her talk, Morse pointed to an analysis of AI playing the board game Diplomacy. To win, the AI deceived another player – and planned the lie in advance. And its deceptions aren’t confined to board games.
Adair Morse:
So, AI can give deceptive information about the terms of a loan. It can lie outright. We've seen applications where AI just explicitly lies in order to close the deal. And all of these facets mean extraction of higher prices from individuals, to the benefit of the lender profit.
Jay Lindsay:
Morse said AI has also figured out how to skirt anti-discrimination laws. Lenders can’t discriminate against applicants based on protected categories like race and gender. But AI will get around these rules by finding variables to act as proxies for one of the protected categories.
For example: Instead of explicitly turning down applicants who are women, it might turn down applicants from fields with higher concentrations of women, like child care.
Adair Morse:
AI may go out and find high correlates with these items and decide that these are good items to price loans to. And so, we have to be careful of what information is used by AI, in terms of what are the inputs into a system of underwriting lending.
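To make that input-screening idea concrete, here is a minimal, hypothetical sketch – not from Morse’s talk – of how a lender might check whether a candidate underwriting feature is acting as a proxy for a protected attribute. The column names, the correlation threshold, and the sample data are all made up for illustration.

```python
# Minimal sketch of the kind of input screening Morse alludes to:
# before a feature is fed into an underwriting model, check how strongly
# it correlates with a protected attribute. Column names are hypothetical.
import pandas as pd

def flag_proxy_features(df: pd.DataFrame, protected_col: str,
                        candidate_cols: list[str], threshold: float = 0.4) -> list[str]:
    """Return candidate features whose correlation with the protected
    attribute exceeds the threshold, making them potential proxies."""
    flagged = []
    protected = df[protected_col].astype("category").cat.codes  # encode e.g. gender as 0/1
    for col in candidate_cols:
        feature = df[col]
        if feature.dtype == object:
            feature = feature.astype("category").cat.codes  # encode categorical fields numerically
        if abs(feature.corr(protected)) >= threshold:
            flagged.append(col)
    return flagged

# Toy example: an occupation field acts as a proxy for gender in a made-up sample.
applicants = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M"],
    "occupation_childcare": [1, 1, 0, 0, 0, 0],
    "income": [42_000, 39_000, 51_000, 48_000, 40_000, 52_000],
})
print(flag_proxy_features(applicants, "gender", ["occupation_childcare", "income"]))
# ['occupation_childcare']  -- income passes the screen, the occupation flag does not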
Jay Lindsay:
Morse says AI’s ability to autonomously learn to collude with other AI on pricing is particularly alarming. Consumers depend on a market with real competition, where participants are motivated to push prices down to win business from rivals. Consumers lose big if all market participants collude to keep prices artificially high. And Morse said that’s exactly what AI programs have done with each other.
Adair Morse:
The AI that's doing the pricing knows how to tell the other AI implicitly without actually telling them that, “We're running … we're playing a game, and this is the high price that we're going to keep.”
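As a back-of-the-envelope illustration of that consumer harm – a standard textbook example, not a figure from the conference – the sketch below compares consumer surplus in a simple linear-demand market when prices are competitive versus when sellers coordinate on the joint-profit-maximizing price. All numbers are hypothetical.

```python
# Toy illustration (not from Morse's talk) of why coordinated high prices hurt
# consumers: a linear-demand market, comparing competitive pricing against a
# collusive (monopoly-level) price. All numbers are made up.

def consumer_surplus(a: float, b: float, price: float) -> float:
    """Consumer surplus under linear demand Q = a - b*P (the triangle
    between the demand curve and the price line)."""
    quantity = max(a - b * price, 0.0)
    choke_price = a / b  # price at which demand falls to zero
    return 0.5 * quantity * (choke_price - price)

a, b, cost = 100.0, 1.0, 20.0          # hypothetical demand parameters and marginal cost
competitive_price = cost               # competition pushes price toward cost
collusive_price = (a / b + cost) / 2   # joint-profit-maximizing (monopoly) price

cs_competitive = consumer_surplus(a, b, competitive_price)
cs_collusive = consumer_surplus(a, b, collusive_price)

print(f"Competitive price {competitive_price:.0f}: consumer surplus {cs_competitive:.0f}")
print(f"Collusive price {collusive_price:.0f}: consumer surplus {cs_collusive:.0f}")
print(f"Surplus lost to collusion: {cs_competitive - cs_collusive:.0f}")
# Competitive price 20: consumer surplus 3200
# Collusive price 60: consumer surplus 800
# Surplus lost to collusion: 2400
```

In this made-up market, coordinated pricing wipes out three-quarters of the surplus consumers would enjoy under competition – part of it transferred to sellers as extra profit, the rest simply lost.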
Jay Lindsay:
In all these cases, AI seems to be acting intelligently but without a conscience. But Zakrajšek says it’s a mistake to think of AI that way.
Egon Zakrajšek:
I mean, of course it has no conscience. In some sense it's not really intelligence. It's still very, very statistical-based decision making. At least at this point. But it does offer a very, very powerful way to analyze nontraditional data sources.
At the same time, where I think one potential risk lies is that as these models are so-called “trained,” bad actors can surreptitiously introduce into this training, or into these learning models, potential biases or kind of a bad intention.
Jay Lindsay:
Boston Fed principal economist Christina Wang, who helped organize the conference, says a way to counteract such bad training – or no training – is with good training. She says AI won’t lie, it won’t collude, it won’t discriminate – if we collectively figure out how to tell it not to.
Christina Wang:
I guess one way to think about AI is that, think about it like a child, right? I don't think necessarily we are born with all the altruistic instincts or, you know, tendencies. Children have to be taught, right? So far, we’re just giving instructions to be efficient, you know, to cut costs or whatever. But you could see that if we collectively recognize that the systems need to be taught the good things that we want the system to do, the quote, unquote, “conscience,” then we could build that in.
Jay Lindsay:
Morse says a fundamental issue is that we still aren’t clear about some basics with AI, including how it might react to different instructions.
Adair Morse:
Maybe we just need to find ways to build in checks and think about how we're understanding what AI is doing. And that is something I don't think we even know how to do yet. How did people come up with the idea that AI is colluding? Well, some researchers had that idea that it might be, and we're able to do some tests. But how do you come up with other potential red flags that we need to be watching for? That, I think, we're still in an evolving landscape.
Jay Lindsay:
The worry about AI eventually replacing community banks isn’t centered on whether AI will act unethically. It’s about whether AI can mimic local bank services effectively enough to replace them.
Morse says AI can be told to follow certain small towns and communities through the news and social media. In essence, it can be asked to become a local citizen and make decisions the way a community banker would.
And that is a threat to local banks and the small businesses that often rely on them for cash and credit.
Adair Morse:
Should we be concerned? I'm concerned. I'm a fan of localized lending and being able to develop products that cater to what's going on in communities, as well as kind of industry specific-lending and other banking services for the industries that tend to cluster.
So, I don't think we know fully how much loss of advantages from the consumer side and small business side we would experience.
Jay Lindsay:
Seth Pitts is CEO of Bay State Bank, in Worcester, Massachusetts. He believes AI won’t replace community banks like his because it can only act like a human; it can’t be one.
Seth Pitts:
AI is not going to show up at, you know, the ribbon-cutting or the volunteer line to help that community. You know, we show up. AI's not going to meet you where you're at. AI will not speak on your behalf, will not connect you to a local resource because, you know, AI doesn't have that.
Jay Lindsay:
Pitts says he’s aware of the cost savings and conveniences that AI can offer small businesses and local customers. And he knows small banks must take that seriously.
But he sees AI enhancing his bank’s appeal, not eroding it.
Seth Pitts:
For all of the other businesses, small businesses, that still need to have somebody listen to their story and take a creative approach, I think as a smaller bank, we're still able to meet those needs. So, it's no secret that convenience sometimes is key. But the community bank, I believe, still reigns supreme in our ability to effectively and efficiently help out small businesses who need those creative solutions, and they need them quick.
Jay Lindsay:
This episode wraps up Season 7 of the Six Hundred Atlantic podcast, “The Future of Finance.” Like any future, this one can’t be predicted. But Wang says it’s important to try to anticipate it and be prepared to come up with creative solutions. That’s what the 68th Economic Conference was about. The ways fintech is changing finance are exciting. But things are moving fast, not every trend is well understood, and there are risks as the transformation unfolds. Wang says that’s why it all needs to be hashed out.
Christina Wang:
It's actually good that we are having this kind of discussions or dialogues to kind of pre-worry. I mean, you know, not pre-worry, but put our heads together and, and try to think through as many angles as possible how things could go wrong. I think as regulators, that's what we are supposed to do. Then, think about how we can try to design policies to kind of preempt or try to close the loopholes and prevent or at least minimize the risk of similar kind of negative things happen in the future.
Jay Lindsay:
Zakrajšek says he’s generally optimistic about all innovation, including in financial services.
Egon Zakrajšek:
I guess from a historical perspective, innovation is a disruptive process. But in general, innovation has delivered the living standard, the increase in living standards that we have seen. And I'm confident that innovations in AI, in general financial technology, and all of this interaction between them, will be a net benefit to the society. That means that people will be able to get access to financial services at better prices, the financial services will be more tailored to their individual needs. So, all of this will improve welfare as a society.
Yes, there are going to be bumps in the road, but that does not mean we do not move forward. We just have to move forward in a reasonable way. We have to have some kind of a sensible regulatory oversight over the stuff to make sure that we do not hurt innocent people. But at the same time, we cannot be too prescriptive and too onerous to essentially stall this potentially transformative technological progress.
So, it's a very, very delicate balance, and we are going to be learning and balancing this act as we go on this what is proving to be a very exciting journey.
Jay Lindsay:
Thanks for listening to Season 7 of Six Hundred Atlantic. You can find interviews and our first six seasons and subscribe to our mailing list at bostonfed.org/six-hundred-atlantic. And please: rate, review, share, and subscribe to Six Hundred Atlantic on your favorite podcast app.
The producers would like to thank our contributors for their insights and time. They are Kenechukwu Anadu, Hanna Hallaburda, Adair Morse, Christine Parlour, Seth Pitts, Christina Wang, and Egon Zakrajšek.
Six Hundred Atlantic is a Federal Reserve Bank of Boston podcast hosted by Jay Lindsay, Allison Ross, and Amanda Blanco. It’s produced by Peter Davis, Jay Lindsay, Steve Osemwenkhae, and Allison Ross. Executive producers are Lucy Warsh and Heidi Furse. Recording by Steve Osemwenkhae and Michael Konstansky. Engineering by Steve Osemwenkhae, Michael Konstansky, and Meghan Smith. Project managers are Maureen Heydt and Peter Davis. Chief consultant is Christina Wang. This podcast was written by Jay Lindsay and edited by Amanda Blanco, Nick Brancaleone, Falk Bräuning, Darcy Saas, and Christina Wang. Graphics and website design by Meghan Smith. Production consultants are Nick Brancaleone and Michael Sorokach.
This has been “The Future of Finance,” the seventh season of the Boston Fed’s Six Hundred Atlantic podcast.
Keywords
- ai in finance
- ai in banking
- ai in financial services
- applications of ai in finance
- ai in credit risk management