The MoneyPot

Unlocking AI's Promise in Finance: Charles Kerrigan on Mastering Legal Challenges and Compliance

Rachel Morrissey, Sheryl Chen, Ian Horne, Micky Tesfaye

Is your firm AI-ready? Tune in as The MoneyPot hosts pick the brain of fintech legal sage Charles Kerrigan and explore the legal labyrinth of AI in finance. We leave no stone unturned on how AI's potential can be harnessed within the boundaries of the law, and why understanding the fine print of AI regulations is not just important but crucial for innovation and compliance.

This episode is a masterclass in navigating the AI terrain, as we not only dissect the nuances of fraud prevention and data management but also share insights on the evolving responsibilities of professionals when AI tools don't quite hit the mark. With Charles's expertise, we shed light on the intersection of AI capabilities with the pressing need for human oversight, and how financial services firms can future-proof their operations by integrating AI safely and strategically.

This is also part of The MoneyPot series leading up to Money20/20 in Amsterdam, where Charles will be helping to guide us through this next evolution.

Follow us on LinkedIn

Ian Horne:

Welcome to The MoneyPot. I'm Ian Horne, the EU Head of Content at Money20/20. Money20/20 has an epic show in Europe coming up, and this is the last of four episodes of our MoneyPot podcast that focus on the big topics our speakers will tackle at the RAI in Amsterdam. And guess what we haven't done yet? Artificial intelligence. But we're not just going to talk about AI. To be honest, I'm sort of bored of the conversation when it's general and broad, but we are going to bring in a fintech lawyer to scare the absolute bejesus out of you. But first, someone slightly less scary, well, a lot less scary, I would say: my co-host, Rachel Morrissey, our US Head of Content. So, Rachel, how are you? Good to have you here.

Rachel Morrissey:

I'm good. I'm good.

Ian Horne:

And I have a question for you to get us started. Firstly, which AI tools are you using right now, and how confident are you that they comply with copyright laws, or indeed any other data laws?

Rachel Morrissey:

I am using, well, I use several different AI tools, including a search tool. I don't believe any of them comply with any copyright laws or any data laws. In fact, I'm not sure. I mean, there are a bunch of suits going on in the US. Copyright laws are quite strong in the US, but data laws are quite weak, and so I think that will be an interesting argument on the States' side of this, where I think it could be a little bit different in the EU.

Ian Horne:

Yeah, I'm not sure, because I'm using AI tools for image generation, for blogging and things like that and I honestly could not tell you if I'm breaking the law.

Rachel Morrissey:

Geez Louise.

Ian Horne:

I do image generation.

Rachel Morrissey:

Image generation? I don't want anybody coming to me and saying you made the president say nasty things or something like that. It's crazy.

Ian Horne:

Oh, it's all right, I write about cryptocurrency. I've kind of given up on being too careful at this point. Anyway, we should meet our interviewee, Charles Kerrigan, a partner at international law firm CMS and an expert in the legal aspects of fintech, digital assets, crypto and AI. Charlie is also a board advisor for Holistic AI and AI and Partners. Charlie, welcome to The MoneyPot.

Rachel Morrissey:

Hey, okay, Charlie, let's just start right from the very, very top level. Everybody in fintech, banking and payments is looking into AI, and they're sure it's going to be the magic pill that changes their lives forever and fixes everything. I'm not as sure. But from a legal perspective, what are the big concerns we should be looking at?

Charles Kerrigan:

Yeah, well, hopefully I'm not here to frighten people, but if I am, they should be frightened already, because AI is highly regulated, and if you're doing this in fintech, financial services is already regulated; one of its key characteristics is the work of regulators. So if we think of something like the EU AI Act, which is what's got everyone talking, because we've got the final text and we're now in the implementation period, we'd have, for fintech, vertical regulation from the financial regulators and horizontal regulation. This is regulating AI as AI: it's a general-purpose technology, therefore it can do anything, can do everything, and it gets its own regulation in the EU. What does that mean for fintechs? Well, this question comes up a lot, because fintechs and financial services firms all have risk and compliance people. So, because I know you guys, I can share a top tip: if you want to reduce your risk from AI, you just don't use it. You can reduce it to zero. But that is not a good plan.

Rachel Morrissey:

That feels a little bit hard to avoid. It's sort of like secondhand smoke at this point. Half the time I'm not even sure we're aware that we're using AI anymore.

Charles Kerrigan:

I think that's exactly right, and, as a large firm, you are working now in an environment where all of your customers are using AI. As you two started the discussion, we're all testing and trying all of the models, and we're trying the apps that are being developed from the models. We've had AI in our devices for a long time already.

Charles Kerrigan:

I think there is sometimes a difficulty around the conversation as it plays out in the mainstream media: AI beat Garry Kasparov at chess, then took 20 years off, then came back in November 2022 and could make pictures. Definitely there was a lot going on in between those two periods. It was being developed in financial services, because financial services institutions are industrial users of data; they are managing and manipulating information. AI has been fantastic at lots of things for them because it's a pattern-recognition tool.

Charles Kerrigan:

That's why we have the classic fraud prevention use case. It can detect anomalies in a way that people can't. Fraudsters covering their tracks don't have the same ability to understand random patterns that computers do, so AI is well used there. I think the cliché about AI in financial services and fintech is that it will be adopted, but possibly a bit slower than in other industries, because of the legacy regulation and the nature of those organizations. They're subject to high requirements for trust with regulators, consumers and policymakers, and they also don't want to be, culturally, at the very bleeding edge of technological uses, particularly when they're facing customers.
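
To make that pattern-recognition point concrete, here is a minimal Python sketch of anomaly-based fraud flagging, assuming scikit-learn's IsolationForest and invented transaction features. It illustrates the technique Charles names, not any real bank's pipeline.

```python
# Illustrative sketch only: unsupervised anomaly detection over
# synthetic transactions. Features and thresholds are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: [amount, hour_of_day, merchant_risk_score]
normal = rng.normal(loc=[50, 14, 0.2], scale=[30, 4, 0.1], size=(1000, 3))
odd = np.array([[9000.0, 3.0, 0.9], [7500.0, 4.0, 0.8]])  # unusual patterns a human might miss
transactions = np.vstack([normal, odd])

# IsolationForest isolates points that are easy to separate from the rest.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(transactions)

flags = model.predict(transactions)  # -1 = anomaly, 1 = normal
print(f"Flagged {int((flags == -1).sum())} of {len(transactions)} transactions for human review")
```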

Ian Horne:

Yeah, and one thing I found really interesting about the very start of your response is that you say you could just not use AI, but you absolutely don't advise that, because obviously everyone else is using it. But let's say we look back on this, and people start bringing lawsuits and get litigious. Just how good a legal defense is "everyone else was doing it"?

Charles Kerrigan:

Well, no defense at all. And yeah, you don't want to be the person who's picked on. I think you've described some of the non-financial-services suits, and it's apparent from those that the organizations with a high profile and an ability to make their case in the public's mind, as well as the court's mind, are those that have been relatively quick to start this. In some of those cases, I think you can assume that the judges are pretty good readers of what's going on, so they're likely to reach good decisions.

Charles Kerrigan:

But we can't use litigation as an effective tool here; the time and cost of doing that is just so disproportionate to what you achieve. The AI regulation is well supported by standards, domestic standards and international standards, and by operating according to standards you can start to see yourself building some sort of a defense. The copyright problem that you referenced at the start generally isn't one in financial services. And I think maybe the UK is even more challenging than the US, because we don't have the fair use defence that exists in the US, so we've got stricter rules around unauthorised use of copyright material. In financial services, we're generally seeing banks working initially with their own data and then synthetic data to ensure that they're able to manage out issues of bias and discrimination. And again, maybe we could use that as a way of describing the nature of regulation of technologies.

Charles Kerrigan:

The regulation of technologies, and now of AI, takes something from the culture of the jurisdiction that you're looking at. So if we think about the EU AI Act, it's classic European legislation: it's consumer focused. The critique is that if there's a balance between consumer protection and innovation, the European Commission will lean towards the former. The US will say it has a more pro-innovation stance, but of course the flip side of that is that there's less consumer protection. So the regulation will have common characteristics wherever you see it, and the EU Act is establishing those now, but it will be adopted at different points on the spectrum. So maybe a quick point on those common characteristics.

Charles Kerrigan:

Well, what the EU is doing is saying that you've got to know what your AI is and plot it on a risk spectrum. Certain kinds of AI are an unacceptable risk, if they are using, for example, biometric data; it's the kind of thing that's generally ruled out, like social scoring and manipulation. Some things are trivial, so they don't matter: if you've got recommender engines that are just making you watch a film that you don't like, then nobody's really harmed by that. And in the middle, it's those medium risks, or rather unacceptable risks being managed down to acceptable levels, and, if something's considered high risk, having guardrails put around it. So the thing that's characteristic across all of these technologies is: first, you've got to know what AI you're deploying in your organization; you've got to be able to make an assessment of the risk; and you've got to be able to justify that to regulators.
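
To make the risk-spectrum exercise concrete, here is a hedged Python sketch of a deployment register plotted against the Act's broad tiers. The tier descriptions and example systems are illustrative assumptions, not the legislative text.

```python
# Sketch of plotting AI deployments on a risk spectrum, loosely
# following the EU AI Act's broad categories. Examples are invented.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright (e.g. social scoring, manipulation)"
    HIGH = "allowed, but with guardrails, assessment and documentation"
    LIMITED = "transparency obligations"
    MINIMAL = "no extra obligations (e.g. a film recommender)"

@dataclass
class Deployment:
    name: str
    purpose: str
    tier: RiskTier

# Step one: know what AI you're running. Step two: assess each item.
register = [
    Deployment("cv-sifter", "shortlist job applicants", RiskTier.HIGH),
    Deployment("film-recommender", "suggest content", RiskTier.MINIMAL),
]

for d in register:
    print(f"{d.name} ({d.purpose}): {d.tier.name} -> {d.tier.value}")
```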

Ian Horne:

Yeah, I think the cultural differences in how this is regulated are very interesting too, especially when you think that AI is largely going to appear online. I'm not thinking so much in terms of financial services here, but purely in terms of copyright infringement. What I'm thinking about is those fake football kits you can get online, which cost a fraction of the price of the real thing. Anyway, I will skip along, because I had a chat recently, Charlie, with Michael Borelli, who's also with AI and Partners, and he mentioned something that I think would give any compliance team a headache. We talk about companies using AI, but perhaps companies are using AI without realizing it. What Michael and I were talking about was that if you have a thousand employees and each of them uses their own AI tools, three or four each, say, you've got so many different exposure points to AI, and that itself could cause some legal problems, right? Is that the case, and how the hell do you stay on top of that?

Charles Kerrigan:

Yeah. So one of the things we do when we go into large financial services firms is first ask the question: how many AI deployments have you got? Generally, a big financial services firm or a fintech is not going to be a developer of AI. Yet, I say, because I think in time people will have their own proprietary models doing the things that they want. I think what we're first seeing with the LLMs is that we're all using a generic tool. It's an amazing tool, but it's a generic tool, and what's going to get tuned up over the next few years is that those generic tools will be applied to specific use cases and specific organizations. That hasn't happened yet.

Charles Kerrigan:

So they're using other people's tools. How many do they have in the bank environment? I've already referenced that they use them for fraud prevention. They use them for optimization of things like trading, algorithmic trading. Every large firm's HR department uses AI tools, because you've got to have something that helps sift CVs, so you have the risk there of introducing bias.

Charles Kerrigan:

So we always ask the question: how many deployments have you got? And we almost always get the answer: we don't know. So the first phase is to go off and make a list. Generally, it's not the case that firms have AI deployments that they don't know they're using; generally they know that they've got some element of AI in them. So this is an initial exercise in getting that list together and starting to do some work in clustering: what types of tools are being used, what have they got in common, what tasks are they being made to do, what safety rails are being put around them? So there's a fairly wide exercise that is, to some extent, going on within those organizations anyway, because they all have risk and compliance teams that are working on this, but not necessarily seeing all of the tools in the same framework, and that's one of the things that directly applicable regulation like the EU AI Act is bringing along.
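
A minimal sketch of that "go off and make a list" exercise might look like the following, with invented tools and safety rails standing in for a real firm's estate.

```python
# Hedged sketch: inventory AI deployments, cluster them by task, and
# surface the ones with no guardrails. All entries are hypothetical.
from collections import defaultdict

inventory = [
    {"tool": "vendor-fraud-model", "task": "fraud detection", "rails": ["human review of alerts"]},
    {"tool": "cv-screening-addon", "task": "HR sifting", "rails": ["bias audit", "human sign-off"]},
    {"tool": "trading-optimiser", "task": "algorithmic trading", "rails": ["kill switch", "position limits"]},
    {"tool": "chat-summariser", "task": "document drafting", "rails": []},
]

# Clustering lets risk and compliance see similar tools in one framework.
clusters = defaultdict(list)
for entry in inventory:
    clusters[entry["task"]].append(entry["tool"])

for task, tools in clusters.items():
    print(f"{task}: {', '.join(tools)}")

# Deployments with no safety rails are the first compliance gaps to close.
print("No guardrails:", [e["tool"] for e in inventory if not e["rails"]])
```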

Rachel Morrissey:

So I need to ask a little bit about content creation with AI, and before we do that, I wanted to ask: are we differentiating, or is there any differentiation, between AI, which can be any number of things, from a particular function, the beating-Kasparov-at-chess kind of thing, to AGI, which is what ChatGPT has sort of spilled into the consciousness of the world? The way the AIs, particularly in the United States, are really being challenged is that people are utilizing copyrighted material to teach and feed an AGI. So does that have any differentiation? Because I'll confess: on this podcast, I use AI and then I try and fix it. Like, we use AI to spit out a transcript of this conversation. I've used AI to spit out a blog and then desperately tried to fix it, because it's terrible. But that isn't a reflection of the podcast, just a reflection of AGI.

Ian Horne:

Glad you clarified that.

Rachel Morrissey:

I'm curious about that. Is there any kind of separation that people are really thinking about?

Charles Kerrigan:

Regulators can't get into the weeds to such an extent that they can see what all the tools are doing differently and the same, so they'll focus on outcomes-based regulation.

Charles Kerrigan:

I think a great way of thinking about AI, even leaving aside the AI regulation that's coming in and that we're having to think about, is by reference to the UK's consumer duty.

Charles Kerrigan:

So, however you reach a conclusion about a product that is suitable for a consumer is kind of up to you, as long as you can justify it and the consumer has a positive outcome. The positive outcome is relatively easy to test. The justification raises that question around what's called explainability, or transparency. In particular, if the consumer has a bad outcome and you can't demonstrate to the regulator how that decision was reached, then you've got two related kinds of problem. If the customer has a questionable outcome but you can demonstrate to a regulator that you were using the right tools and the right information, and that the use of that information reached a justifiable conclusion, then you've got a chance of being able to put forward a good explanation, a good defense, if it gets to that. So, thinking about the tools: the regulators are for sure interested and will use them themselves, but they will probably retain a similar kind of approach of saying, we already have ways of looking at this; we see what it means for consumers in a consumer-facing organization.
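
To show what that justification trail could look like in practice, here is a small, hypothetical Python sketch of a decision log that records the inputs behind a consumer outcome. The field names are assumptions, not a regulatory schema.

```python
# Hypothetical sketch: capture enough context at decision time to
# explain a consumer outcome to a regulator later.
import json
from datetime import datetime, timezone

def log_decision(customer_id: str, inputs: dict, outcome: str, model_version: str) -> str:
    """Serialise the data and model behind a decision for later audit."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "customer_id": customer_id,
        "model_version": model_version,
        "inputs_used": inputs,  # the information the decision relied on
        "outcome": outcome,
        "review_route": "human escalation available",  # human-in-the-loop expectation
    }
    return json.dumps(record)

print(log_decision("c-123", {"income": 42000, "existing_credit_lines": 2}, "declined", "scoring-v1.4"))
```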

Ian Horne:

That's very interesting. So I guess you need to say why you've chosen to use AI in a particular tool, but you don't need to demonstrate the thought process of the AI. But then if there's a buggy AI model, who's liable? If someone's AI tool produces a wrong result, who's on the hook for that?

Charles Kerrigan:

Yeah, well, no surprises there. You have in mind, I think, some of the famous cases, like the one with the lawyers who were handing things into court that they didn't check.

Rachel Morrissey:

For fake cases? The ones that they didn't check?

Charles Kerrigan:

Obviously they should have checked them, because lawyers have got lots of database tools: if someone describes a case to me, I can look up whether it exists. So I think we're probably through that phase. We'll have a lot more examples of us all, including me, tripping over the next problem that we identify, or identifying it when it happens to us.

Charles Kerrigan:

I think one of the ways of thinking about what AI is good at, for the GPTs, is this: they're trained essentially on information on the internet, and, you should assume, non-paywalled information on the internet. So if you're thinking about a task, they'd generally be good at tasks where they can supply the answer by fishing around in that type of information. Maybe an example: if I, as a lawyer, want some information about a piece of legislation that's relatively new, every law firm around me has published a client briefing, so an AI system is going to do a pretty good job of producing a sort of paraphrased agglomeration of all of those briefings. But it may not have access to the regulation itself. One of the things that most of the bots have not done is read the text of the EU AI Act. So if you're questioning one deeply on European Commission legislation, it probably hasn't read it, but it will have read materials about it, so it will give you a pretty good general view.

Charles Kerrigan:

But if you start pressing it, it's got nothing else, so it's going to recycle what it's told you already. And if you press it again, then it'll just start making things up, because it doesn't like to disappoint you. So you can, to some extent, anticipate when it's going to go off-piste and just say stuff because you're asking it to, and when it's going to tell you things that it's used real sources for. The short answer for lawyers is that we need to have our own bots of some sort that we're feeding the legislation into, or we need to work with the content providers that lawyers use for our databases, and of course that's going on. Anything that we're raising here, I think we're probably all aware that someone somewhere has spotted it, seen an opportunity, and is digging away to try and fix it, which is one of the amazing things about AI: you can use AI to fix things quicker once you've spotted something that needs addressing.
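
A toy sketch of that idea, feeding the system the legislative text itself and grounding the answer in a retrieved passage, might look like this. The corpus snippets and keyword scoring are stand-ins for a real retrieval pipeline.

```python
# Toy retrieval-grounding sketch: answer from the actual text, not
# from summaries of it. Corpus excerpts here are paraphrased stand-ins.
corpus = {
    "Article 5": "prohibited ai practices include social scoring and manipulative techniques",
    "Article 6": "classification rules for high risk ai systems and their providers",
}

def retrieve(question: str) -> tuple[str, str]:
    """Return the article whose text shares the most words with the question."""
    q_words = set(question.lower().split())
    return max(corpus.items(), key=lambda kv: len(q_words & set(kv[1].split())))

article, passage = retrieve("which ai practices are prohibited, like social scoring?")
# Grounding the prompt in the retrieved passage stops the model improvising.
prompt = f"Answer using only this passage from the EU AI Act ({article}): {passage}"
print(prompt)
```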

Ian Horne:

Yeah, let's harness that positivity. Actually, no, Rach, you had a question there, didn't you? I'll step back.

Rachel Morrissey:

I was reading a book written by Kevin Kelly, who was the editor of Wired for years. This was years ago, seven years ago, before ChatGPT, and his point was that AI learns differently than humans, and that maybe we should guard against AI gaining any kind of consciousness, but harness AI specifically for these kinds of things. When you were talking there, it made me kind of giggle, because I kept thinking about this loop where AI could learn the regulations of AI and then fix its own AI. And then you start wondering who's fixing what and where we go with that. Like, if we rely on AI to fix AI, where's the check?

Ian Horne:

You know, like, how do we...?

Rachel Morrissey:

Manage that. Like, my brain just started worrying.

Ian Horne:

Yeah, then who's fixing the AI that fixes the AI? And I wasn't even thinking about it being dystopian.

Rachel Morrissey:

It could be very utopian, it could be very nice. I'm like, oh, I have a 25-hour work week now because AI fixed AI, and now, you know, the productivity level is what it is. That's not dystopian. But it's just something I keep wondering about, whether that will happen.

Charles Kerrigan:

Yeah, I think for sure we go down that road, inevitably, because before generative AI, one of the AI tools was natural language processing, so AI could read. Generative AI means it can now read and write: it can digest texts and it can produce answers. That's how it should be. So we can think about whether it's a good or bad thing for AI to, let me be blunt about the implications of what you're saying, disintermediate lawyers, or content creators, or anybody.

Charles Kerrigan:

We have crossed the boundary where the AI can review regulation. It can review the reams of financial regulation that's out there, provided that you're feeding it the text of the regulation, not the text of summaries of the regulation, in the way that I described before. We've been working on that for a few years, getting the systems to review what are long, complex texts. The first thing you can get them to do is a bit like a librarian: they bring you the relevant rule back, and then you've still got a lawyer, but the lawyer's time has been saved scanning the text to find the relevant rule. So you've got through stage one. The lawyer then reads the rule, and the difference between a good lawyer and a bad one is that a lawyer who's not earning their keep will just effectively pass the rule back onto you, acting as a second librarian saying here's what the regulation says you can and can't do. A good lawyer will say: well, this is the regulation, and here is how it fits with your problem, so let us tell you what you can do rather than what you can't, and this is the approach that you should take. So you've still got a lawyer in the loop there. The next phase of this, I think, is that the AI systems, which can already read the rules, can bring the rules to a client and say: this is roughly how the rule fits in practice, how regulators use it (we've got texts and decisions from regulators), and how it can be applied to your particular case. So the AI is not quite telling you what to do, but it's sort of prompting you, and then the piece the lawyer is doing is a kind of last mile: okay, in that case, applying it specifically to your facts and using some judgment about the lay of the land in regulatory and commercial terms, this is what you should do. So it pushes lawyers, I think, to provide more of the role that people would like them to: you're relying on your lawyer now to tell you what you should do rather than to tell you what the rules are. And I think that's one of the things that we see in relation to AI regulation.

Charles Kerrigan:

There's a lot of information floating around that summarizes and repackages the AI regulation. You need to know that, for sure. But going back to the point about a large financial services firm: they can have 5, 10, 15,000 AI deployments. You can't give them to a lawyer one at a time and then wait 25 years for them to come back with a little bit about each one. You can't say to your lawyer: do you have 14,999 colleagues so that you can all look at this at the same time? So you've got to use digital tools to operate in a digital environment, and that for sure will have to apply to regulatory services and financial regulation, and the regulators will be there with us. I think it's much easier for them to use tools to do the first look at reporting.

Charles Kerrigan:

If firms are reporting under some of the big European legislation, MiFID for example, it generates vast amounts of information that would overwhelm purely human reviewers. So all of these things start to fit together. Lawyers definitely have a problem with this sometimes: you grow up as a lawyer learning how to do something, and then you kind of repeat it. So I spend a lot of my time in my team with us all encouraging each other to think about this.

Charles Kerrigan:

If that, i.e. the technology, has changed, then we shouldn't assume that everything else will stand still, in other words, the way that we support clients. So the legal industry has got a real opportunity to provide a better service, and I think you'd say this about any firm using this. Why would banks use it? Because you can do a better job for your customers. That's ultimately what's in it for you, and it will be a distinguishing feature. We often say to firms we have this conversation with: don't just be alive to the risks of AI, but think about how you can say that you are safely using AI to provide a better product and a better service for your customers. That's what it's for.

Rachel Morrissey:

And also I mean, when you're talking like that, it's like a better sense of security and a feeling of compliance so that you don't ever worry that you're going to get tagged from outside because you were doing something dumb you didn't know you were doing. I mean, it feels like a lot of places that are highly regulated sort of live with that sense of dread that they might slip somewhere.

Charles Kerrigan:

That's a great way of putting it. Yeah, I think we often talk about it doing the first review, but, as you say, it can also do a final review. So it is a failsafe, with humans in the loop. The EU is very human-rights and human-in-the-loop focused; for sure, it doesn't want purely automated environments.

Rachel Morrissey:

For really good reasons, I think, given the way that biases and things like that can work, which we don't need to delve into. But thank you, that was really interesting.

Ian Horne:

Yeah, it really was. I think I've got one last question; I don't think we've got time for any more. I mean, this has been real, proper tomorrow's-world stuff. Honestly, I think we're looking not just at the future of finance, but the future of technology and, to some extent, humanity. I hope that's not too much hyperbole, but what I would say, if I can tie this into something more tangible, sober and perhaps boring, is: if you're in finance right now and you're trying to use AI, which you suggest is a good idea, what should you actually do? What are the best companies doing to use AI safely and productively?

Charles Kerrigan:

Yeah, sure. So I think, in any large organization, your existing IT vendors are all over this. Any larger organization has probably got four or five innovation teams, not necessarily talking to each other. So it's about bringing together what you've already got and what you already know, and, for sure, having a view on how you're going to deal with directly applicable AI regulation. The EU Act is the most obvious, but the likelihood is that we'll see more of that coming through other jurisdictions. In the UK we've got a private member's bill from Lord Holmes of Richmond, which is, compared to the EU Act, a very short piece of text describing how AI should be regulated, and it's difficult to disagree with the headline themes around safety, transparency and contestability.

Charles Kerrigan:

So we've got pillars that everything relies on. Making a practical start in finance, you're unlikely to swing for the fences on the first day, but you should ensure that you've made a start with the teams that are going to be innovators and supporters in this role. I think one of the clichés we do get is that, like lots of innovation in finance, the most senior executives are favourable towards it and pushing for it, but it can seem that the risk is sitting in the middle of the organisation. So we're sometimes talking to boards of large institutions in the financial sector and saying to them: you really do need to empower folks down the chain, otherwise they won't be able to deliver on what you're saying you'd like to do.

Ian Horne:

Interesting. Yeah, absolutely. One more question, Rachel, or shall I wrap it up?

Rachel Morrissey:

Wrap it up. I'm good, I don't think I can actually take in much more.

Ian Horne:

Yeah, this is exhausting, isn't it? My brain is going, in the best way, in a lot of different directions. You and me both. Okay, let me finish. Thank you, Charlie. That's it for this episode of The MoneyPot. Thank you again, Charlie, for joining us today, and if you want to hear more from him, come to Money20/20 in June at the RAI in Amsterdam, where he'll be speaking on a high-profile panel. We hope to see you there.

Rachel Morrissey:

And, of course, you can be part of The MoneyPot at our Money20/20 shows. Please send us your pitches to podcast@money2020.com. Don't forget to follow us wherever you listen to podcasts; it really helps people find the show. Thank you so much for listening. We love our fintech nerds.
