AI-wary: The three big risks you need to know about
Learn how to manage the three biggest risks of using AI in finance: data security, hallucinations and fraud. This chapter offers practical advice for protecting sensitive information, spotting unreliable outputs and creating sensible policies that support innovation without compromising trust.

📘 Definitions #6
Hallucination: An AI response that sounds plausible but is wrong or fabricated. Hallucinations often contain specific figures, dates or citations to sources that don't exist. They occur because AI systems work by predicting what comes next, based on patterns in their training data, rather than checking facts in a database.
Avoiding two extremes

You’re happily feeding sensitive financial information into a new and imperfect technology that learns by sucking up all the data it can find. What could possibly go wrong?
While AI can massively improve your productivity, using it comes with some risks. What’s more, the technology has opened up exciting new areas for fraud.
It’s not surprising that many organisations’ early reaction was to ban their staff from using AI at all – but that approach won’t hold. You’d be missing out on substantial productivity gains and may find yourself putting off customers, prospective investors and potential staff.
iplicit's COO, Olivia McMillan, says: “The question of governance around AI is challenging. Where do you start? It’s OK at the very start of your AI journey to have your people experiment with very light oversight – but then you have to very quickly develop golden rules, the basic principles that people can cling to as they’re experimenting with AI.”
Daniel Lawrence, CEO of the accounting tech business Bots For That, says: “At the moment, I see a lot of people going to one of two extremes. Either they lock their systems down so tightly that people can’t move or there’s no locking down whatsoever and it’s all too loose.”
You’ll need to find a position in between those extremes. So let’s look at the three big areas of AI risk.
Risk 1: Data security

What data are you putting into AI and can you be sure it won’t end up in the public domain? And what might your staff be putting into AI, either on the organisation’s devices or their own?
“I’d advise people to choose their AI tool with some thought about its privacy policy,” says Jake Moore, Global Cybersecurity Advisor at ESET.
“If you’re using a free version of ChatGPT in particular, you need to remember that its business model is to learn as much as it can and add more data to its algorithm, which is the nuts and bolts of AI.
“If you want to be as safe as possible, you have to assume that in a worst-case scenario, any data you upload to the internet could one day be compromised.”
OpenAI, the company behind ChatGPT, says it doesn’t train its models on data from its paid plans (called Team, Enterprise and API). These plans also offer enhanced security protections, including encryption and compliance with standards such as SOC 2 Type 2 certification. Free users, though, may have their inputs and outputs used for training unless they opt out.
“ChatGPT is potentially not as secure as Copilot, so you have to be really careful with what you put into it,” says Becky Glover. “You should avoid individuals’ names, phone numbers, addresses, dates of birth – any private information like that.”
iplicit's Chief Product Officer, Paul Sparkes, says: “Beware of the free models, albeit they're great and appealing. I'd keep that for your holiday planning.”
Many organisations work in Microsoft’s ecosystem and use its Copilot AI tool, which keeps your data private. Copilot can also integrate powerfully with your other Microsoft applications to help you with emails, calendars, presentations and more. But many people will prefer to continue using other tools.
Apart from risking your own data, you need to bear in mind that you’ve probably given undertakings about what you do with customer information. “If customer data is held within a company or its service, there’s an assumption that it won’t be shared with any third party – and those AI tools are third parties,” says Jake Moore of ESET. “So there has to be something about AI written into contracts.”
CEO and podcaster Indi Tatla says: “If everyone in your organisation is using ChatGPT and you want to encourage it, make sure they’re using company accounts and make sure you know what you’re paying for. Don’t scrimp on it – pay for the level that acknowledges this is your data and it needs protecting.”
Jack Rhodes, Revenue Operations Manager at iplicit, suggests: “Ask yourself the question: Is this information you would share with a stranger in the pub? If it’s not, you probably shouldn’t be giving it to an AI model without some safeguards in place.”

The risks to customer data multiply as more employees get access to that data and use AI in their work. And while there hasn’t yet been a big GDPR case involving data uploaded to an AI tool, that could happen.
The Information Commissioner’s Office advises organisations to apply GDPR principles thoroughly when using AI – and suggests Data Protection Impact Assessments (DPIAs) are an ideal way to demonstrate compliance. The Financial Conduct Authority also has guidance on the subject, stressing the need for processing of personal data to be fair and transparent.
“Many employees have access to huge amounts of data compared with 20 years ago,” says Indi.
“When I joined the world of work, there was no way I could see all aspects of a customer database, for example, but now many people can. When you start introducing AI into the team, it’s very difficult to track what’s going on.
“What’s more, if someone is using their own personal GPT for work, using it for both personal and work tasks, then if they leave, that knowledge has left your ecosystem and has gone. Having guardrails will protect against that but will also ensure the team benefits from the knowledge generated within these systems.”
Daniel Lawrence of Bots For That says: “It’s important to understand the different AI models, the subscription level you’re on and what protection comes with it. You’re potentially using a lot of private data but if you only use AI for generic things, like ‘Give me a response to this email’, you’re missing the opportunity to get the most out of it.
“You only get one chance to lose a customer and that will happen as soon as you deliver the wrong results or do something that breaches security or privacy.”
💡 TOP TIP
Check what the owner of the AI tool can do with your data by default. Opt out of any uses that you’re unhappy with – and if in doubt, use a paid plan.
Risk 2: Hallucinations

You’ll have read about some of AI’s greatest howlers. Once in a while, artificial intelligence will confidently assert something that’s 100% wrong.
For example, there are not three Rs in the word “blueberry”, as ChatGPT-5 told some users in August 2025. Saving Private Ryan did not win the Best Picture Oscar for 1998, as ChatGPT and other LLMs said. And glue is not a good way to stick cheese to your pizza, despite Google’s AI Overview feature suggesting it in May 2024.
That could be embarrassing if you’re hosting pizza night or taking part in a pub quiz (the answer you’re looking for is Shakespeare In Love). But more concerning in the professional world are the cases of AI getting maths wrong, citing sources that don’t exist or making omissions and fabrications when condensing documents.
“AI can get the most simple things wrong yet often does the most complex things really well. You always have to check its homework,” says iplicit’s Rob Steele.
Ali Kokaz, One Peak’s Head of Data and AI, says: “This is an issue for everyone but particularly for finance teams. Time and efficiency are valuable to finance teams but trust is doubly important. They don’t want to save time if they can’t sign off the numbers and put their names to them.”

Most hallucinations happen because large language models work by predicting the next word, based on the text they’ve ingested. Sometimes, things go seriously off track. When it comes to numbers, AI can apply the wrong formulas, misread figures or spot patterns that aren’t meaningful.
Back in our chapter on prompting, we looked at some ways to reduce the risk of hallucinations. You can give context about what the numbers mean. You could ask for step-by-step calculations so errors in reasoning are easy to spot. You could even ask the AI to run the numbers using Python or another language.
When the responses come back, you will have to sense-check them, perhaps comparing them with trusted sources. And beware spurious precision – a figure worked out to many decimal places might be just plain wrong.
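One way to make that sense-check concrete is to recompute a headline figure yourself rather than taking the AI’s arithmetic on trust. The short Python sketch below is a minimal illustration – the revenue figures, the AI-quoted growth rate and the tolerance are all invented for the example – comparing a growth percentage quoted by an AI against one calculated from your own ledger numbers.

```python
# Minimal sense-check sketch: recompute a figure before trusting the AI's version.
# The revenue numbers, AI-quoted growth and tolerance below are invented for illustration.

revenue_last_year = 1_250_000    # taken from your own ledger
revenue_this_year = 1_412_500    # taken from your own ledger
ai_reported_growth = 13.7        # percentage growth quoted in the AI's summary

actual_growth = (revenue_this_year - revenue_last_year) / revenue_last_year * 100

# Flag anything that differs by more than half a percentage point.
if abs(actual_growth - ai_reported_growth) > 0.5:
    print(f"Check this: the AI said {ai_reported_growth}%, the ledger says {actual_growth:.1f}%")
else:
    print(f"The AI's figure looks consistent ({actual_growth:.1f}%)")
```

Even a rough check like this can surface a confident-but-wrong figure before it reaches a board pack.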
“It’s your responsibility to check whether what’s come out of AI’s black box is actually correct,” says Alastair Barlow. “It’s your liability, your insurance and your reputation that’s on the hook – for now.”
Paul Sparkes, iplicit’s Chief Product Officer, says: “AI is like having an apprentice. It’s a very capable apprentice but needs guidance. In two years’ time, that apprentice will be amazing and will bring a lot of value. But you wouldn't set your apprentice off to build a report pack without you checking it and being confident it’s right.”
iplicit's Rob Steele adds: "I certainly wouldn't have AI signing off your tax returns or VAT return, or anything like that, without some human intervention.
“In the end, AI doesn’t go to jail – you do. Ultimately, you have to be responsible for these things.”
💡 TOP TIP
Beware of surprising precision in AI answers. It could be a sign of hallucination.
Risk 3: Scams and fraud

Among the many people who’ve improved their productivity thanks to AI are criminals. Their profit margins seem to have enjoyed a healthy boost too.
“There are hundreds of ways criminals can use AI creatively,” says Jake Moore of ESET.
“Many people are used to the traditional scam emails that purport to come from your boss or your bank manager and a lot of people will spot those. But AI can now create more targeted scams with a much better hit rate.”
A handy resource for the AI-literate scammer is GhostGPT, a large language model without ethical constraints. It will answer requests that other LLMs won’t, such as instructions for creating ransomware, writing phishing emails and building other scams.
Jake adds: “There are even more dodgy creations that allow people to upload a virus and say ‘Create a version of this that’s brand new and won’t be seen by traditional antivirus software’.
“There are tools out there being used at scale to offer malware as a service, enabled by AI.”
Indi Tatla adds: “Everybody needs to double down on tackling fraud. The amount of fraudulent email that arrives in your inbox now is huge and a lot of it looks very close to being the real deal. People make mistakes and can let these things through. The British Library, the Co-op and M&S have all fallen victim to ransom attacks and it can happen to anyone.”
AI’s ability to generate images, video and text has created rich new areas of opportunity for scammers. These are the “deepfake” creations which can fool even a cautious user. It’s easy to be fooled by an AI-generated picture of a non-existent receipt or invoice – and fraud can get more sophisticated than that.
As long ago as 2019 – before the world had heard of ChatGPT – The Wall Street Journal reported that a convincing AI voice had been used to call the CEO of a UK energy firm and scam the business out of 220,000 euros.
That was thought to be the first case of its kind. But in 2023, ESET’s Jake Moore staged a startling real-world demonstration of how easy a scam involving an AI voice generator had become.
“I cloned the voice of a CEO and then, with his permission, I spoofed his phone number,” he says.
“I then sent a voice note via WhatsApp to his financial director. I knew he was at a particular restaurant, so the AI voice said he was at the restaurant and they’d forgotten to pay someone so the FD should transfer the money straight away.
“It made sense, it was believable, and 14 minutes later she transferred £250 to my personal bank account.”
More recently, Jake created a LinkedIn post in the name of a locally-known company boss, announcing that he was planning to take part in a charity bike ride. Hundreds of people liked the post and many looked for a non-existent JustGiving page. The tool this time? A convincing AI-generated video.
The prospect of fake versions of you – or your chief executive – joining your Zoom or Teams calls is not very far away.
In this world of AI-assisted fraud, human vigilance will be even more important, as will protection such as multi-factor authentication.
"Verification is key – but seeing and hearing isn't believing any more,” says Jake.
"A lot of small companies are using code words now. They use particular words when financial information is being shared that are only ever spoken within the team, never shared online or by text.
"Companies need to really pay for good security software as well – a multi-layered security platform, with multiple tools from different companies, making sure that information is secure all the way from backup to your spam filter."
💡 TOP TIP
Cyber security is becoming more important by the day. Invest in the right products and in human vigilance.
Policies and golden rules

Having weighed up all the risks and how to mitigate them, you’ll need to set down your position in a policy at some point. That sounds intimidating – but common sense and an FD’s customary caution will take you a long way.
FD Becky Glover says: “Finance is a confidential function anyway. We are always thinking ‘Can I talk to this person about that?’ I think we always have that in the back of our mind – it’s about balancing risk and reward."
Your AI policy needs to be realistic, so that people aren’t tempted to circumvent it, and it needs to be widely communicated and adhered to.
For iplicit’s Olivia McMillan, the balancing act is between caution and innovation. “When you’re developing governance for AI, you really want to be careful not to stifle innovation,” she says.
“You need your governance framework, you need to think about ethical use of AI, you need to think about GDPR and all those other important things. But you can’t stifle innovation – and that’s the balance you have to find.
“Even if you’re not encouraging your people to forge ahead with AI as we are, they’re probably using it. It’s probably bled through from their personal lives, where they’re using it to shop or check contracts or create their home renovation plans.
“People are adopting these tools naturally and you as an organisation need to support them and welcome that – and having some golden rules in place helps everybody.”
💡 TOP TIP
Develop robust but practical golden rules – and make sure everyone knows about them.
Want to see iplicit in action?
Book your demo and discover how iplicit can simplify your finance operations, automate manual processes, and give you real-time visibility – wherever you work.
