
Tackling your board's next big question
What's the risk of AI errors spinning out of control?
Feb 19 | 6 min read | By Tim Cooper
TL;DR
Anyone who’s used AI for in-depth tasks knows it makes many more blunders than the average human. Yet these tools are increasingly persuasive, so it’s easy to miss errors when you’re working under pressure. CFOs are understandably nervous about the fallout if a rogue AI mistake goes unnoticed - but if anyone can manage the risks, they can. In this Brief we’ll get into:
Make bots prove their worth. Only grant autonomy after rigorous stress testing.
Join fellow leaders. Build understanding about AI risks and governance.
Beware the silent cannibal - a small error compounding across your systems.

At high-growth companies with lean teams, finance either compounds or becomes a bottleneck
Teams growing from $10M to $200M ARR don’t need to add bodies; they need to remove manual work.
Campfire's Accounting Intelligence automates the parts of the close that quietly steal weeks: categorization, reconciliations, and variance analysis.
Close faster, answer executive questions in seconds, and save hundreds of thousands annually.
This is how lean finance teams operate in hypergrowth.
Book a demo to see for yourself




In the movie “2001: A Space Odyssey”, a computer called HAL notoriously overrides human control. What if you get a real-life HAL loose in your finance function - an AI tool that tries to ignore your governance framework?

There’s a real risk there. AI’s promise is predicated on speed, efficiency, and advanced reasoning, but… accuracy isn’t currently a strong suit. Because generative AI is probabilistic - the model picks the most likely answer based on relationships within its training data - it’s better to think of it as a guessing machine than a fact generator.
That’s fine if you’re baking a cake or writing an email, because precision isn’t the end goal. Small errors in flour measurements or sentence structure won’t cost you much of anything.
CFOs, however, need absolute certainty, especially when it comes to financial reporting and understanding the economics of the business. And thanks to the "snowball effect," once an AI model makes even a single error, it treats that mistake as a "fact," leading to a doom loop in which it generates ever more bad information built on top of the original error.
In other words, having AI introduce even the smallest errors into financial processes can compound into enormous, and hugely expensive, problems very quickly. In fact, half of UK accountants say companies have lost money due to AI mistakes, and 31% encounter these errors on a weekly basis, according to a recent KPMG study.
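To make the snowball concrete, here’s a deliberately simplified Python sketch - the figures and the workflow are hypothetical, not drawn from the KPMG study - of how one miscategorized transaction quietly distorts every number calculated on top of it:

```python
# Hypothetical example: an AI tool books a $5,000 expense as revenue.
actual_revenue = 1_000_000
actual_expenses = 600_000
error = 5_000  # the single miscategorization

reported_revenue = actual_revenue + error    # overstated
reported_expenses = actual_expenses - error  # understated

true_margin = (actual_revenue - actual_expenses) / actual_revenue
reported_margin = (reported_revenue - reported_expenses) / reported_revenue

# Downstream planning treats the reported figures as "fact"
growth_assumption = 1.2
forecast = reported_revenue * growth_assumption
true_forecast = actual_revenue * growth_assumption

print(f"True margin:     {true_margin:.1%}")       # 40.0%
print(f"Reported margin: {reported_margin:.1%}")   # 40.8%
print(f"Forecast overstated by ${forecast - true_forecast:,.0f}")  # $6,000
```

One bad entry is survivable; the problem is that every forecast and board slide built on the reported figures inherits the error without anyone re-checking it.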
Finance chiefs also need to prepare for the “silent cannibal”. This may sound like a bad dinner party guest, but it actually describes the risk that small discrepancies can quickly go viral across your agents and workflows.
“AI is taking over finance tech, but it’s not always balanced with safety. Leadership hasn’t caught up with AI’s probabilistic nature and the way you can’t control hallucinations,” said Jen Garcia, head of financial services at consultant RGP. “Also, large ERP providers are rapidly incorporating AI enhancements. I don’t know that CFOs are checking to ensure those versions meet their safety standards.”
Even small slips can have serious financial consequences for companies. “If you pay a vendor $20,000 instead of $1,000 due to an AI inaccuracy, someone’s job is on the line,” said Edwin Ang, regional FD, Asia at workforce solutions provider Brunel.
And while the major AI platforms might claim that error rates are under control, it’s not clear that’s actually the case. Workers' approach to AI isn’t helping either: according to KPMG research, 58% of employees aren't checking accuracy, 57% are making mistakes due to AI, and 44% are breaching usage policies.
So what are CFOs to do to prevent hallucination risk from turning into financial disaster?
Governance trumps technology
Dan Owens, CFO at financial operations platform Maxio, says the increasing risk of errors is one reason governance is now more important than technology when implementing AI in finance.
“Autonomy should be earned, not granted,” he said. Owens offered several governance guidelines to help mitigate hallucination risk.
Don’t rush into it. Scale AI only after controls are proven, monitoring is continuous, and you can defend every material decision to auditors, regulators, and your board. That means starting small and testing continuously.
Keep a tight rein. Autonomous AI workers must be managed like employees: with defined system access, signing authority, scope, and accountability.
Oversight structures are slow to establish in many firms, though: 32% of finance leaders said their AI risk governance frameworks were still ad hoc or in development, 56% said they were “established,” and only 13% were “advanced.”
“That’s staggering,” said Garcia. “CFOs are in a Wild Wild West, having to interpret things that change quickly. I get the sense they’re at times uncomfortable and even overwhelmed by the amount of new information and responsibilities. They’re upskilling themselves and their teams to recognize whether models are safe. With investors wanting to understand AI safety, it’s a must-have, starting yesterday.”
Her top suggestion for those feeling the pressure is to work with the CIO and other leaders to co-develop AI skills and policies and build shared confidence and understanding.
A battle plan for AI blunders
Phil Lim is director of product management, analytics, and AI at governance solutions provider Diligent. To combat AI inaccuracies, he said CFOs need to identify all emerging risks and bring them into mainstream oversight. Boards should expect management to:
Keep live inventories of AI in finance workflows
Define human accountability and escalation paths
Set thresholds around AI error rates and tolerances, reported to the audit or risk committee
Invite challenging questions from internal audit
Invest in AI literacy for finance, so reviewers can query outputs effectively
For example, leaders can invite questions from audit about how an undetected AI mistake might penetrate their finance stack, and how quickly they would identify and stop it from spreading.
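As a rough sketch of what “thresholds around AI error rates” and a defined escalation path might look like in practice - the workflow names and tolerance values below are invented for illustration, not Diligent’s framework:

```python
# Hypothetical tolerances - in practice these would be set with the audit or risk committee.
ERROR_RATE_TOLERANCE = {
    "invoice_categorization": 0.02,  # up to 2% of sampled items may be miscoded
    "bank_reconciliation": 0.005,    # up to 0.5%
}

def review_workflow(name: str, sampled: int, errors_found: int) -> str:
    """Compare a workflow's sampled AI error rate against its approved tolerance."""
    rate = errors_found / sampled
    if rate > ERROR_RATE_TOLERANCE[name]:
        # Escalation path: pause autonomy, notify the named human owner,
        # and report the breach to the audit or risk committee.
        return f"ESCALATE: {name} at {rate:.2%}, tolerance {ERROR_RATE_TOLERANCE[name]:.2%}"
    return f"OK: {name} at {rate:.2%}, within tolerance"

print(review_workflow("invoice_categorization", sampled=500, errors_found=14))
```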
“Finance teams are preparing new controls to identify minor AI errors before they turn systemic. We’re seeing leaders insist on real-time visibility: dashboards showing exceptions, aging items, override rates, and approval bottlenecks before problems cascade,” said Omar Choucair, CFO at software provider Trintech.
Choucair recommends avoiding models where you can’t see the audit trail.
“Lack of transparency can lead to friction and confusion during year-end audits or regulatory non-compliance if an autonomous agent misapplies an accounting standard,” said Choucair. “You need audit logs for data sources used, rules applied, and who approved it. If you can’t explain an AI-driven outcome to an auditor, you don’t control it.”
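As one hedged illustration, the record below shows what an audit-trail entry covering “data sources used, rules applied, and who approved it” could contain; the field names and values are assumptions for the sake of the example, not any vendor’s actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionLog:
    """Minimal audit-trail record for an AI-driven accounting decision."""
    decision: str            # what the AI did
    data_sources: list       # systems and documents it drew on
    rules_applied: list      # policies, standards, or prompts in effect
    approved_by: str         # the human who signed off
    model_version: str       # which model produced the output
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

entry = AIDecisionLog(
    decision="Classified invoice #4521 as COGS",
    data_sources=["ERP vendor master", "invoice PDF"],
    rules_applied=["Expense categorization policy v3"],
    approved_by="controller@example.com",
    model_version="categorizer-2025-01",
)
print(entry)
```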
Ang also suggested regular testing that throws weird scenarios at your AI tools to see what needs improvement. Training machines to spot errors, such as abnormal invoices or gross margin surprises, could also help. Does the bot know how to escalate an abnormality? Does it flag for human review or try to "solve" the problem itself?
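A minimal sketch of that kind of check, reusing Ang’s $20,000-instead-of-$1,000 scenario (the threshold and the payment history are made up for illustration): flag the outlier and hold it for a human, rather than letting the bot “fix” it.

```python
from statistics import mean, stdev

def check_invoice(amount: float, history: list, z_threshold: float = 3.0) -> str:
    """Hold invoices that deviate sharply from a vendor's payment history."""
    mu, sigma = mean(history), stdev(history)
    # Escalate, don't self-correct: abnormal items go to a human reviewer.
    if abs(amount - mu) > z_threshold * sigma:
        return f"HOLD for human review: ${amount:,.0f} vs typical ${mu:,.0f}"
    return "Proceed: within normal range for this vendor"

# A $20,000 payment against a vendor usually invoiced around $1,000
print(check_invoice(20_000, history=[950, 1_020, 1_100, 980, 1_050]))
```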
Control override: sci-fi or reality?
Agentic AI is already capable of acting independently and making decisions. But that’s just the beginning. Kunwar Chadha has held two CFO positions and is currently head of FP&A at Google Health Subscriptions.
“Artificial general intelligence [which can match or exceed human cognition] could be a game changer, perhaps within two years. The risk to your financial books will get much higher when AI starts solving things without asking you,” Chadha said.
“I don’t think even the experts know how to stop that yet. But I’m thinking you might need more high-tier supervisors, which sounds expensive and could impact ROI,” he added. “I still think CFOs should implement this technology quickly, though. If it becomes standard procedure in 10 years, you need to think about it now.”
But Choucair believes strong governance rails will prevent any attempted machine override.
“AI doesn’t sign off on the financials, I do,” he said. “AI governance should give me confidence to understand and validate decisions, and stand behind the results when the questions come.”

Reading the Room…
Be ready to tackle your board's next big question:
Fail safe. How do we make sure we catch an AI-driven error before the ‘snowball effect’ compounds it into real damage?
Liability risk. What are our legal and financial obligations if and when AI causes a compounding error in our financial statements or data?
High visibility. Is there anything we can do to see into the ‘black box’ of AI-powered tools?
ROI math. How can we calculate the savings that AI might provide versus the increased compliance costs of mitigating error risk?
Kill switch. What is our backup plan for shutting AI down if we catch compounding errors in financial reporting?
Point in time. Do we really understand if, and where, our employees are using AI today?
Sandbox. How do we create safe guardrails that include concentric circles of risk for our teams to experiment in while still balancing speed and adoption?

Boardroom Brief is presented by The Secret CFO Network





