
Tackling your board's next big question
Are CFOs ready to review work produced by AI?
April 16 | 8 min read | By Tim Cooper
TL;DR
There is no question that AI is going to crush a ton of entry and mid-level work in finance. So the question is: what does the review and governance model need to look like for that work? And how does it get done? Will AI simply change how CFOs review work, or will it also expand the review process as work gets done in a different way? The answer will drive what the future CFO org design looks like.
Supervision overload. AI will drive efficiencies in the team, but it could also saddle CFOs with a heavier review load they are unprepared for.
Trust trouble. Poor understanding, skill gaps, and deep suspicion of the tech make these problems worse.
Tactical hacks. CFOs need a new playbook for reviewing work they didn't ask a human to do.
Your AR team is chasing invoices while the cash sits somewhere in your finance inbox
Invoices go out, but the inbox fills up with queries, document requests, disputes, and back-and-forth.
Manual follow-ups mean issues sit and payments drift.
The bottleneck isn't your collections process. It's everything happening between the invoice and the payment.
Paraglide puts AI agents in that gap to help businesses get paid faster, so your team spends less time on manual inbox work and more time moving payments forward.



As the old saw goes, nothing comes for free…
As CFOs integrate AI across their functions, the hidden costs and extra time needed to supervise AI-produced work could become a two-headed monster distracting from bigger business issues and indirectly killing ROI.
This risk is sharpening the focus on oversight while CFOs determine how the ongoing technological shift will transform their day-to-day workflows.
“AI governance breaks when it stays siloed, static, and disconnected from live systems, allowing risk and performance issues to compound unnoticed,” said Alistair Gurney, CFO of performance management software provider Lucanet. “Return on investment depends on continuous monitoring and clear accountability embedded into workflows.”
Sounds complicated… so how do you keep AI supervision from undermining the very efficiencies that were so enticing to begin with?
The burden of review
Building in review time for AI work needs to become a critical part of productivity projections in any function, but particularly in finance, where leaders often can’t sign off on outcomes without checking them. And if senior executives are doing the checking or rework themselves, the cost of their time and distraction can quickly kill momentum and break the AI efficiency model.
It’s already becoming a problem. Among employees and leaders who use AI every day, 77% review its output more rigorously than human work, “creating a massive hidden workload of verification,” according to research by HR platform Workday. For every 10 hours of efficiency gained from AI, Workday estimates nearly four are lost to fixing its outputs.
That ratio will likely improve over time, but it offers little comfort in the short term. And the hours don't cancel each other out. The four hours spent fixing AI output are not the same as the ten saved; they tend to fall on more senior people, in more constrained windows, doing more cognitively demanding work. In other words: more expensive, harder to schedule, and frankly more precious.
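The point about review hours being more expensive than the hours saved can be made concrete. A minimal sketch, using hypothetical hourly rates (the rates are assumptions for illustration, not figures from this article), showing how Workday's roughly 10-to-4 ratio can still net out to a loss when senior time does the fixing:

```python
# Illustrative only: hourly rates are assumed, not sourced from the article.
JUNIOR_RATE = 60.0   # $/hour for the work AI replaces (assumption)
SENIOR_RATE = 200.0  # $/hour for the senior reviewers fixing AI output (assumption)

def net_ai_value(hours_saved: float, hours_reviewing: float) -> float:
    """Dollar value of AI efficiency after subtracting senior review time."""
    return hours_saved * JUNIOR_RATE - hours_reviewing * SENIOR_RATE

# Workday's estimate: ~4 hours lost to fixing outputs per 10 hours gained.
print(net_ai_value(10, 4))  # 10*60 - 4*200 = -200.0: a net loss at these rates
```

At these assumed rates the "efficiency" is underwater; the model only flips positive once review time shrinks or shifts to cheaper reviewers, which is exactly the org-design question the article raises.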
Here’s how that might typically happen in practice: AI use expands faster than permissions, escalation pathways, and ownership are defined. Early gains mask the issues. But they become visible when AI outputs enter approval flows, trigger exceptions, or face audit scrutiny.
The CFO cannot clearly evidence what the system was allowed to do. Expensive specialists need to spend time on reengineering. Momentum dries up, and projects stall or are abandoned.
And all those alleged efficiencies have flown the coop.
Future unknowns
If you believe Anthropic, AI could beat humans in “most” tasks within two or three years. (Really? Does that include origami and rock climbing?) Still, the drive towards “artificial general intelligence” is enough to make finance leaders stop and think—will autonomous machines need much more expensive supervision and blow up cost projections as they become a reality?
“What keeps me up at night is how this works in the future. How can the workforce bring more experts into the loop capable of properly overseeing and controlling autonomous solutions?” said Connor Augustyn, director at consultant West Monroe.
Gurney said that assessing future costs is difficult, but that there could also be large unknown upsides.
“Higher-level supervision could be onerous if it takes two people to monitor a process. But by then the tool might have saved eight salaries,” he said. “The savings might equal the cost of extra checks and balances. But the ability to apply governance might also become more automated.”
It sounds like the shape of the ‘organizational pyramid’ will need to change.
Not knowing how AI could transform non-cost-related benefits in the future further complicates this assessment.
“Think of real-time closes, predictive forecasts, deeper insights, and revenue acceleration. Getting there first could be a competitive advantage. How do you value that?” said Kunwar Chadha, a former CFO and current head of FP&A at Google Health Subscriptions. “We need a thoughtful approach focusing on specific contexts and staying nimble.”
To control supervisory costs, Chadha suggested:
Monitor AI output closely at first, but increase autonomy as accuracy improves.
Assess materiality. Design review workflows carefully to avoid finance leaders inadvertently wasting time checking low-level items.
Consider AI self-governance. Tools that audit more functional finance models are becoming available. Use them if you can.
Think human-on-the-loop (HOTL), not human-in-the-loop, for lower-risk cases or well-embedded workflows – that means reviewing not every transaction but only exceptions flagged by the system.
HOTL requires new skills, as it shifts from verifying output to defining and managing exceptions. "It requires critical and analytical thinking in defining the rules and remediating flagged output," said Chadha.
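Chadha's exception-based pattern can be sketched in code. A minimal illustration of HOTL routing, in which the thresholds, field names, and rules are made-up assumptions: only transactions the system flags reach a human queue, while the rest auto-approve.

```python
# Hypothetical HOTL sketch: thresholds and transaction fields are assumptions.
MATERIALITY_THRESHOLD = 10_000  # review anything at or above this amount (assumed)
CONFIDENCE_FLOOR = 0.95         # trust AI classifications above this score (assumed)

def is_exception(txn: dict) -> bool:
    """Flag a transaction for human review instead of auto-approval."""
    return (txn["amount"] >= MATERIALITY_THRESHOLD
            or txn["ai_confidence"] < CONFIDENCE_FLOOR)

def route(transactions: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split AI-processed transactions into auto-approved and human-review queues."""
    approved = [t for t in transactions if not is_exception(t)]
    flagged = [t for t in transactions if is_exception(t)]
    return approved, flagged

txns = [
    {"id": 1, "amount": 250, "ai_confidence": 0.99},     # auto-approved
    {"id": 2, "amount": 50_000, "ai_confidence": 0.99},  # exception: material amount
    {"id": 3, "amount": 120, "ai_confidence": 0.71},     # exception: low confidence
]
approved, flagged = route(txns)
print([t["id"] for t in approved], [t["id"] for t in flagged])  # [1] [2, 3]
```

The skill shift Chadha describes lives in `is_exception`: the human work moves from checking each output to defining those rules well and remediating what they catch.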
A looming skill gap
However, for there to be effective supervision over AI, there need to be people who can oversee outputs. And that’s proving challenging. AI oversight is a top skill gap in finance functions, according to a report by professional services firm RGP. To help bridge this, RGP recommended:
Closer collaboration between CFOs and CHROs to bring in talent who can properly govern AI.
Clearer ownership of supervision frameworks to build enterprise trust and improve decisions.
As boards press CFOs to focus on rapid AI impact, supervisory skills are being left out of the conversation, pushing finance departments beyond their remits, said Augustyn.
“You’re asking the workforce to fundamentally change. Your accounting or AP manager expertise - that’s still paramount. But they also now need to be experts in operations, who can understand end-to-end value chains and the risk when they go wrong. And they must be able to de-risk control challenges, quality assure agentic workflows, and monitor for drift.”
It’s difficult to quantify how much new experts like these might cost. But Augustyn said, “I don’t think it would be double the salary because universities are starting to educate workforces differently.”
“But the pace of agentic solutions will far outpace our ability to train a workforce of natural experts. So there’s a gap until they arrive [during which] you’ll pay a premium for experts. If you’re bringing in too many of these experts and not taking enough others out, ROI will diminish,” he added.
Gurney believes this investment in expertise can’t be scrimped on when deploying AI tools. His firm has invested in IT, security, and AI engineering experts to address oversight challenges across the organization.
As well as avoiding supervisory bottlenecks, their experience helps make budgeting more accurate.
“You can estimate governance costs relatively well if your team has done it before,” said Gurney.
Linking governance and production
It’s not just talent that is causing supervisory issues. Supervision problems are often initially hidden because oversight structures have been slow-forming and fragmented across functions in many firms, according to RGP. This can lead to overlaps, blind spots, and inconsistencies.
Brian Unruh, CFO of document processing and automation solution provider ABBYY, said some colleagues in his network are experiencing higher costs from AI oversight because it’s exposing existing workflow problems.
“Increasingly, as I connect with CFOs, we’re doing more process mapping to ensure AI is sustainable, locating new validation points as we deploy agentic processes. That’s urgent for finance leaders,” he said.
ROI models need continual updating against fast-changing governance needs on the ground, added Unruh. “If an oversight team hasn’t looked at production dashboards, that will get them in trouble on ROI.”
Jason Godley, CFO at revenue software provider Xactly, said that if expensive and time-consuming governance processes reduce ROI, it “forces conversations” about where executives prioritize their time.
On the front foot
To address these issues, get proactive. Involve yourself in designing AI review workflows.
Provide solid evidence and explanation to the board around how your AI governance is scalable, operationalized, and fully costed, including who will do reviews and reworks, and how long that should take them.
No one knows what the future holds. But one thing’s for sure: CFOs are going to need a different system of guardrails for overseeing their powerful new robot workers.

Reading the Room…
Dynamic oversight. How are we reviewing AI outputs now and as we move to agentic workflows? How will the cost and time commitments scale as AI becomes more autonomous?
Leadership overload. Do our leaders have the bandwidth to absorb this additional governance burden at a time when the volatile macro situation requires hand-to-hand combat?
Workflow concern. Do we have nimble, streamlined approval processes downstream from AI workflows, or is validation slow and manual? Is it taking up unbudgeted leadership time that could be better spent elsewhere?
Appropriate measurement. Do we have a long-term, macro view of ROI on AI, or are we killing speed by making every project sing for its supper too quickly?
Capability gap. How do we build the capability to review AI outputs throughout the organization to make sure the problems don’t all just drift up to the top of the organization?

Boardroom Brief is presented by The Secret CFO Network
Dive into last week's Playbook for The Secret CFO’s guide on when to leverage AI in finance - and when to steer clear.
If you found this helpful, please forward it to your fellow finance leaders (and maybe even your Board). If this was forwarded to you, you can make sure you receive the next edition by subscribing here.






