Pennsylvania CPA Journal

Where Is AI’s Place in the Finance Function?

But beyond the hype, how exactly can artificial intelligence (AI) be put to use in corporate finance? This Business & Industry column examines how CFOs and their finance functions can incorporate AI, and offers a few examples of how it is being used.


There are indications that the artificial intelligence (AI) hysteria may be peaking. Studies highlight a lack of AI-driven return on investment (ROI)1 or productivity improvements.2 Skepticism about the massive financial resources being plowed into data centers is leading to fears of an AI-driven market bubble that is about to burst.3 But AI need not live up to the most extravagant predictions to have an impact. For many CFOs, it is this reality – more so than the risk of a market correction – that causes sleepless nights. How do we, and the finance functions we lead, keep pace, and are we doing enough with AI?

This anxiety is familiar to those of us who lived through the dot-com frenzy 25 years ago – and not just because that one led to a recession, although that ominous fact cannot be ignored. At the time, I was working in the enterprise resource planning (ERP) software implementation practice of a Big Four firm. Watching tech stock valuations overheat, 20-something entrepreneurs become overnight millionaires, and companies scramble to develop online strategies, I felt like a dinosaur working on unexciting ERP systems while the firm spun up new service offerings to capitalize on the internet craze.

The angst of 25 years ago pales in comparison to the anxieties AI sparks on many different levels. But a little fear can be good motivation. As former Intel CEO Andy Grove famously said, “Only the paranoid survive.” With accumulating signs that the AI hype may be peaking, now is a good time to pause, figure out what’s really worth worrying about, and recalibrate our approach to leveraging AI in a thoughtful way, without being drawn in by the hype.

My own journey with AI began as a relatively early adopter of OpenAI’s ChatGPT. I experienced equal parts marvel at its utility and, oddly, a vague sense of guilt for using it. Could I take credit for a work product that was even partially AI-generated, or for suddenly being more productive?

It didn’t take long to forgive myself. I concluded that using ChatGPT to generate a first draft of a treasury policy, for example, was no different in principle than leveraging samples and templates that had been available to financial professionals for decades. Nor was it any different than having a junior team member write a first draft. With limited staffing and bandwidth, there was no reason to apologize for using AI.

More than a year later, I use ChatGPT and other large language model (LLM) solutions, including Perplexity Pro, Anthropic’s Claude, and Google’s NotebookLM, almost daily. Some of my use cases as a CFO include the following (one is illustrated with a brief code sketch after the list):

  • Online search (i.e., anything for which I previously would have used a Google search).
  • Technical research of accounting, financial, tax, and legal concepts.
  • First drafts of policy and procedure documents, checklists, and job descriptions.
  • Reviewing and editing emails for tone, clarity, and completeness.
  • Summarization of lengthy, complex documents or email chains.
  • Comparison of vendor proposals.
  • Preliminary review of contracts.
  • Synthesis of free-form text responses when collecting information.
  • Compression of complex and nuanced concepts into concise PowerPoint bullet points.
  • Excel help (far superior to Excel’s own help function).
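
To make the task category concrete, here is a minimal Python sketch of the summarization use case, scripted against OpenAI’s API rather than typed into a chat window. The model choice, the prompt, and the assumption that the document fits within the model’s context window are all illustrative, not a recommendation:

```python
# A minimal sketch of the summarization use case via OpenAI's Python SDK.
# Assumptions: the openai package is installed, OPENAI_API_KEY is set in the
# environment, and the document fits within the model's context window.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize(document_text: str) -> str:
    """Ask the model for a concise executive summary of a lengthy document."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": "You are an assistant to a CFO. Summarize the "
                           "document into a concise executive summary, "
                           "preserving key figures and open questions.",
            },
            {"role": "user", "content": document_text},
        ],
    )
    return response.choices[0].message.content


# Usage: print(summarize(open("vendor_proposal.txt").read()))
```

A few lines like these could batch a folder of vendor proposals or email threads, which is part of why task-based uses spread so readily.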

These are all examples of using LLMs for tasks. But AI can also be used for thought. This paradigm of using LLMs for thinking, in contrast to doing, was inspired by One Useful Thing, the Substack AI newsletter of Wharton professor and author Ethan Mollick. Using AI as a thought partner illustrates how LLMs can amplify human performance rather than replace it. Engaging an LLM as a thought partner simply entails having an interactive text or audio conversation with it – just as you would with a person – to explore potential solutions to business problems, brainstorm new ideas, or prepare for meetings and presentations. LLMs can be consultants or executive coaches, not just interns, administrative assistants, researchers, or junior staff.

Over the past few months, I have thought-partnered with LLMs on appropriate business travel parameters; ideas for driving organizational engagement with new reporting tools; potential new key performance indicators; approaches to difficult conversations; design of off-site corporate retreat sessions; navigating board dynamics; ways to better communicate financial results and concepts; analytical approaches and methodologies; finance roles and organization design; and many other issues. I also ask LLMs to review my slide decks from the perspective of the intended audience and to ask me questions that help me prepare for the presentation.

Learning is another way to use LLMs. I have used Google’s NotebookLM to study a variety of topics. Perhaps less familiar than ChatGPT, NotebookLM is based on Google’s Gemini platform. It takes in up to 50 sources (more in the premium version) – articles, industry reports, white papers, online links, YouTube transcripts – that you can interrogate with a chatbot. With one click, NotebookLM produces briefing documents, flashcard study guides, mind maps, and more. Its synthesis of source material into a human-like audio “podcast” is remarkable, and it can also create video overviews of a topic. The technology offers entertaining and convenient ways to learn on the go.

Anecdotally, most people seem to default to using LLMs for tasks rather than for thought. Task-based use cases are more concrete, easier to describe, and sharable across teams. LLM thought partnership is more abstract and bespoke, yet arguably of equal or greater utility.

All of these use cases – whether for tasks or thought – are individual use cases. As I progressed in my knowledge of LLMs, I came to realize they had little or no potential to redesign processes, and suspected they were not transforming finance functions in a meaningful way. Research (using Perplexity Pro) confirmed my hunch, as did a Big Four firm presentation about AI in the finance function at a conference I recently attended. The most familiar LLMs are consumer applications, not enterprise systems. They are terrible at simple arithmetic. They cannot create Excel spreadsheets from scratch. And they are not workflow management tools. Therein lies the paranoia: CFOs are left feeling like their finance functions should be doing something more with LLMs, but what?

This fear may be misplaced. A key takeaway from the aforementioned Big Four presentation is that asking, “How should I implement AI?” is the wrong question; that is the tail wagging the dog. The right question is, “What problem am I trying to solve?” The answer may not be AI – or any technology at all. It may just be a process change. The solution may lie in existing systems, especially as AI and natural language capabilities are increasingly embedded in legacy ERP software and desktop applications.

LLMs are only one type of AI. In their current form, they are unlikely to transform the finance function. While there may be benefits from building custom applications on top of LLM platforms, optimization of the finance function is more likely to come from entirely different forms of AI, such as predictive analytics, intelligent process automation, or agentic applications that handle specific tasks (such as invoice matching).
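
As a hedged illustration of what such a narrow, task-specific application might look like, the following is a deliberately simplified, rule-based sketch of invoice-to-purchase-order matching in Python. The field names and the 2 percent tolerance are assumptions for illustration; a production system would add exception routing, approval workflows, and audit logging:

```python
# Simplified sketch of invoice-to-purchase-order matching.
# All field names and the 2% price tolerance are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class PurchaseOrder:
    po_number: str
    vendor: str
    amount: float


@dataclass
class Invoice:
    invoice_number: str
    po_number: str
    vendor: str
    amount: float


def match_invoice(invoice: Invoice, orders: dict[str, PurchaseOrder],
                  tolerance: float = 0.02) -> str:
    """Return 'matched' or the reason the invoice needs human review."""
    po = orders.get(invoice.po_number)
    if po is None:
        return "exception: no purchase order on file"
    if po.vendor != invoice.vendor:
        return "exception: vendor mismatch"
    if abs(po.amount - invoice.amount) > tolerance * po.amount:
        return "exception: amount outside tolerance"
    return "matched"


orders = {"PO-1001": PurchaseOrder("PO-1001", "Acme Supply", 12500.00)}
print(match_invoice(Invoice("INV-88", "PO-1001", "Acme Supply", 12495.00),
                    orders))  # prints "matched"
```

The value of an agentic layer on top of logic like this is in handling the exceptions, such as reading a mismatched invoice and drafting the vendor inquiry, while routine matches flow straight through.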

A more legitimate cause for AI anxiety is the human element. The potential elimination of jobs is concerning, both because of the toll on those displaced and because of the existential question that remains unanswered: where will the next generation of finance leaders come from if the bottom of the pyramid is hollowed out? And what about the effects of AI on those of us who do not lose our jobs? A recent MIT Media Lab research paper, Your Brain on ChatGPT, shows that brain activity is lower when writing with an LLM than when writing without one. Writing is thinking. If we outsource our writing to LLMs, we are outsourcing our thinking – we become cognitively lazy. This does not bode well for learning and development. It takes hard-won knowledge and experience to evaluate AI output; how will future finance leaders be able to do so if AI does for them the work we did earlier in our careers? Recycled AI knowledge threatens to dilute human expertise as those with hands-on experience retire and are replaced by AI natives who never created their own work product.

So how should we navigate the road ahead? Curiosity, optimism, and even gratitude will serve us well – after all, we are participants, not just spectators, in the AI revolution. At the same time, contributing responsibly means balancing excitement with pragmatism, and resisting the hype. Implement AI not for its own sake, but only if it is the right solution to a specific problem. Experiment with LLMs in your own work, encourage your team to do the same, and share use cases. Partner with your information technology team to consider building custom applications on top of LLM platforms to interrogate your organization’s data. Learn about AI applications other than LLMs that are more relevant and impactful to the finance function, including new AI functionality embedded into existing applications. Consider AI’s power to design entirely new, innovative processes for desired outcomes, instead of redesigning existing processes.
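
For a sense of what the simplest version of interrogating your organization’s data with an LLM might look like, consider the sketch below, which passes a small expense extract and a question to a model. The model name, prompt, and data are illustrative assumptions; a real application would retrieve from governed data stores, enforce access controls, and independently verify any figures the model reports, given the weakness at arithmetic noted earlier:

```python
# Minimal sketch: asking an LLM questions about a small financial data extract.
# Illustrative only; a real application would use retrieval over governed data,
# access controls, and independent verification of any reported figures.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

monthly_expenses_csv = """month,department,actual,budget
2025-01,Finance,48200,50000
2025-01,IT,61750,55000
2025-02,Finance,49900,50000
2025-02,IT,57300,55000"""

question = "Which department exceeded budget, in which months, and by how much?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Answer questions strictly from the data provided."},
        {"role": "user",
         "content": f"Data:\n{monthly_expenses_csv}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```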

Most importantly, prioritize those things that make us human. Let’s not lose the ability to think and write, or to synthesize and retain information, in the relentless pursuit of LLM-driven productivity. Author and Georgetown University professor Cal Newport offers a balanced approach: while cautioning against the cognitive dangers of outsourcing writing (and hence thinking) to AI, he also concedes there are professional situations in which “the writing … is subservient to the larger goal of communicating useful information, so if there’s an easier way to accomplish this goal, then why not use it?” Being thoughtful about when to use AI and when not to will protect cognitive fitness. And when using it, be “the human in the loop,” as Mollick advises.

Author Natalie Nixon, in her book Move, Think, Rest, highlights the challenges and opportunities of amplifying the human element: “Instead of obsessing with ‘What to do with all of this new technology?’ we should be leaning into what makes us uniquely human. … The time that opens up to us because we can arrive at answers more quickly means that we have more time for human interaction, for pausing and spaciousness, and for new opportunities to collaborate.” Emotional intelligence will become increasingly sought after; building this competency will preserve our professional relevance in the age of AI.

Regardless of AI’s exact trajectory and its near-term impact on the financial markets, AI will only continue to get better and more pervasive in our professional and personal lives. How we decide to use it and not use it – and our emotional intelligence – will determine whether we amplify ourselves and our teams or get left behind.

 

1 Jeremy Kahn, “An MIT Report That 95% of AI Pilots Fail Spooked Investors,” Fortune (Aug. 21, 2025).
2 Kate Niederhoffer, Gabriella Rosen Kellerman, Angela Lee, Alex Liebscher, Kristina Rapuano, and Jeffrey T. Hancock, “AI-Generated ‘Workslop’ Is Destroying Productivity,” Harvard Business Review (Sept. 22, 2025).
3 Eliot Brown and Robbie Whelan, “Spending on AI Is at Epic Levels. Will It Ever Pay Off?” Wall Street Journal (Sept. 26, 2025).


James J. Caruso, CPA (Inactive), is the CFO of ClearView Healthcare Partners of Newton, Mass., and a member of the Pennsylvania CPA Journal Editorial Board. He can be reached at jim.caruso@clearviewhcp.com.