Curiosity, conversation, and investment in artificial intelligence are quickly gaining traction in the tax community, but proper due diligence requires an acknowledgement of what such tools are and aren’t yet capable of, as well as an assessment of security and performance risks, according to industry experts.
As the tax world explores how AI can improve practice and administration, firms, the IRS, and taxpayers alike are in the early stages of considering its potential for streamlining tasks, saving time, and improving access to information. Regardless of one’s individual optimism or skepticism about the possible future of AI in the tax space, panelists at an American Bar Association conference in Washington, D.C., this past week suggested that practitioners arm themselves with the important fundamentals and key technological distinctions that sit under the broad-stroke term “AI.”
An increasingly popular and publicly available AI tool is ChatGPT. Users can interact with ChatGPT by issuing whatever prompts come to mind, such as telling it to write a script for a screenplay or simply asking a question. As opposed to algorithmic machine learning tools specifically designed with a narrow focus, such as those in development at the IRS to crack down on abusive transactions like conservation easements, ChatGPT is what is called a large language model (LLM).
LLMs, according to PricewaterhouseCoopers Principal Chris Kontaridis, are text-based and use statistical methodologies “to create a relationship between your question and patterns of data and text.” In other words, the more data an LLM like ChatGPT—which is currently ‘learning’ from users across the entire internet—absorbs, the better it can attempt to predict and algorithmically interact with a person. Importantly, however, ChatGPT “is not a knowledge model,” Kontaridis said. Calling ChatGPT a knowledge model “would insinuate that it is going to give you the correct answer every time you put in a question.” Because it is not “artificial general intelligence,” something akin to a Hollywood portrayal of sentient machines overtaking humanity, users should recognize that ChatGPT is not “self-reasoning,” he said.
“We’re not even close to having real AGI out there,” Kontaridis added.
Professor Abdi Aidid of the University of Toronto Faculty of Law and AI research-focused Blue J Legal, said at the ABA conference that “the really important thing when you’re using a tool like [ChatGPT] is recognizing its limitations.” He explained that it “is not providing source material for legal or tax advice. What it’s doing, and this is very important, is simply making a probabilistic determination about the next likely word.” For instance, Aidid demonstrated that if you ask ChatGPT what your name is, it will give you an answer whether it knows it or not. You can rephrase the same question and ask it again, and it “might give you a slightly different answer with different words because it’s responding to a different prompt.”
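As a rough illustration of the next-word prediction Aidid describes, the toy bigram model below (a drastic simplification, not how ChatGPT actually works internally) counts which words follow which in a tiny corpus, then samples the next word in proportion to those counts, which is why repeated or rephrased prompts can yield different answers:

```python
import random
from collections import Counter, defaultdict

# Toy bigram model: for each word, count which words follow it in a
# tiny "training" corpus, then sample the next word in proportion to
# those counts. Real LLMs are vastly more sophisticated, but the core
# idea -- predicting the next likely token -- is the same.
corpus = "the taxpayer filed the return and the taxpayer paid the tax".split()

follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def next_word(word: str) -> str:
    """Sample a likely next word; different runs may give different answers."""
    counts = follows[word]
    if not counts:
        return "<unknown>"
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(next_word("the"))  # e.g. "taxpayer", "return", or "tax"
```

Note that the model happily returns *something* for any word it has seen, with no notion of whether the answer is correct, which mirrors the demonstration above.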
At a separate panel, Ken Crutchfield—vice president and general manager of Legal Markets— said he asked ChatGPT who invented the Trapper Keeper binder, knowing in fact his father Bryant Crutchfield is credited with the invention. ChatGPT spit out a random name. In telling the story, Crutchfield said: “I went through, and I continued to ask questions, and I eventually convinced ChatGPT that it was wrong, and it admitted it and it said ‘yes, Bryant Crutchfield did invent the Trapper Keeper.'” Crutchfield said that when someone else tried asking ChatGPT who invented the Trapper Keeper, it gave yet another name. He tried it again himself more recently, and the answer included his father’s name but listed his own alma mater. “So it’s getting better and kind of learns through these back-and-forths with people that are interacting.”
Aidid explained that these instances are referred to as “hallucinations”: when an AI does not know the answer, it essentially makes something up on the spot based on the data and patterns it has absorbed to that point. If a user were to ask ChatGPT about the Inflation Reduction Act, it would hallucinate an answer because its knowledge currently extends only through September 2021. Still, generative AI like ChatGPT is more sophisticated than base-level tools that work off of “decision trees,” such as the IRS Tax Assistant Tool that taxpayers interact with, Aidid said. The Tax Assistant Tool, he added, is not generative AI.
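For contrast, a decision-tree-style tool of the kind Aidid distinguishes from generative AI can be sketched as nothing more than hard-coded branches. Every answer below is written by a human in advance, so the tool can only follow fixed paths and cannot hallucinate; the filing-status logic is purely hypothetical and illustrative, not actual IRS guidance:

```python
# Hypothetical sketch of a decision-tree tool in the spirit of the IRS
# Tax Assistant Tool: each branch and each response is hard-coded, so
# the tool never generates new text -- it only selects a pre-written
# answer. (Filing-status logic is illustrative only, not IRS guidance.)
def filing_status_tool(married: bool, filing_jointly: bool) -> str:
    if married:
        if filing_jointly:
            return "Consider filing status: Married Filing Jointly."
        return "Consider filing status: Married Filing Separately."
    return "Consider filing status: Single (other statuses may apply)."

print(filing_status_tool(married=True, filing_jointly=True))
```

The trade-off is clear: such a tool is predictable and auditable, but it can only answer questions its designers anticipated.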
Mindy Herzfeld, professor at the University of Florida Levin College of Law, responded that it is “especially problematic because the [Tax Assistant Tool] is implying that it has all that information and it’s generating responses based on the ‘world of information,’ but it’s really not doing that, so it’s misleading.”
Generative AI’s greatest potential lies in so-called deep learning tools, which are supposedly more advanced and complex iterations of machine learning platforms. Aidid said deep learning “can work with unstructured data.” Such technology can not only “synthesize and review information, but review new information for us. It’s starting to take all that and generate things—not simple predictions—but actually generate things that are in the style and mode of human communication, and that’s where we’re seeing significant investment today.”
Herzfeld said that machine learning is already being used in tax on a daily basis, but deep learning is “a little harder to see where that is in tax law.” These more advanced tools will likely be developed in-house at firms, likely in partnership with AI researchers.
PwC is working with Blue J in pursuit of tax-oriented deep learning generative AI to help reduce much of the clerical work that is all too time-consuming in tax practice, according to Kontaridis. “Freeing up staff to focus efforts to other things while AI sifts through mountains of data is a boon,” he said.
However, as the saying goes, with great power comes great responsibility. Here, that means guarding sensitive information and ensuring accuracy. Kontaridis said that “it’s really important to make sure before you deploy something like this to your staff or use it yourself that you’re doing it in a safe environment where you are protecting the confidentiality of your personal IP … and privilege that you have with your clients.”
Herzfeld echoed that practitioners should bear in mind how easily misinformation could be perpetuated through an overreliance or lack of oversight of AI, which she called a “very broadly societal risk.” Kontaridis assured the audience that he is “not worried about generative AI replacing our role and the tax professional … this is a tool that will help us do our work better.”
Referring to the “myth” that “CPA bots” will take over the industry, he said: “What I’m worried about is the impact it has on our profession at the university level of it, discouraging bright young minds from pursuing careers in tax and accounting consulting.”