While AI can be a powerful tool, tax practitioners should always verify the accuracy of AI-generated output and take steps to protect data privacy and attorney-client privilege, according to tax attorney Julie Bradlow, chair of DarrowEverett’s Tax Practice Group.
It’s important to check for factual errors and ensure the proper context is applied, Bradlow told Checkpoint. She also advises against using public AI platforms for client work due to data privacy risks and the potential for losing attorney-client privilege.
Verify AI Output to Avoid ‘Hallucinations’
A well-known risk of using AI is possible “hallucinations” — where technology fabricates information, including legal citations and case holdings. Bradlow recalled an instance where a colleague asked ChatGPT about a legal concept and it returned a citation to “a revenue ruling that did not exist.”
Bradlow said that it’s best practice to “look at the original source and see what it says.” And that applies whether she’s using an AI summary or simply reading a treatise. “It’s nice to have somebody else’s summary of a situation, or their interpretation of what a particular original source says, but there’s no substitute for looking at [it] oneself,” she stressed.
A recent Tax Court case highlights the danger of relying on unverified AI output. In Clinco v. Commissioner (T.C. Memo. 2026-16), an attorney submitted a brief containing citations to nonexistent cases. “The bouillabaisse of case names, reporter citations, and legal propositions suggests something cooked up by AI,” wrote Judge Mark V. Holmes.
The court harshly rebuked the attorney, stating the presence of such “apparitions” and “fictitious caselaw” in legal briefings is “unacceptable” and a “recipe for sanctions.”
To prevent such errors, Bradlow stressed the importance of having a “human in the loop” to review all AI-generated work. “Before any attorney signs something that’s going to be filed with the court or an administrative agency, whether it’s the IRS or any other agency, they ought to be able to stand behind what’s in it,” she said. In Bradlow’s view, there is not yet a substitute for a human verifying that the authorities in a brief “stand for the proposition for which they’re being cited.”
Bradlow also noted that state bars may impose, as part of an attorney’s duty of competence, a requirement to “keep abreast” of technology such as AI — including its risks. She cited North Carolina’s state bar rules as an example of imposing “a duty of technological competence.”
Keep Client Data Out of Public AI
In addition to accuracy concerns, Bradlow warned that practitioners risk waiving client confidentiality and privilege when using AI. “I would never put a client document into the public version of ChatGPT,” Bradlow said, emphasizing an attorney’s duty to protect client secrets. Putting confidential information into a public system, she noted, is a violation of that duty.
Instead, Bradlow suggests that practitioners opt for professional, secure platforms designed for legal work. Such platforms should guarantee that client data is kept private and is not used to train the model.
She also recommends having an IT security expert vet an AI vendor’s data privacy and security protocols before adopting the technology in a practice. Bradlow noted that an attorney’s ethical duty of technological competence includes knowing when to bring in an expert.
Bradlow said the attorney-client privilege risk is underscored by a recent case, United States v. Heppner (2026 WL 436479 (S.D.N.Y. 2026)). In that case, Bradley Heppner, a defendant in a criminal investigation, used the public AI platform Claude to help prepare his defense strategy. The district court ruled that the defendant’s communications with the AI were not protected by attorney-client privilege or the work product doctrine. The court found there was no reasonable expectation of confidentiality when information is voluntarily disclosed to a third-party AI platform whose privacy policy explicitly reserved the right to disclose user data to “a host of ‘third parties,’ including ‘governmental regulatory authorities.’”
Because Heppner acted on his own volition and was not directed by his attorney to use the AI, the court also found the work product doctrine did not apply. The doctrine protects materials prepared “by or at the behest of counsel,” and the defendant’s AI-generated documents did not reflect his lawyer’s mental processes or strategy.
While defendant Heppner used AI himself, Bradlow advises that professionals use “closed and proprietary AI systems” to avoid such pitfalls when representing clients. “I always believe that the number one duty that every attorney has is to keep their clients’ confidences and their secrets,” she said. “Taking their documents and putting them in the public domain — that’s a violation of that duty.”