
Tax Pros: Data Readiness, Protection Critical to AI Strategy

Tim Shaw  

· 5 minute read


Tax practices seeking to begin developing an artificial intelligence (AI) strategy should understand their end goals and limitations from the get-go, with an emphasis on how data is collected and safeguarded, a panel of tax and tech leaders at PricewaterhouseCoopers (PwC) advised.

Dina Brozzetti, managing director of the firm’s innovation hub, said at the outset of the April 25 webinar that introducing AI into a tax practice “starts with responsibility and trust,” rather than diving straight into development with only the potential benefits in mind.

“[Y]ou need to know the guardrails in which you can operate,” said Brozzetti. “Just because you can do something doesn’t mean you should do it.”

Tax Reporting and Strategy Principal Kristin Born said “perhaps the biggest issue” tax functions at organizations face is having reliable data. “This generally stems from data not being appropriately collected or validated at the source, which can lead to significant issues” down the road with access and compliance.

Asset Wealth Management Tax Partner Chris Hefty recommended first considering desired outcomes and internal limitations, starting with data. Questions that should be asked include: 1) what data is needed; 2) what systems house the data or if it needs to be sourced from clients; 3) what authorizations are needed to use the data; and 4) how much “manipulation or potential enhancement” of the data is necessary.

From there, Hefty continued, it is a matter of pinning down what technical resources are required. This means gauging whether the organization has personnel with an appropriate level of expertise and whether the processes and technology frameworks are in place to make certain projects possible. Finally, practices should determine whether development can be done internally or whether it would make more sense to use an outside partner, said Hefty.

Born added that there is “often this tension that exists” during customer or vendor onboarding, with data collection a pain point for many businesses and thus an example of a use case where AI could be leveraged. But organizations may have a long list of areas across their workstreams where such technology could be deployed. Tax AI Solutions Director David Resseguie suggested identifying “common flows” among use cases: patterns in the AI capabilities required and in how data flows through the systems involved.

“If you’ve identified those, now, you can focus that investment that you have on building out that infrastructure, those integration points,” said Resseguie. Doing so will help “get to value quicker for the individual use cases.”

A key takeaway from the panel is the importance of building “with responsibility in mind,” as Brozzetti described. No single individual is responsible for overseeing the governance of AI strategies; rather, “everyone” at the organization is, not just the tax team, she said. Organizations need to be realistic about their appetite for risk without being “so constrained” that they are “not going to reap the benefits.”

Having multiple “lines of defense” with oversight of AI use — from model development to setting standards and policies everyone can be trained on to internal audits — is essential, according to the presentation.

Brozzetti said the “data risk is huge” beyond having complete and accurate data. “[T]he number one thing that regulators are looking at is privacy … and that landscape is moving fast.” From an infrastructure perspective, there are “cyber risks” with data “moving in and out” that “need to be managed,” she cautioned, recommending that organizations keep their legal teams involved.
