
Data Privacy in the Age of AI: More Than Just a Tax Issue

Tim Shaw, Checkpoint News Senior Editor

· 6 minute read


Tax practitioners face an expanding set of data security regulations, and the rapid adoption of artificial intelligence is adding layers of legal and ethical complexity that many firms are unprepared for, according to experts who spoke at a recent American Bar Association (ABA) panel and in interviews with Checkpoint.

Practitioners must comply with federal, state, and even international laws, all while defending against increasingly sophisticated cyber threats that treat professional firms as attractive targets. The failure to adequately protect client information can lead to steep regulatory fines, expensive litigation, and irreparable reputational harm.

Legal Guardrails

At the federal level, the legal cornerstone for tax professionals is the Gramm-Leach-Bliley Act (GLBA), which treats tax preparers as “financial institutions” regardless of their size. That designation brings them under the purview of the Federal Trade Commission’s (FTC) Safeguards Rule, which mandates a formal data protection program, Lisa V. Zivkovic, a member of the cybersecurity and privacy team at Skadden, Arps, Slate, Meagher & Flom LLP, told the audience at the ABA’s May Tax Meeting in Washington, D.C.

The Safeguards Rule requires firms to create and maintain a comprehensive written information security plan, or WISP. The plan must be reviewed and updated regularly as the firm’s operations and technology evolve. Requirements include designating a qualified individual to oversee the program, conducting regular risk assessments, implementing and testing safeguards, and carefully managing vendor relationships. The IRS provides detailed resources for this, including IRS Publication 4557, Safeguarding Taxpayer Data, and a direct template in IRS Publication 5708, Creating a Written Information Security Plan for your Tax & Accounting Practice.

Professional ethics rules are also evolving to address technology. While Circular 230 currently encourages “best practices” like maintaining confidentiality, proposed changes aim to make these obligations more explicit.

At the same panel, Professor Annette Nellen of San José State University presented proposed regulatory language showing that the definition of competence would be expanded to require “understanding the benefits and risks associated with relevant technology that is used by the practitioner to provide services to clients or to store and transmit tax return and other confidential information.” The proposals also describe creating a data security policy as a best practice for all firms.

Beyond the federal baseline, a growing patchwork of state and international laws creates additional compliance hurdles. Zivkovic pointed to California’s Consumer Privacy Act (CCPA) as one of the most demanding, as it now covers business-to-business contact data and imposes rules on data collected from employees. Other states have their own unique mandates. Massachusetts, for instance, has one of the strictest security rules, making encryption of personal information in transit and at rest a legal requirement, not just a best practice.
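A mandate like Massachusetts’ implies operational controls that catch personal information before it leaves a firm in plaintext. As a purely illustrative sketch (the pattern and function below are hypothetical, not drawn from any regulation or vendor tool), an outbound check might flag SSN-like strings so the file is routed through an encrypted channel instead:

```python
import re

# Hypothetical pre-transmission check: flag SSN-like patterns so files
# containing them are only sent encrypted. A real control would cover
# more identifiers (EINs, account numbers) and scan binary formats too.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def contains_plaintext_ssn(text: str) -> bool:
    """Return True if the text appears to contain an unencrypted SSN."""
    return SSN_PATTERN.search(text) is not None

print(contains_plaintext_ssn("Client SSN: 123-45-6789"))  # True
print(contains_plaintext_ssn("Invoice total: 1,234.56"))  # False
```

A scan like this is only a detective control; satisfying the Massachusetts rule still requires actual encryption of the data in transit and at rest.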

Many U.S. tax firms may not realize that regulations like Europe’s General Data Protection Regulation (GDPR) can also apply to them. Processing returns for EU residents, handling HR data for a client’s EU subsidiary, or even using a web portal that tracks user behavior can trigger GDPR’s extraterritorial reach, which comes with the risk of severe fines.

The Rise of AI and Vendor Risk

The adoption of artificial intelligence tools introduces another source of risk. While AI can create efficiencies, experts warn that using it without proper analysis is dangerous. “What I see more so is there being great uses for AI, but people don’t go about implementing it, in my opinion, in the correct series,” Ricardo Gilmore, a partner at DarrowEverett LLP, told Checkpoint. “Before adoption, there needs to be first analysis, not only of what the problem is you’re trying to solve, but also about what infrastructure you already have.”

One danger is “purpose limitation drift,” where data collected for one reason is repurposed for another, such as training an AI model. This can violate the terms under which the client data was originally gathered. Gilmore also warned that organizations cannot simply delegate their duties to an algorithm. Using public housing as an example, he noted that the U.S. Department of Housing and Urban Development (HUD) has clarified that using an AI tool for applicant analysis “does not exonerate you from a claim of fair housing discrimination. You don’t abdicate your responsibilities to AI.”

Professor Nellen highlighted the problem of “shadow AI,” where employees use consumer-grade AI tools without their employer’s knowledge, potentially uploading sensitive client data to unsecured platforms. Other easily overlooked risks include copy machines that store images of documents on internal hard drives and improperly “sanitized” Word documents that still contain confidential client data in their file properties.
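The hidden file properties Nellen described are easy to see for yourself: a .docx file is a ZIP archive, and its author, comment, and revision metadata live in an internal part named docProps/core.xml that deleting visible text does not touch. The helper below is an illustrative sketch (the function name and the minimal stand-in archive are assumptions for the example), using only the standard library:

```python
import io
import zipfile

def docx_core_properties(docx_bytes: bytes) -> str:
    """Return the raw core-properties XML inside a .docx container.

    Author names, comments, and revision info live here and are not
    removed by simply deleting text from the visible document.
    """
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as zf:
        return zf.read("docProps/core.xml").decode("utf-8")

# Minimal stand-in archive for illustration; a real .docx contains many
# more parts, and its core.xml uses Dublin Core XML namespaces.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("docProps/core.xml",
                "<coreProperties><creator>J. Smith, CPA</creator>"
                "</coreProperties>")

print("J. Smith" in docx_core_properties(buf.getvalue()))  # True
```

Word’s built-in Document Inspector, rather than a hand-rolled script, is the practical way to strip this metadata before a file leaves the firm; the point of the sketch is how little effort an outsider needs to read it.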

Defending Against Modern Cyber Threats

Cybercriminals are actively targeting tax and law firms. Matt Harvey, a cybersecurity expert with CrowdStrike, said on the ABA panel that both financially motivated criminals and nation-state actors view firms as attractive targets, since a single firm can hold sensitive data on multiple clients of interest. The primary threats are ransomware and data theft. Harvey explained that modern attacks often combine encrypting a firm’s systems to disrupt availability with exfiltrating the data to use for extortion, threatening confidentiality.

To defend against these threats, Harvey stressed the importance of proper data governance, well-tested backups, and modern security software. However, he cautioned that simply having multi-factor authentication (MFA) is not enough. Threat actors are actively finding ways to defeat it, often through social engineering that dupes employees into approving a fraudulent login. He warned against simple push-notification MFA, where users are “trained” to “see big green button, press big green button” without thinking.
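One common hardening step against the push-fatigue pattern Harvey described is number matching: instead of a bare approve button, the login screen displays a short code that the user must type into the authenticator, so a fraudulent push cannot be approved by reflex. A minimal conceptual sketch (the function names are hypothetical; real MFA products implement this server-side with rate limits and expiry):

```python
import secrets

def issue_challenge() -> str:
    """Two-digit code displayed on the genuine login screen."""
    return f"{secrets.randbelow(100):02d}"

def verify(displayed: str, typed: str) -> bool:
    """Approve only if the user typed the code actually displayed.

    compare_digest gives a constant-time comparison, avoiding timing
    leaks about how many leading digits matched.
    """
    return secrets.compare_digest(displayed, typed)

code = issue_challenge()
print(verify(code, code))   # True: the user saw the real login screen
print(verify(code, "xx"))   # False: blind approval no longer works
```

The design point is that approval now requires information only visible on the legitimate login screen, which an attacker triggering pushes from their own session does not have in front of the victim.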

For smaller firms that lack resources for outside consultants, Gilmore stressed the importance of internal vigilance. Investing in “sufficient training and someone on your staff and having that be a part of their charge is to keep us up to date on AI” is critical, he said. That training must also empower employees to act as a human firewall. “People need to know that that’s happening so that they can respond to what’s going on,” Harvey said, urging firms to train employees to immediately report any suspicious phone call or email to their security team.

Looking forward, Gilmore sees a fundamental, human-level risk in over-relying on AI for core business skills like communication. He worries about a future where professionals can no longer function without the technology. “I’ve hired you, and you’re creating this great work, and now the electricity goes off and I find out you can’t put together two sentences,” he said. “It scares me that we don’t know who we are hiring.”

 
