Skip to content
Artificial Intelligence

A data security checklist for tax firms using AI

Thomson Reuters Tax & Accounting  

· 11 minute read

Thomson Reuters Tax & Accounting  

· 11 minute read

As the integration of AI technologies, including GenAI and agentic AI, become increasingly prevalent in the tax and accounting industry, ensuring data security is paramount.

Highlights

  • Tax firms face unprecedented AI security risks requiring immediate attention and compliance measures.
  • Proper encryption, access controls, and vendor evaluation protect sensitive taxpayer information effectively.
  • Comprehensive security training and incident response plans ensure regulatory compliance and client trust.

 

One data breach can destroy decades of client trust overnight.

During tax season, your firm processes thousands of sensitive documents—W-2s, 1099s, Social Security numbers, and complete financial records. One compromised AI tool could expose every client’s most private information to bad actors. The rapid adoption of AI tools in tax and accounting practices has created unprecedented opportunities for efficiency and accuracy. Yet these technologies introduce new security vulnerabilities that demand your immediate attention.

Whether you’re using AI to analyze Schedule C deductions, automate state tax calculations, or draft client correspondence, protecting sensitive client information is non-negotiable. This checklist will help you navigate the security landscape of AI adoption while maintaining compliance with IRS requirements, GLBA obligations, and state data protection laws.

 

Jump to ↓

1. Use data encryption on client data


2. Implement access controls and authentication


3. Conduct regular security audits and penetration testing


4. Select professional-grade AI tools with strong security features


5. Provide continuous security training


6. Minimize tax data collection and retention


7. Incorporate privacy considerations


8. Develop an incident response plan


9. Apply monitoring and logging systems for detection


10. Keep clients updated with your data protection measures


Agentic AI security for tax data


Generative AI security for tax data

 

1. Use data encryption on client data

You must encrypt every piece of client data—from Form 1040 to financial statements and source documents—during both transmission and storage. This dual-layer approach confirms that even if attackers intercept or access information without authorization, it remains unreadable.

AI-specific considerations

When selecting any AI platform for tax preparation, research, or client communication, verify that encryption protocols comply with IRS Publication 4557 (Safeguarding Taxpayer Data) requirements. AI systems often process information in memory or across cloud environments, creating potential vulnerabilities. Confirm your encryption standards extend to all workflows, including e-filing transmission governed by IRS Publication 1345 and client portal access.

2. Implement access controls and authentication

Not everyone in your firm needs access to every AI tool or client file. Define clear roles and limit permissions based on job function. Your tax preparers, engagement managers, and IT administrators should each have appropriate levels of access to these systems. Consider additional restrictions during peak tax season when temporary staff may need limited access.

Require multi-factor authentication (MFA)

Make MFA mandatory for all users accessing AI platforms and tax software. This simple step dramatically reduces the risk of improper access, even if attackers compromise login credentials. Add authentication requirements for applications that access PTIN information, CAF numbers, or e-filing systems.

3. Conduct regular security audits and penetration testing

Conduct thorough security audits and penetration testing at least annually (preferably before tax season begins). Include all AI systems in your tech stack within these assessments, not just your traditional tax preparation software.

AI technologies evolve rapidly, and so do the security threats targeting them. Update your platforms frequently with the latest security patches. If you’re using AI tools integrated with Thomson Reuters UltraTax CS, for example, verify your vendors maintain current security certifications and notify you promptly of vulnerabilities.

4. Select professional-grade AI tools with strong security features

Choose solutions that hold relevant security certifications such as SOC 2, ISO 27001, and demonstrate compliance with the FTC Safeguards Rule for tax preparers. These certifications confirm that vendors have put rigorous security controls in place and undergo periodic third-party audits.

Conduct thorough vendor evaluations

Before committing to any AI platform, assess the vendor’s security practices in detail. Ask specific questions about:

  • Compliance with IRS Publication 4557 and GLBA requirements
  • How they secure taxpayer information during processing
  • Whether they use your client data to train their models
  • Their incident response capabilities and breach notification procedures

Verify their practices meet state-specific requirements like CCPA (California) or NY SHIELD Act standards.

5. Provide continuous security training

Technology threats evolve constantly, and your team’s knowledge must keep pace. Provide training sessions that cover the latest security threats, phishing tactics targeting tax professionals, and best practices for handling sensitive information during tax season’s high-pressure environment.

Address AI-specific tax data security challenges

Standard cybersecurity training isn’t enough. Include specialized instruction on the unique security challenges these technologies create. Teach your staff to recognize risks like prompt injection attacks, data leakage through AI interactions, and the critical importance of not inputting Social Security numbers or ITIN data into unauthorized consumer-grade AI platforms.

6. Minimize tax data collection and retention

Collect and retain only the information necessary for tax preparation and compliance. When using AI to analyze client financials or generate tax projections, ask whether each data element is truly required. Excess data creates unnecessary risk, especially for source documents that might contain bank account numbers or investment details.

Set up data lifecycle management

Create clear policies for securely disposing of information once you no longer need it, following IRS recordkeeping requirements. This includes data the system uses during AI processing, information stored in system logs, and AI-generated outputs like draft returns or tax planning memos that may contain sensitive details.

Use data anonymization where possible

When testing new AI platforms or training your team, use anonymized or synthetic tax information whenever feasible. This reduces the risk of exposing real client details while allowing you to evaluate AI capabilities.

7. Incorporate privacy considerations

Before launching any new AI project, whether for tax research automation, return review, or client communication, conduct a privacy impact assessment (PIA) to identify potential risks under GLBA and state privacy laws. Consider how client information will flow through the system, where you will store it, who will have access, and what safeguards are necessary.

Confirm clients understand how you may use their information in AI-enhanced workflows and obtain necessary consents. Be prepared to explain your security measures, especially to high-net-worth clients or business entities with heightened privacy concerns. Offer alternatives for clients who prefer traditional processing methods.

8. Develop an incident response plan

Create a detailed incident response plan that specifically addresses AI-related security incidents. This plan should outline clear procedures for detecting, containing, and resolving breaches involving taxpayer data, including notification to your IRS stakeholder liaison for tax-related identity theft incidents.

If an incident affects client information, you have legal obligations under multiple regulations. Communicate promptly and transparently about what happened, what you’re doing to address it, and how you’re preventing future occurrences. Your state board of accountancy may require specific reporting as well.

9. Apply monitoring and logging systems for detection

Use advanced monitoring systems to detect and respond to improper access or anomalies in real-time. AI platforms can generate unusual activity patterns, like bulk access to prior-year returns or repeated queries about specific deductions to verify you have configured your monitoring tools to recognize AI-specific threats.

Keep extensive logs of all system activities for audit and forensic purposes. These logs should track who accessed the platform, what tax returns or client files the system processed, what queries users submitted, and what outputs the system generated. You need this documentation for compliance audits, investigating potential security incidents, and demonstrating due diligence to the IRS and state regulators.

10. Keep clients updated with your data protection measures

Share information with clients about your data protection measures on an ongoing basis. Focus on the outcomes your security practices deliver: confidentiality, integrity, and availability of their sensitive tax and financial information.

Don’t hide your use of AI technologies. Instead, position it as a competitive advantage explaining how AI enhances accuracy in tax preparation, identifies optimization opportunities, and improves service delivery, all while maintaining rigorous security standards that meet or exceed IRS requirements.

Agentic AI security for tax data

Agentic AI systems operate autonomously, making decisions and taking actions without continuous human oversight. When these systems access tax data, the security implications multiply.

Key safeguards for autonomous systems:

Set clear boundaries: Define explicit limits on what agentic AI can access. For example, if your AI automatically retrieves prior year returns to calculate basis for capital gains, restrict it from accessing unrelated clients’ files. Limit its ability to modify tax elections, file extensions, or submit returns without human review.

Control data access during tax season: Put granular permission controls in place that restrict which client files, tax years, and document types the agent can reach. During peak periods when processing hundreds of returns, these controls prevent cross-client data contamination.

Monitor autonomous actions: Log every decision the agent makes—whether it’s flagging an unusual Schedule A deduction, calculating estimated tax payments, or accessing state tax returns. These audit trails are essential if questions arise about AI-generated tax positions.

Prevent unauthorized filing or transmission: Configure agentic solutions to prevent them from independently e-filing returns, submitting IRS payments, or sharing client data with state tax authorities without explicit human approval.

Create escalation for complex tax issues: Design your agentic platforms to recognize when they encounter complex tax code interpretations or high-risk situations requiring CPA judgment. Build escalation pathways where the agent pauses and requests human review.

Generative AI security for tax data

Generative AI creates new content based on patterns learned from training data. When used for tax memo drafting, client correspondence, or research summarization, this capability introduces unique security risks.

Critical protection measures:

Prevent training data exposure: If developers trained a generative AI model on client tax returns or engagement letters, fragments of that confidential information could surface in outputs. Use only solutions that maintain strict separation between your client data and their training data, or that process information without incorporating it into the underlying model.

Protect against prompt injection attacks: Attackers can manipulate generative systems through crafted prompts that trick the application into revealing client tax information. For example, a prompt injection might attempt to make the AI “forget” restrictions and disclose details about a client’s offshore accounts or unreported income. Select platforms with strong input validation that can detect and block these manipulation attempts.

Validate outputs before client delivery: Before using AI-generated tax planning memos, research summaries, or client letters, review them carefully for unintended disclosure. Generative AI may combine information from multiple clients in unexpected ways, potentially creating outputs that reveal protected details about other engagements.

Control sensitive inputs: Train your staff never to input Social Security numbers, EINs, bank account details, or specific dollar amounts from tax returns into generative AI prompts unless you’re using applications specifically designed and secured for tax preparation. Develop clear guidelines about redacting sensitive data before using AI for document analysis or client communication drafting.

Address memorization risks for tax positions: Use generative AI providers that apply technical safeguards against memorizing specific client tax strategies, aggressive positions, or sensitive financial arrangements. Verify they do not use your client data to retrain their models in ways that could expose your clients’ tax planning approaches to other users.

Protecting client trust across all AI technologies

Tax and accounting professionals operate on trust. Your clients entrust you with their most sensitive financial information, their income, assets, deductions, and financial challenges. They’re confident that you’ll protect it with the highest standards of professional care and compliance with IRS regulations.

Inadequate data security doesn’t just risk client relationships. It can trigger FTC enforcement action for Safeguards Rule violations, loss of IRS PTIN and e-file privileges, state board sanctions, malpractice claims, and Circular 230 disciplinary proceedings.

The firms that thrive in the AI era won’t be those that adopt the technology fastest. They’ll be the ones that adopt it most securely. By following this checklist across all your AI implementations, you’re not just protecting data. You’re protecting your professional reputation, your firm’s future, and your clients’ financial security.

Ready to dive deeper into AI security best practices for tax professionals? Download our detailed white paper, “What every firm needs to know about AI tools and data security,” for practical guidance on implementing these strategies across your entire practice.

More answers