It may be tempting to explore generative AI’s capabilities and investigate how it may simplify your life, but there are more immediate practical concerns and considerations for CPAs.
By Sarah Beckett Ference, CPA
“Write a 1,200-word article regarding how CPA firms can manage risk related to the use of ChatGPT and other forms of generative AI.”
This simple ChatGPT request would have produced this month's column far more expeditiously than writing one from scratch, but would the result have been as informative or accurate? ChatGPT is an example of generative artificial intelligence (AI), and its popularity has soared since its introduction to the public in November 2022. Given ChatGPT's versatility, speed, and ability to engage in human-like conversations, it's easy to see why.
Generative AI models learn from input data and can then generate new data based on what they have learned. While it may be tempting to explore ChatGPT’s capabilities and investigate how generative AI may make your life easier, there are more immediate practical concerns and considerations for CPA firms before using these tools.
Professional liability risks
Confidentiality
CPA firms handle a vast amount of financial and personal information related to the firm’s clients, owners, and employees, making data privacy and security a top priority. Myriad data protection laws and regulations — too many to name — require the holders of confidential individual and personal data to protect it.
When data is entered into a generative AI tool, you are sharing that data with the AI tool’s owners and, thus, entrusting them to protect this data. Have you read the tool’s terms and conditions and privacy policies to understand how data is protected? What happens if the AI system experiences a data security incident and unauthorized individuals access your firm’s or your client’s sensitive data? A data breach can have significant financial and reputational consequences for a CPA firm, and a generative AI tool’s owner may attempt to disclaim liability for a data security incident.
Reliability
AI models are not infallible. In fact, ChatGPT’s terms of service at the time of this writing remind users that AI and machine learning are evolving fields of study and acknowledge that use of ChatGPT may “result in incorrect output that does not accurately reflect real people, places or facts.” Responses from generative AI are based on the patterns and data on which it has been trained. If the data used to train the AI model is out of date, inaccurate, or incomplete, then the output may also be inaccurate or incomplete.
For example, if a CPA were to ask ChatGPT about the merits of a certain tax return position, ChatGPT may pull from various online sources to provide a response without being capable of differentiating between the sources that may provide reliable guidance and those that may not. In other words, garbage in, garbage out.
Further, unlike humans, generative AI currently lacks the ability to understand context and nuance, which may be critical to arriving at a proper result. Today's generative AI tools are machine-learning systems that do not possess the same level of understanding, analysis, and judgment as a human being. Generative AI may not understand the complexities of professional standards, tax laws, or financial reporting frameworks.
While generative AI is likely able to reliably answer a straightforward question — such as, “What is the financial reporting standard governing lease accounting under U.S. generally accepted accounting principles?” — its answer to a question regarding the application of the same standard to a specific transaction may not be accurate. In addition, generative AI may not understand the question’s intent if the question is not phrased properly.
This potential for misinterpretation or misapplication of standards, laws, and regulations can have disastrous consequences if human judgment and professional skepticism are removed. Indiscriminate reliance on a response or an answer derived from AI when delivering services without critically analyzing whether that result is correct may lead to errors or omissions and potential professional liability claims.
Risk management recommendations
Chances are, personnel at your firm already use ChatGPT to some extent. It’s still relatively novel, and its capabilities are fun to explore. Before jumping into the deep end of the generative AI pool, CPA firms should consider the following:
Understand the limitations of any specific generative AI tool used by the firm
Remember that ChatGPT and other forms of generative AI are, fundamentally, tools — albeit highly sophisticated ones. And just like any other tool, before using it, one must first understand the tool's purpose, limitations, and instructions for use. If a firm uses generative AI, perform due diligence. What datasets were used to train the tool? How current are its inputs? Are there limitations of which the firm should be aware?
Develop a policy for appropriate use
Draft a firmwide policy for how generative AI may be used. A clear policy shared and reinforced with all personnel will help promote consistency of use throughout the firm.
- Scope: What are the specific purposes or tasks for which employees should be permitted to use generative AI? Some items may include creating a first draft of emails or reports or conducting initial research. Specifically identifying permissible uses helps avoid ambiguity regarding when AI should and should not be used. It may also be useful to create separate sub-policies that apply to specific roles or groups at the firm. For example, HR personnel may have more restrictions regarding use of generative AI in the recruitment and talent development processes as generative AI may introduce unintended bias in these processes, which may violate employment laws and regulations.
- Guidelines related to data inputs: Prohibit the sharing of confidential and proprietary client and firm information with generative AI tools. Advise all firm personnel to take the same level of care with information shared with generative AI as they would if they were posting on a public site, such as social media.
- Responsibility for review of outputs: Unlike you, ChatGPT and other generative AI tools have not been formally educated and trained in the practice of public accountancy and are not licensed CPAs. It is important to supervise and review the output from generative AI just as you would the work of any other engagement team member. Indeed, ChatGPT's terms of service at the time of this writing remind users to "evaluate the accuracy of any output as appropriate for your use case, including by using human review of the output." Consider requiring firm personnel to inform their supervisor when work has been created or developed using generative AI. Doing so will help ensure that the output is properly reviewed.
Training, monitoring, and oversight
As with any new policy, train firm personnel on the firm’s new generative AI policy and regularly monitor usage of generative AI tools to ensure adherence to the policy.
Consult with counsel
Consult with the firm’s counsel to help understand the terms of service and privacy policies of any generative AI model being used, the potential data security issues posed by how the firm intends to use generative AI, and whether usage requires any formal client communication or consent.
Stay abreast of changes in the generative AI landscape
Unlike the development of new professional or financial reporting standards or the enactment of new tax legislation, the evolution of generative AI technology and its capabilities will likely continue at a rapid pace, requiring CPA firms to be flexible and responsive. Monitor usage, trends, and developments in generative AI and be prepared to adjust the firm's policy and approach.
Something to talk about
100 million
The number of ChatGPT users by January 2023. The generative AI platform amassed 1 million users within its first five days and then took off, making it the fastest-growing platform on record.
Source: Mailbutler, “ChatGPT + AI statistics and trends — Updated July 2023”