The opinion said that under Model Rule 1.1, which governs competent representation, attorneys have an ethical obligation to understand the benefits and risks of any generative AI tool that they use in their legal practice.
As part of that obligation, attorneys must be aware of changes in the benefits and risks of generative AI tools as they develop, according to the opinion.
"Although there is no single right way to keep up with [generative AI] developments, lawyers should consider reading about GAI tools targeted at the legal profession, attending relevant continuing legal education programs, and, as noted above, consulting others who are proficient in GAI technology," the opinion said.
The opinion noted that one of the risks of generative AI tools is false outputs, known as hallucinations, so attorneys have an ethical obligation under Model Rule 1.1 to review or verify the tools' outputs.
How much review or verification attorneys need to do to meet their ethical obligation depends on the generative AI tool and the task being performed, according to the opinion.
"While GAI tools may be able to significantly assist lawyers in serving clients, they cannot replace the judgment and experience necessary for lawyers to competently advise clients about their legal matters or to craft the legal documents or arguments required to carry out representations," the opinion said.
Over the last year and a half, generative AI tools have exploded in the legal industry, and attorneys have been navigating the technology with varying levels of success.
Last year, two New York personal injury attorneys made headlines for submitting a ChatGPT-generated brief with fake case citations. The attorneys were ultimately sanctioned for their mistake.
Since the New York case, several other courts, including Texas and Missouri state appeals courts, have called out litigants for submitting AI-generated court filings with fake case citations. A Manhattan federal judge also criticized a law firm for using ChatGPT to support its attorney fee request of more than $100,000.
A few state bar associations, including California's and Florida's, have issued ethical guidance on the use of generative AI in the practice of law.
In addition, several judges have implemented standing orders on AI, requiring attorneys to disclose their use of the technology or banning the technology outright.
The bar association's opinion noted that as generative AI tools continue to develop, attorneys might one day have to use them to provide competent legal services to their clients.
The opinion also addresses attorneys' ethical duties related to confidentiality, client communication, supervision and fees when using generative AI tools.
The opinion said that under Model Rule 1.6, attorneys must keep all information related to client representation confidential.
Therefore, attorneys must evaluate the risks of information being disclosed or accessed by others before inputting client information into generative AI tools, according to the opinion.
"Because GAI tools now available differ in their ability to ensure that information relating to the representation is protected from impermissible disclosure and access, this risk analysis will be fact-driven and depend on the client, the matter, the task and the GAI tool used to perform it," the opinion said.
The opinion said that attorneys should refer to Model Rule 1.4 when determining whether they are required to disclose generative AI use to clients who don't ask.
However, if clients specifically ask about generative AI practices, attorneys should disclose how they are using the technology, according to the opinion.
"There may be situations where a client retains a lawyer based on the lawyer's particular skill and judgment, when the use of a GAI tool, without the client's knowledge, would violate the terms of the engagement agreement or the client's reasonable expectations regarding how the lawyer intends to accomplish the objectives of the representation," the opinion said.
The opinion briefly acknowledged that attorneys' use of generative AI has led to fake case citations.
The opinion said that under Model Rules 3.3 and 8.4(c), lawyers must not make false statements to a tribunal or engage in misrepresentation.
"In judicial proceedings, duties to the tribunal likewise require lawyers, before submitting materials to a court, to review these outputs, including analysis and citations to authority, and to correct errors, including misstatements of law and fact, a failure to include controlling legal authority and misleading arguments," the opinion said.
The opinion also advised attorneys to make sure that lawyers and nonlawyers under their supervision are trained in the appropriate use of generative AI and to be aware that some generative AI expenses should not be passed on to clients.
"To the extent a particular tool or service functions similarly to equipping and maintaining a legal practice, a lawyer should consider its cost to be overhead and not charge the client for its cost absent a contrary disclosure to the client in advance," the opinion said. "In contrast, when a lawyer uses a third-party provider's GAI service to review thousands of voluminous contracts for a particular client and the provider charges the lawyer for using the tool on a per-use basis, it would ordinarily be reasonable for the lawyer to bill the client as an expense for the actual out-of-pocket expense incurred for using that tool."
--Editing by Karin Roberts.
For a reprint of this article, please contact reprints@law360.com.