The Hawai‘i Rules of Professional Conduct Rule 1.1, comment 6 states: “To maintain the requisite knowledge and skill, a lawyer should engage in continuing study and education and keep abreast of changes in the law and its practice, including the benefits and risks associated with relevant technology” (emphasis added).
The practice of law is the gathering, organizing, and communicating of information, so developments in information technology and communications technology inherently affect legal practice. Some of the issues raised by past technological developments were predictable, but many were not, and several of these same concerns will arise again with the expanded use of artificial intelligence (“AI”). It is important for legal practitioners, as well as those across a multitude of industries, to at least be aware of the current landscape, dangers, and possible uses of this new technology.
Technical Background on Machine Learning and AI
Machine learning uses a mathematical algorithm with multiple tunable parameters to generate multiple mathematical “models” (by trying various values for the tunable parameters) that can discriminate between multiple categories of “training” data. All of the models are then applied to “test” data that supposedly contains the same categories as the “training” data. The model that best discriminates between the categories in the “test” data is selected as the “trained” model.
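The process described above can be sketched in a few lines of code. The following is a minimal, hypothetical illustration using a toy one-parameter “model” (a numeric threshold separating two categories); the data values and candidate thresholds are invented for illustration only.

```python
# "Training" data: (value, category) pairs for two categories, A and B.
train = [(1.0, "A"), (2.0, "A"), (3.0, "A"), (7.0, "B"), (8.0, "B"), (9.0, "B")]
# "Test" data supposedly containing the same two categories.
test = [(2.5, "A"), (3.5, "A"), (6.5, "B"), (8.5, "B")]

def accuracy(threshold, data):
    """Fraction of points the threshold model classifies correctly."""
    correct = sum(1 for value, cat in data
                  if ("A" if value < threshold else "B") == cat)
    return correct / len(data)

# Generate multiple candidate models by varying the tunable parameter...
candidates = [t / 2 for t in range(2, 19)]  # thresholds 1.0, 1.5, ..., 9.0
# ...keep the models that discriminate between the categories in the training data...
good = [t for t in candidates if accuracy(t, train) == 1.0]
# ...then select the model that best discriminates on the test data as "trained".
trained = max(good, key=lambda t: accuracy(t, test))
print(trained, accuracy(trained, test))
```

Real machine-learning systems tune millions or billions of parameters rather than one, but the select-the-best-performing-model principle is the same.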
AI (technically, generative AI) uses machine learning to generate natural language responses to natural language prompts from a human user, based on the data on which the model was trained.
This is done by using machine learning to analyze the text of a user’s prompt, predict what text would best respond to the prompt, and word that text in the most persuasive manner, based on statistical analyses of the data on which the AI was trained.
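The statistical prediction at the heart of this process can be illustrated with a toy next-word predictor trained on a tiny hypothetical corpus. Real generative AI uses neural networks trained on vastly larger corpora, but the underlying principle, predicting the statistically most likely continuation of the text so far, is the same.

```python
from collections import Counter, defaultdict

# A tiny hypothetical "training" corpus (real systems train on billions of words).
corpus = ("the court granted the motion . "
          "the court denied the appeal . "
          "the court granted the appeal .").split()

# Count, for each word, which words follow it in the training data.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def predict(word):
    """Return the statistically most likely next word after `word`."""
    return follows[word].most_common(1)[0][0]

print(predict("court"))  # "granted" follows "court" in 2 of 3 occurrences
```

Note that the predictor can only ever produce words that appeared after “court” in its training data, a limitation discussed at the end of this article.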
In November 2022, San Francisco-based OpenAI publicly released its chatbot ChatGPT, making AI widely accessible to the masses. Since its launch, ChatGPT has been followed by similar products from tech companies including Google and Microsoft, as well as by applications and tools that utilize generative AI. There is now widespread use across multiple industries, including the legal field.
Referring to ChatGPT, in a March 21, 2023, blog post entitled “The Age of AI has Begun”, Bill Gates said, “As computing power gets cheaper, GPT’s ability to express ideas will increasingly be like having a white-collar worker available to help you with various tasks.” While there are numerous benefits, there are also many risk factors that must be addressed.
Legal Issues with AI, and Legal Responses
Generative AI is viewed as being of such great importance in so many industries that on October 30, 2023, President Biden issued Executive Order 14110, entitled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence”.
One major potential issue that has resulted from the use of AI in legal practice is “hallucinations”, in which AI cites non-existent judicial opinions with fake quotes and citations. Mata v. Avianca, Inc., No. 22-cv-1461-PKC, — F. Supp. 3d —, 2023 WL 4114965 (S.D.N.Y. 2023). Indeed, Donald Trump’s former attorney and fixer, Michael Cohen, used Google’s Bard to perform legal research, which resulted in three nonexistent cases being cited in a legal memo he provided to his attorneys. See Order to Show Cause, U.S. v. Michael Cohen, 2023 WL 8635521 (Dec. 12, 2023). The problem of hallucinations in the preparation of legal papers has even been recognized by the U.S. Supreme Court.
On June 2, 2023, MIT formed a Task Force on Responsible Use of Generative AI for Law, which released version 0.2 of its guidance expressing seven principles, including duties of confidentiality and competence.
Orders Relating to AI in the Practice of Law in Hawai‘i
Here in Hawai‘i, Judge Leslie Kobayashi of the U.S. District Court for the District of Hawai‘i issued an order requiring disclosure of the use of AI in drafting any documents. On November 14, 2023, the four judges of the same court issued General Order 23-1, which requires any counsel or pro se party submitting a filing generated by an AI platform, or by persons compensated to produce materials not tailored to specific cases (collectively, “unverified sources”), to concurrently file a declaration captioned “Reliance on Unverified Source” that discloses the reliance on unverified sources and verifies that any such material is not fictitious. However, this order “does not affect the use of basic research tools such as Westlaw, Lexis, or Bloomberg, and no declaration is required if all sources can be located on such well-accepted basic research tools.”
Guidance and Orders in Other States
On November 16, 2023, the Committee on Professional Responsibility and Conduct of the State Bar of California issued “Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law.”
The State Bar of California’s Practical Guidance identified several problems with AI, including the following:
- Confidentiality – the AI being used may retain information in prompts to be used for subsequent training, and then disclose that same information in response to later prompts by others. Also, AI sites have already been hacked and data has already been taken;
- Hallucinations – the AI creates false citations and information;
- Duty to Comply with the Law – AI and its use raise many issues under existing laws, including under laws relating to privacy, technical data export restrictions, intellectual property, and cybersecurity;
- Bias – the AI may have been trained on biased “training” data and therefore fail in properly classifying data;
- Duty to Supervise – attorneys have the duty to supervise AI, just as they have the duty to supervise associates and assistants;
- Duty to Disclose – attorneys may have the duty to disclose to clients the use of AI, and the associated benefits and risks;
- Attorneys’ Fees – fee agreements should explain the basis for all fees and costs, including the use of AI, and attorneys who charge fees based on time spent cannot charge hourly fees based on time saved by using AI;
- Candor to the Tribunal – attorneys may have a duty to disclose the use of AI to the tribunal; and
- Unlawful Discrimination – attorneys should be aware of possible biases in AI and risks that biases may present.
On January 19, 2024, the Florida Bar issued Ethics Opinion 24-1 relating to the use of AI. This opinion discussed concerns over AI relating to confidentiality of information; oversight to verify accuracy and sufficiency; legal fees and costs (informing a client of the intent to charge for the use of AI, and ensuring charges are reasonable and not duplicative); and lawyer advertising (informing prospective clients that they are communicating with an AI program and not a lawyer, and not claiming a firm’s generative AI is superior to other firms’ generative AI unless that claim is objectively verifiable).
On January 24, 2024, the New Jersey Supreme Court issued a Notice to the Bar entitled “Legal Practice: Preliminary Guidelines on the Use of Artificial Intelligence by New Jersey Lawyers”. This guidance stated that AI does not change lawyers’ duties, and that “Because AI can generate false information, a lawyer has an ethical duty to check and verify all information generated by AI to ensure that it is accurate.” The guidelines also said “A lawyer is responsible to ensure the security of an AI system before entering any non-public client information”, and that a lawyer’s responsibility for overseeing other lawyers and nonlawyer staff, as well as law students and interns, “extends to ensuring the ethical use of AI by other lawyers and nonlawyer staff.”
Some judges and at least one magistrate judge in federal district courts have already issued orders requiring disclosure of the use of AI in drafting briefs, including Judge Brantley Starr of the U.S. District Court for the Northern District of Texas, Judge Michael Baylson of the U.S. District Court for the Eastern District of Pennsylvania, and Magistrate Judge Gabriel A. Fuentes of the U.S. District Court for the Northern District of Illinois. Similarly, on June 8, 2023, Judge Stephen Alexander Vaden of the U.S. Court of International Trade issued an order requiring disclosure of the use of AI.
On February 22, 2024, Judge Paul Engelmayer of the U.S. District Court for the Southern District of New York issued an order in J.G. v. New York City Department of Education, 23 Civ. 959 (PAE) stating that a law firm’s “invocation of ChatGPT as support for its aggressive fee bid is utterly and unusually unpersuasive.”
Guidance Issued by U.S. Copyright Office
The U.S. Copyright Office issued copyright registration guidance on March 16, 2023, indicating that applicants have a duty to disclose the inclusion of AI-generated content in a work and to provide a brief explanation of the human author’s contributions, because copyright protection applies only to works with human authorship. On August 18, 2023, the U.S. District Court for the District of Columbia held that an AI-generated work is not eligible for copyright. Thaler v. Perlmutter, — F. Supp. 3d —, 2023 WL 5333236 (D.D.C. Aug. 18, 2023), presently on appeal.
Guidance Issued by U.S. Patent and Trademark Office
The Director of the U.S. Patent and Trademark Office (“USPTO”) issued a February 6, 2024, memorandum indicating that existing regulations will apply to the use of AI, and promising that new guidance is to come. Tellingly, this memorandum stated, “Simply assuming the accuracy of an AI tool is not a reasonable inquiry”, citing Mata v. Avianca, Inc., No. 22-cv-1461-PKC, — F. Supp. 3d —, 2023 WL 4114965, at *15-16 (S.D.N.Y. June 22, 2023).
On February 13, 2024, the USPTO issued guidance on inventorship for AI-assisted inventions. 89 FR 10043, 2024 WL 553179 (F.R.). This guidance indicated that AI-assisted inventions are not categorically unpatentable, and patent protection may be sought for inventions for which a natural person provided a “significant contribution” to the invention.
Finding a Way Forward Based on Prior Experiences with Machines Performing Mental Labor
Humankind has previously experienced the technological development of widely distributed personal machines that perform human mental labor, and the lessons learned then are instructive now with respect to AI. The author was in eighth grade when the first four-function handheld electronic calculators became commercially available. Before then, arithmetic calculations were performed manually (mentally, on paper, or using slide rules) or on adding machines (or on abacuses in China). The same kinds of issues were discussed – whether students should be allowed to use calculators, whether calculators were dependable, whether calculators would make people “dumber,” and so on. The principle adopted then was to perform the desired calculation mentally first, to gain an approximate expectation of the correct value, and then to have the calculator perform the calculation, to see whether the result was within the range of the expected correct value. This was possible because slide rules had been in use for decades, so approximating multiplication, division, squares, square roots, cubes, and cube roots was easy.
The author believes that, in applications where truth and objective facts matter, a similar principle should be used with respect to AI: gain an approximate expectation of the correct result, and then check whether the AI provides an answer within the range of that result. However, because there is no analog to the slide rule in the practice of law, an attorney using AI must already know what the approximately correct result should be, and then evaluate the result provided by the AI. For the practice of law and other applications where truth and facts matter, unsupervised use of AI, or use of AI by an attorney who does not already know the approximately correct result, may lead to disaster, because of the inability to detect when the AI cites non-existent references or drafts provisions that are not legally enforceable.
Thus, AI can be used by experienced practitioners to create first drafts of documents with consequent gains in efficiency, but the work product of AI cannot be used unquestioningly.
Most importantly, attorneys must keep in mind that generative AI only provides responses based on the data on which it was trained. In many legal situations, the best response may not be within the scope of the training data. In the practice of law, this limitation is a major shortcoming whenever the best solution for a client lies beyond the scope of the training data.