How To Use (And Not Use) Large Language Models In Patent Application Drafting
It is hard to overstate the impact that large language models (LLMs) with chat-based interfaces, including ChatGPT and Bard, have had since their respective launches. These models are changing the high-tech, education, finance, and medical industries at a remarkable rate. They are one of the main reasons that the Writers Guild of America has been on strike since early May. In the legal sphere, there has been more than a little hand-wringing and consternation regarding the eventual replacement of lawyers with LLMs. Such predictions are premature, but over time it is likely that some legal functions can and will be delegated to LLMs. In other words, successful lawyers will need to integrate LLMs into their practices in a manner that allows them to increase efficiency.
One of the most common and important tasks carried out by patent attorneys is drafting patent applications. At first blush, it may be tempting to let LLMs help you write various sections of a patent application. Given the right set of prompts, both ChatGPT and Bard can write passable claims and describe technology in patent-appropriate language. In fact, Bard will write an entire patent application if prompted, though the result is typically much shorter and less detailed than the average application.
Several potential concerns arise with using an LLM to substantively draft a full patent application. The main issues include disclosure of the invention to the LLM and whether the LLM’s description is technically accurate and reasonably complete.
Here, LLM use is considered within the context of the ABA Model Rules of Professional Conduct. As a first example, ABA Model Rule 1.6(a) states that a lawyer “shall not reveal information relating to the representation of a client” without the client’s informed consent. In this context, an attorney should not disclose confidential client information to an LLM or any other third party.
Most LLMs are operated by private companies and remotely hosted as cloud-based services. The terms and conditions of your end-user agreement with the third party (especially in exchange for use of a free or beta-test phase LLM) will almost certainly state that the third party will collect and store your conversations with its LLM. Additionally, your stored data may be used to train future iterations of the LLM. This means that if you ask an LLM to draft claims, or specification text drawn to any aspect of the claimed invention, you may be disclosing this invention to a third party. Doing so may not only start the 12-month grace period for filing the application in the U.S., but it may also prevent an application from being filed in any country or region that requires absolute novelty.
Furthermore, under ABA Model Rule 1.1, a lawyer “shall provide competent representation to a client.” In other words, a lawyer must ensure that any LLM output utilized in a patent application is correct and true.
Current versions of LLMs have been known to “hallucinate” from time to time, generating output that is false. A notable example occurred recently when two New York lawyers submitted case law citations in a brief that were fabricated by ChatGPT. The lawyers’ firm suffered public embarrassment and was fined $5,000 by the judge.
Of similar concern is whether using an LLM as a shortcut constitutes representing the client to the best of your ability. Use of an LLM may result in a patent application of lower quality than the lawyer’s own manual work product. Furthermore, LLMs are frequently “out of date,” because they are trained on data that is one to two years old and may not be capable of taking into account recent developments in law, science, and technology.
Nonetheless, we believe that LLMs can be helpful in patent drafting when employed in a targeted fashion. An LLM can be used for supplementary research not unlike how patent attorneys currently utilize web searches, Wikipedia, and academic papers. But such use should be limited to generating text describing the non-innovative parts of the application such as boilerplate sections, discussions of the state of the art, and contextual information. For instance, if the invention is related to the coupling of two metal elements but not the type of coupler used, asking an LLM for a list of alternative metal couplers might be helpful. Likewise, if the invention relates to a software program that converts between data formats (e.g., from JSON to XML), an LLM can provide “before” and “after” examples (though one must be careful to ensure that these examples do not effectively disclose any aspect of the invention).
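To make the format-conversion scenario concrete, here is a minimal Python sketch of the kind of “before” and “after” pair described above. The sample data and the helper function are our own generic illustration, deliberately unrelated to any invention, and are not intended to represent any particular LLM’s output.

```python
import json
import xml.etree.ElementTree as ET

def json_to_xml(obj, tag="root"):
    """Recursively convert a parsed JSON value into an XML element.

    Dicts become nested elements keyed by field name, lists become
    repeated <item> elements, and scalars become element text.
    """
    elem = ET.Element(tag)
    if isinstance(obj, dict):
        for key, value in obj.items():
            elem.append(json_to_xml(value, key))
    elif isinstance(obj, list):
        for item in obj:
            elem.append(json_to_xml(item, "item"))
    else:
        elem.text = str(obj)
    return elem

# "Before": generic sample data with no connection to any client matter
before = '{"device": {"id": 42, "status": "active"}}'

# "After": the equivalent XML representation
after = ET.tostring(json_to_xml(json.loads(before)), encoding="unicode")
print(after)  # <root><device><id>42</id><status>active</status></device></root>
```

An example like this, built entirely from invented placeholder data, illustrates the conversion without disclosing anything about the actual software at issue.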
In other words, using an LLM in patent application drafting should be focused on generating short, specific units of text – sentences or paragraphs – relating to something other than the invention at hand. Confidential client information should never be used in an LLM prompt, and any text produced by an LLM should be verified by an individual with the expertise to correct any inaccuracies or errors. Do not cut and paste the output of an LLM into your work product without such verification.
We also recommend rewriting LLM output as a general rule. Current LLMs write better than the average person but fall far short of an experienced patent attorney. Rewording and reorganizing LLM output is often necessary anyway in order to integrate it into the rest of the patent application.
In sum, LLMs are not yet capable of replacing a competent patent attorney. However, these increasingly sophisticated chatbots represent useful tools for carrying out certain discrete and well-defined tasks in the practice of patent law. As technologies emerge and improve, lawyers need to adopt those that are helpful. Not unlike word processors, drawing software, spell checkers, and calculators, LLMs are bound to become part of a patent attorney’s toolbox.