What is ChatGPT?

ChatGPT is an artificial intelligence (AI) model developed by OpenAI. In particular, ChatGPT is a type of “language” model designed to respond with a natural-language reply when prompted with a text-based question. The “Chat” in ChatGPT refers to this question-and-answer design, in which ChatGPT behaves like a chatbot.

ChatGPT belongs to the field of AI known as natural language processing (NLP), in which computers are taught to understand and interpret the text of human languages, such as the English language. 

The “GPT” in ChatGPT refers to the type of AI model that ChatGPT is, i.e., a Generative Pre-trained Transformer. The “G” stands for “Generative” and refers to a category of AI models that generate new output based on a given input. For example, ChatGPT can create new text-based content, such as written sentences or programming code, based on a question or request from a user. More generally, a generative AI model can create new content in the form of text, images, audio, or the like.

The “P” in ChatGPT stands for “Pre-trained,” meaning that ChatGPT is an already-trained, ready-to-use model. This is why ChatGPT seems so knowledgeable about a vast array of topics and subject areas. In particular, ChatGPT was trained on large text datasets, including approximately 570 GB of data obtained from books, web texts (e.g., Wikipedia), articles, and other information publicly available on the Internet. According to OpenAI, over 300 billion words were used to train ChatGPT.

ChatGPT was trained in early 2022, so it is aware of data, news, and information only up to that time. Said another way, it is “unaware” of news or events post-early 2022 because such news and events were not available at the time when ChatGPT was created. Thus, ChatGPT is “Pre-trained” and stuck in the past (i.e., early 2022). Nonetheless, it can take the inputs a user provides into account within a conversation, given that it is also a transformer model that tracks context (as discussed below).

The “T” in ChatGPT stands for “Transformer,” meaning that ChatGPT is also a type of transformer model. Generally, a transformer model is a neural network (a type of AI model) that learns context and meaning by tracking relationships in sequential data. For example, the transformer in ChatGPT tracks relationships between the words in a sentence during a chat session with the user, which helps ChatGPT appear to “understand” a user’s questions and ultimately form and output words and sentences in a way that seems natural to human users.
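
For readers who want a more concrete picture, below is a minimal Python sketch (using only NumPy) of “scaled dot-product attention,” the relationship-tracking computation at the heart of transformer models. The toy data and variable names are illustrative assumptions on my part, not ChatGPT’s actual code or parameters.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each word's query (Q) is scored against every word's key (K) to
    measure how strongly the words relate; those scores then weight a
    sum over the words' values (V), yielding context-aware representations."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # pairwise relationship scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

# Toy example: a 3-word "sentence," each word represented as a 4-dimensional vector.
rng = np.random.default_rng(0)
words = rng.normal(size=(3, 4))
print(scaled_dot_product_attention(words, words, words))
```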

How was ChatGPT trained?

ChatGPT was trained on a Microsoft Azure AI supercomputing infrastructure using both supervised machine learning and reinforcement learning. Generally, supervised learning comprises teaching a machine the associations between input data and desired outputs. This involves matching input data (e.g., “feature data”) to desired output data (e.g., “label data”) in a dataset. The labeling is typically performed manually by a human, which is why this approach is referred to as “supervised” learning. Altogether, this process is commonly referred to as “training” a machine learning model.
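
As a simple illustration of supervised learning (and emphatically not ChatGPT’s actual training code), the hypothetical Python sketch below matches feature data (example prompts) to human-supplied label data and “trains” a small model on those pairs:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Feature data: example user prompts. Label data: the category a human
# assigned to each prompt -- the human matching is the "supervision."
prompts = ["tell me a joke", "what is the capital of France",
           "make me laugh", "who wrote Hamlet"]
labels = ["humor", "factual", "humor", "factual"]

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(prompts, labels)  # "training": learn the prompt-to-label associations

print(model.predict(["tell me something funny"]))  # likely ['humor']
```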

Reinforcement learning, on the other hand, seeks to train a machine learning model over many iterations of pass-fail, trial-and-error attempts, where the model receives a “reward” (a positive value) when it produces a desirable result. For example, an output sentence or phrase that appears “natural” to a user can receive a positive value, but a nonsensical output will not.
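
The toy, hypothetical Python sketch below illustrates the trial-and-error reward idea in miniature (it is not OpenAI’s method): candidate replies are sampled over many iterations, and replies that earn a reward become more likely to be chosen again. A hard-coded rule stands in here for the human judgment of “naturalness.”

```python
import random

candidates = ["Otters hold hands while sleeping.",
              "Otter otter blue fish go."]

def reward(reply):
    # Stand-in scorer: in real RLHF the reward derives from human feedback.
    return 1.0 if "otter otter" not in reply.lower() else 0.0

preferences = {reply: 1.0 for reply in candidates}  # equal starting preference

for _ in range(1000):  # many pass-fail, trial-and-error attempts
    reply = random.choices(candidates,
                           weights=[preferences[c] for c in candidates])[0]
    preferences[reply] += reward(reply)  # rewarded replies grow more likely

print(max(preferences, key=preferences.get))  # the "natural" sentence wins
```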

OpenAI informs us that ChatGPT was trained using “Reinforcement Learning from Human Feedback (RLHF),” which is a hybrid approach that uses both supervised and reinforcement learning. OpenAI provides a chart demonstrating, in three steps, how ChatGPT was developed. In the example use case, ChatGPT is trained and used to write “a story about otters.”

According to OpenAI, for Step 1, “human AI trainers provided conversations in which they played both sides—the user and an AI assistant.” The human trainers were provided with model-written suggestions to help them compose their responses. These questions and responses were then used as a supervised learning dataset having a dialogue format.

For reinforcement learning, for Step 2, OpenAI created a reward model: AI trainers held conversations with the ChatGPT chatbot during training, the chatbot returned several responses, and the AI trainers ranked those responses according to quality.

For Step 3, using the reward model, OpenAI fine-tuned its model over several iterations using Proximal Policy Optimization (PPO), OpenAI’s reinforcement learning algorithm. This allowed the model to generate seemingly “natural,” or at least satisfactory, output, e.g., a “story about otters” in the hypothetical training example.
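
To make Step 2 slightly more concrete: reward models of this kind are commonly trained with a pairwise ranking loss, which is small when the model scores the human-preferred response above the rejected one. The NumPy sketch below shows that standard formulation from the published RLHF literature; OpenAI has not released ChatGPT’s actual training code, so treat this as illustrative only.

```python
import numpy as np

def pairwise_ranking_loss(score_preferred, score_rejected):
    """loss = -log(sigmoid(score_preferred - score_rejected)): near zero when
    the reward model agrees with the human ranking, large when it disagrees."""
    return -np.log(1.0 / (1.0 + np.exp(-(score_preferred - score_rejected))))

# A trainer ranked response A above response B.
print(pairwise_ranking_loss(2.0, 1.2))  # ~0.37: model agrees with the human
print(pairwise_ranking_loss(1.2, 2.0))  # ~1.17: model disagrees, loss is higher
```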

Limitations of ChatGPT

While the output of ChatGPT can be impressive and seem “human” or almost-“human,” it is important to remember that ChatGPT does not “understand” or otherwise comprehend a question or dialogue in the sense a human does. Rather, ChatGPT is limited by how it was trained: it generates output by selecting and arranging the words and phrases with the highest mathematical probability, regardless of any true understanding. Thus, while it may seem to a human operator that a conversation is being conducted with a fully cognizant AI, that is not the case.

For example, OpenAI answers the question, “[w]hy does the AI seem so real and lifelike” with the following response: “[t]hese models were trained on vast amounts of data from the internet written by humans, including conversations, so the responses it provides may sound human-like.” OpenAI goes on to state that “[i]t is important to keep in mind that this is a direct result of the system’s design (i.e., maximizing the similarity between outputs and the dataset the models were trained on) and that such outputs may be inaccurate, untruthful, and otherwise misleading at times.” OpenAI, ChatGPT General FAQ.

Indeed, as OpenAI identifies, ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. This issue arises because there was no source of “truth” in the way ChatGPT was trained. In particular, ChatGPT was trained on data from the internet, and there is currently no source of truth (i.e., no ground truth) for ChatGPT’s reinforcement learning regimen based on that internet data. So there is an element of caveat emptor with ChatGPT’s responses.

On this point, OpenAI answers the question, “Can I trust that the AI is telling me the truth?” with the following response: “ChatGPT is not connected to the internet, and it can occasionally produce incorrect answers. It has limited knowledge of world and events after 2021 and may also occasionally produce harmful instructions or biased content.” OpenAI further states that “We’d recommend checking whether responses from the model are accurate or not. If you find an answer is incorrect, please provide that feedback by using the ‘Thumbs Down’ button.” OpenAI, ChatGPT General FAQ.

This problem stems from “hallucination,” a fundamental problem inherent in large language models, the class of models to which ChatGPT belongs. In the context of a large language model, hallucination refers to mistakes in the generated text that are semantically or syntactically plausible but are in fact incorrect or nonsensical. In short, a user cannot simply trust what the machine is explaining or outputting.

Perhaps Yann LeCun, a pioneer in deep learning and the self-supervised learning used in large language models, described the inherent problem of hallucination best when he explained that “[l]arge language models have no idea of the underlying reality that language describes.” IEEE Spectrum, “Hallucinations Could Blunt ChatGPT’s Success.” He further clarified that large language models “generate text that sounds fine, grammatically, semantically, but they don’t really have some sort of objective other than just satisfying statistical consistency with the prompt.” Id.

Further, ChatGPT is limited by its reliance on the humans who trained it in the first place. That is, during the supervised learning phase of training, ChatGPT may not have learned an ideal answer because the specific people selected to train it chose responses based on what they thought or knew was “right” at the time, and such responses may have been incorrect, or at least not ideal, even then.

Other limitations include inconsistent answers based on minor tweaks to a user’s question, i.e., slightly rephrasing an input can cause ChatGPT to claim not to know an answer in one case yet answer correctly in another.

Still further, ChatGPT can provide a nonsensical answer (or even a wrong answer from the perspective of a user) to a query that is ambiguous. That is, OpenAI admits that ChatGPT usually guesses what a user intended if a query is ambiguous, rather than asking clarifying questions, which can lead to an incorrect response (e.g., answering a different intended question or providing information about a different topic). This is especially dangerous for users unfamiliar with a given topic, who may assume that ChatGPT is providing a correct and accurate response when the opposite may be the case. Again, caveat emptor.

OpenAI’s CEO, Sam Altman, readily admitted the inherent danger of ChatGPT when he acknowledged on Twitter that “[ChatGPT] does know a lot, but the danger is that it is confident and wrong a significant fraction of the time.” Wall Street Journal, “The Backstory of ChatGPT Creator OpenAI.” Similarly, Mr. Altman also acknowledged that “ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness,” and added that it is “a mistake to be relying on it for anything important right now.” Wall Street Journal, “What Is ChatGPT? What to Know About the AI Chatbot.”

A newer version of ChatGPT’s underlying model (named GPT-4) has been released, which reportedly reduces such errors. OpenAI indicates, however, that GPT-4 has the same limitations as earlier GPT models (e.g., ChatGPT), albeit perhaps to a lesser extent. This nonetheless includes the fundamental limitation discussed above, i.e., a lack of reliability in its output (“hallucinations”):

Despite its capabilities, GPT-4 has similar limitations as earlier GPT models. Most importantly, it still is not fully reliable (it “hallucinates” facts and makes reasoning errors). Great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional context, or avoiding high-stakes uses altogether) matching the needs of a specific use-case. Id.

In any event, GPT-4 is an improvement over ChatGPT. As OpenAI says: “[w]hile still a real issue, GPT-4 significantly reduces hallucinations relative to previous models (which have themselves been improving with each iteration). GPT-4 scores 40% higher than our latest GPT-3.5 on our internal adversarial factuality evaluations.” Id. OpenAI provides a chart in its GPT-4 announcement showing the improvement in accuracy (reduced “hallucinations”) for GPT-4 compared to prior generations, including ChatGPT.

Because ChatGPT was trained on information found on the internet, we can expect that it will “know” about internet-centric assets, such as websites. 

In one example, I asked ChatGPT to describe my law firm, Marshall, Gerstein & Borun LLP. In particular, I entered the following into ChatGPT’s prompt: “What is Marshall Gerstein?”

ChatGPT responded with the following, which for the most part is accurate, but gets details such as the “founding date” wrong:

Marshall Gerstein is a law firm based in Chicago, Illinois, that specializes in intellectual property law. The firm was founded in 1999 [sic] and has since grown to become one of the largest IP law firms in the Midwest region of the United States. Marshall Gerstein's practice areas include patents, trademarks, copyrights, trade secrets, and related litigation, as well as IP licensing, counseling, and transactions.

The firm's attorneys have technical backgrounds in a wide range of industries, including life sciences, biotechnology, chemistry, pharmaceuticals, medical devices, electronics, software, and mechanical engineering. Marshall Gerstein represents a diverse range of clients, including start-ups, universities, research institutions, and established corporations.

Marshall Gerstein is known for its commitment to providing personalized and innovative legal solutions to its clients. The firm has been recognized as a leading IP law firm by numerous legal publications, such as Chambers USA, Managing Intellectual Property, and Intellectual Property Today.

Most of this information can be found on my firm’s website, so it is not surprising that ChatGPT, being a natural language processing model that was trained on data from the internet, would “know” about and could organize the information about “Marshall Gerstein” into a narrative, such as the one produced above. 

However, inaccuracies can be found. Namely, it incorrectly states that the law firm was “founded in 1999.” A simple navigation to the “Firm History” page confirms, however, that our firm was, in fact, founded in 1955. 

Determining the information/data source of ChatGPT’s response

If you are curious about what information or data ChatGPT based a particular response on, you can simply ask ChatGPT for its source of information. For example, a follow-up question can be something like: “what is this based on?”

For example, when I asked this question following ChatGPT’s output regarding Marshall Gerstein, ChatGPT yielded the following further response:

My response is based on publicly available information about Marshall Gerstein. Marshall Gerstein is a well-known law firm that has been providing intellectual property legal services for many years. The information I provided is based on the firm's website, legal directories, and other reliable sources of information available online.
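
Incidentally, the same follow-up can be posed programmatically. Below is a hedged Python sketch using OpenAI’s client library, in which the whole conversation, including the follow-up question, is passed back to the API as a list of messages. The model name is an assumption, and the replies will of course vary:

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# First question, then the follow-up asking for the response's basis.
messages = [{"role": "user", "content": "What is Marshall Gerstein?"}]
first = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)

messages.append({"role": "assistant", "content": first.choices[0].message.content})
messages.append({"role": "user", "content": "What is this based on?"})

follow_up = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
print(follow_up.choices[0].message.content)
```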

ChatGPT and Intellectual Property (IP) Related Topics

Copyrights: Authorship

Given that ChatGPT is a “generative” AI model, it can produce new content, e.g., text, in the form of an answer it provides to a user’s input.  

Under current laws, including those of the U.S., an artificial intelligence model (such as ChatGPT) cannot be considered an “author” for purposes of copyright protection. For example, the U.S. Copyright Office has published “Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence,” which provides that “[i]n the Office’s view, it is well-established that copyright can protect only material that is the product of human creativity. Most fundamentally, the term ‘author,’ which is used in both the Constitution and the Copyright Act, excludes non-humans.” 88 Fed. Reg. 51 at 16191.

Thus, it is unclear whether any of the material produced by ChatGPT (or other such generative AI models) is copyrightable, at least under U.S. law for the time being.

However, it should be noted that the Copyright Office, in its publication, noted that “[i]ndividuals who use AI technology in creating a work may claim copyright protection for their own contributions to that work.” Id. For example, in one case, an author (Kristina Kashtanova) sought to copyright images generated by an AI model named Midjourney. Like ChatGPT, Midjourney can produce new content, but it produces images instead of text. Ms. Kashtanova used Midjourney to produce images for a comic book named “Zarya of the Dawn.” Ms. Kashtanova selected and arranged the images, but the images themselves were produced wholly by Midjourney. The Copyright Office found that while Ms. Kashtanova’s selection and arrangement of the images could be copyrighted (given that her selection and arrangement amounted to a “modicum of creativity” and that she was, of course, human, both as required by U.S. copyright law), the images of the comic book themselves could not be copyrighted because they were produced by a non-human. The Copyright Office’s decision was based, at least in part, on a Supreme Court case about camera technology from 1884. For those interested in this fascinating case, please see PatentNext’s article The Curious Case of Burrow-Giles Lithographic (an 1884 U.S. Supreme Court decision involving “new” camera technology), and how it could help Shape Today’s Thinking on Artificial Intelligence (AI) Inventorship.

Further, it should be noted that the Copyright Office’s guidance on AI-generated works describes a “duty” to inform the Office when copyright is sought for a work containing content generated by a generative AI (e.g., ChatGPT). That is, the guidance states that “applicants have a duty to disclose the inclusion of AI-generated content in a work submitted for registration and to provide a brief explanation of the human author’s contributions to the work.”

Copyrights: Ownership and Use of ChatGPT output

OpenAI does not address the question of copyright authorship. Instead, OpenAI treats the matter as one of ownership via contract law. In particular, OpenAI states that it “will not claim copyright over content generated by the [ChatGPT] API for you or your end users.” See OpenAI, FAQ (Copyright).

Further, OpenAI allows the commercial use of material output by ChatGPT. In particular, in response to the question: “Can I use the output from ChatGPT for commercial uses?”, OpenAI answers that: “Subject to the Content Policy and Terms, you own the output you create with ChatGPT, including the right to reprint, sell, and merchandise – regardless of whether the output was generated through a free or paid plan.” OpenAI, ChatGPT General FAQ.

OpenAI’s Terms of Use further assign users all “right, title and interest” in ChatGPT’s output. OpenAI clarifies that “[t]his means you can use Content for any purpose, including commercial purposes such as sale or publication if you comply with these Terms.”

Finally, OpenAI’s Content Policy includes terms to which users must “adhere.” This requires users to, for example, not “attempt to create, upload, or share images that are not G-rated, or that could cause harm,” among other similar limitations. 

Patents: Inventorship

Can an AI, such as ChatGPT, invent? No, according to various patent offices and patent laws around the world. Resistance to recognizing an AI as an inventor primarily stems from the fact that the written text of existing patent laws refers to human inventors, e.g., “individuals” or “persons,” which leaves little or no room for interpreting a non-human AI as an inventor. For example, under U.S. patent law, the term “inventor” is defined as an “individual” or “individuals” who “invented or discovered the subject matter of the invention.” 35 U.S. Code § 100(f). See also PatentNext’s article Can an Artificial Intelligence (AI) be an Inventor?

In August 2022, the U.S. Court of Appeals for the Federal Circuit affirmed a USPTO decision that U.S. Patent law requires at least one human inventor. See Thaler v. Vidal, Case No. 2021-2347 (Fed. Cir. Aug. 5, 2022). The Federal Circuit did, however, suggest that an AI invention with human involvement may be patentable, e.g., the court stating that: “Moreover, we are not confronted today with the question of whether inventions made by human beings with the assistance of AI are eligible for patent protection.”

Accordingly, at least for the time being, one can assume that a patent application listing ChatGPT as the sole inventor would be denied (as was the case in Thaler). However, in a situation where a human contributed to the conception of a claim while using ChatGPT, perhaps the USPTO would allow joint inventorship (i.e., a human inventor together with ChatGPT as an inventor), per the Federal Circuit’s hint that it had not yet addressed that issue, and because the human-inventor requirement would be met, at least in part, under existing statutory law.

Patents and Trade Secrets: Possible Loss of Rights

ChatGPT poses a risk of destroying or precluding certain IP rights, namely trade secrets and patents.

In particular, for information to be legally considered a trade secret in the United States, a company must make a reasonable effort to conceal the information from the public.

With respect to a patent, a claimed invention can be invalid if it was made “available to the public” more than one year before the effective filing date of the claimed invention. See 35 U.S. Code § 102.

OpenAI makes clear that any questions, statements, or other information input by a user into ChatGPT can be reviewed or used by OpenAI. Thus, such input can be considered a public disclosure that, if significant, can waive trade secret protection and/or preclude patent protection.

For example, OpenAI’s Terms of Use state that a user’s input (e.g., a question, statement, or other text a user provides to ChatGPT) may be used “to help develop and improve our Services.” OpenAI’s Terms of Use specify that this applies to Non-API Content, which is presumably text entered into ChatGPT’s text box (as opposed to API Content, which is submitted programmatically via an API key).

OpenAI’s FAQ also suggests that such input may be used by OpenAI. For example, in response to the question “Who can view my conversations?”, OpenAI answers: “As part of our commitment to safe and responsible AI, we review conversations to improve our systems and to ensure the content complies with our policies and safety requirements.” OpenAI, ChatGPT General FAQ.

As another example, in response to the question, “Will you use my conversations for training?,” OpenAI answers: “Yes. Your conversations may be reviewed by our AI trainers to improve our systems.” Id.

Accordingly, input provided by a user to ChatGPT ultimately constitutes disclosing that input to a third party (namely, OpenAI). Without a confidentiality agreement in place, such disclosure would likely be considered public.

A way to combat this would be to obtain a private instance of ChatGPT that could be securely installed on a private network inside a company, such that any queries from employees would not be sent publicly over the Internet or to a third party (e.g., OpenAI).
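
As a rough illustration of that idea, many self-hosted model servers expose an OpenAI-compatible endpoint, so the same client code could be pointed at an internal company URL instead of OpenAI’s servers. The URL, key, and model name below are hypothetical placeholders:

```python
from openai import OpenAI

# Hypothetical private deployment: queries stay on the company network
# rather than being disclosed to a third party such as OpenAI.
client = OpenAI(base_url="https://llm.internal.example.com/v1",
                api_key="internal-placeholder-key")

reply = client.chat.completions.create(
    model="internal-chat-model",  # hypothetical self-hosted model name
    messages=[{"role": "user", "content": "Summarize this invention disclosure."}],
)
print(reply.choices[0].message.content)
```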

Patents: Drafting

In view of the above, patent drafters should be careful as to what information (if any) is provided to ChatGPT. On the other hand, a savvy patent drafter could use ChatGPT to prepare certain portions of a patent application that deal with the prior art. In particular, a patent drafter could use ChatGPT to formulate a first draft of, for example, a background section of the patent application, which could include a description of the prior art. Of course, such information should be checked for accuracy in view of ChatGPT’s tendency to get some facts wrong or otherwise “hallucinate,” as OpenAI puts it.

As an example, a simple question can pose an easy enough task for ChatGPT to get right. For example, when presented with the request “Prepare an overview of blockchain technology,” ChatGPT prepared the following summary, listing some key features of blockchain technology, which a patent drafter could review and edit for possible inclusion in a background section of a blockchain-related patent:

Blockchain technology is a digital ledger that enables secure and transparent record-keeping of transactions or data. It was originally invented in 2008 for the digital currency Bitcoin, but its potential applications extend far beyond just finance. Here's an overview of the key features of blockchain technology:

1. Decentralization: In traditional systems, data is stored on a central server or database, but in blockchain, the data is stored across a network of computers, which makes it decentralized. This means that there is no single point of control or failure, and the network is more resilient to attacks.

2. Immutable and Tamper-proof: Once a transaction or data is recorded on the blockchain, it cannot be altered or deleted without consensus from the network. This makes the blockchain immutable and tamper-proof, which ensures the integrity of the data and transactions recorded on it.

3. Distributed Ledger: The blockchain is a distributed ledger, which means that every node in the network has a copy of the same ledger. Each node validates transactions and adds new blocks to the chain, creating a transparent and secure record of all transactions.

4. Smart Contracts: Smart contracts are self-executing contracts that are coded on the blockchain. They are used to automate the execution of agreements and can be used in a wide range of industries such as finance, real estate, and supply chain management.

5. Transparency and Privacy: While the blockchain is transparent in the sense that all transactions are visible to everyone on the network, it is also private because users can control who has access to their data. This is achieved through the use of public and private keys that allow users to access and control their data.

6. Cryptography: The blockchain uses cryptographic algorithms to secure the data on the network. This includes hashing, which is used to create a unique fingerprint for each block, and digital signatures, which are used to authenticate transactions.

Overall, blockchain technology has the potential to revolutionize many industries by providing secure and transparent record-keeping, enabling new forms of trust, and reducing the need for intermediaries.

****

A few points on this output. First, such output is best vetted by someone already familiar with the subject area. For example, I am very familiar with blockchain technology, so the above description seems adequate to me. For practitioners already familiar with blockchain technology, the above description could provide a rough draft that could be further edited for an intended use case, such as possible inclusion in a background section of a patent application. Second, and more generally, such output should be taken with a grain of salt. For example, I would not recommend that a colleague unfamiliar with blockchain technology simply trust this output. At best, it is a starting point, i.e., a pre-written summary that can be cross-checked and verified with other sources. Nonetheless, in both cases, the output can be useful and improve writing efficiency, but it should be treated with caution, as with anything found on the internet, which is, after all, the information upon which ChatGPT was trained.
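
As a quick vetting aid for item 6 of the output above (hashing as a block “fingerprint”), the short Python sketch below, which is my own illustration rather than part of ChatGPT’s output, shows why tampering is detectable: each block records the previous block’s hash, so altering any earlier block breaks the chain.

```python
import hashlib
import json

def block_hash(block):
    """SHA-256 fingerprint of a block's contents (order-stable JSON)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

# A two-block toy chain: each block records the previous block's hash.
genesis = {"index": 0, "data": "genesis", "prev_hash": "0" * 64}
block1 = {"index": 1, "data": "Alice pays Bob 5", "prev_hash": block_hash(genesis)}

# Tampering with the genesis block changes its fingerprint, which no
# longer matches block1's recorded prev_hash -- the edit is detectable.
genesis["data"] = "genesis (tampered)"
assert block_hash(genesis) != block1["prev_hash"]
print("tampering detected")
```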

Looking forward: GPT-4

GPT-4 is impressive in its ability to generate text, including programming code such as HTML (Hypertext Markup Language), which is used in virtually all Internet web pages. Additionally, GPT-4 adds functionality for reading and analyzing image content.

In a recent demonstration, Greg Brockman (president and co-founder of OpenAI) uploaded an image of a hand-drawn mockup of a website, where the mockup included placeholders for a set of jokes. Greg prompted GPT-4 to write HTML/JavaScript code to convert the mockup into a colorful website that inserted real jokes into the placeholders.   

Within seconds, the model outputted the requested code, which was used to render the website.  Indeed, the result included a working JavaScript website that displayed real jokes along with selectable buttons to reveal the punchlines.      
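
For readers curious what such a request looks like programmatically, OpenAI’s chat API accepts image inputs for vision-capable models. The Python sketch below is an assumption-laden recreation, not the demo’s actual code: the model name, image URL, and availability of image input on a given account are all assumptions.

```python
from openai import OpenAI

client = OpenAI()

# Hypothetical recreation of the demo: send a photo of a hand-drawn
# website mockup and ask for working HTML/JavaScript.
response = client.chat.completions.create(
    model="gpt-4o",  # a vision-capable model (the demo used GPT-4 itself)
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Write HTML/JavaScript for the website in this mockup, "
                     "inserting real jokes with buttons that reveal the punchlines."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/mockup.jpg"}},  # placeholder
        ],
    }],
)
print(response.choices[0].message.content)  # the generated HTML/JS
```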

Additionally, OpenAI is rolling out plugins for ChatGPT to unlock a variety of use cases. For example, the Instacart plugin enables users to order from local grocery stores; the KAYAK plugin enables users to search for flights, stays, and rental cars, as well as receive recommendations for places to travel within a specified budget; and the OpenTable plugin provides restaurant recommendations with a direct link to book a reservation.

****

Subscribe to get updates to this post or to receive future posts from PatentNext. Start a discussion or reach out to the authors, Matthew Carey and Ryan Phelan, at mcarey@marshallip.com (Tel: 312-474-9581) and rphelan@marshallip.com (Tel: 312-474-6607). Connect with or follow Matt and Ryan on LinkedIn.

PatentNext is moderated by Ryan N. Phelan, a registered U.S. Patent Attorney and Software and Computer Engineer. Ryan previously worked in the IT industry as a consultant at Accenture, where he regularly consulted Fortune 500 companies in software and computing technologies. Ryan is featured in the IAM Strategy 300 & 300 Global Leaders guides, and was selected for inclusion in The Best Lawyers in America© list in the practice area of Patent Law. Ryan is also an adjunct professor at Northwestern University’s Pritzker School of Law where he teaches coursework on Patenting Software Inventions. Learn more about Ryan.

Matthew R. Carey is a registered patent attorney who counsels clients in the electrical and software engineering industry as it relates to patenting strategies as well as all stages of patent prosecution. His engineering background provides him an understanding of clients’ technologies and enables him to effectively and efficiently provide a range of patent procurement services. Mr. Carey also leverages his experience to assist on intellectual property litigation as well as develop strategies for acquiring and protecting intellectual property. Read full bio here.