How Does ChatGPT Actually Work? An ML Engineer Explains

Calin Cretu
Machine Learning Engineer
Components of ChatGPT architecture

ChatGPT has quickly become a go-to tool in the world of AI since its launch. And it’s easy to see why: ChatGPT can generate cohesive, grammatically correct written content based on prompts, translate text, write code, and perform countless useful tasks for marketers, developers, and data analysts.

Don’t feel like reading? We made a video that you can listen to or watch at your leisure.



In the first five days after its launch, over a million users had already used ChatGPT to answer questions on various topics. While its capabilities have been impressive, from writing song lyrics to simulating a Linux terminal, the inner workings of ChatGPT remain a mystery to many. However, understanding how ChatGPT works is important not just for satisfying our curiosity, but also for unlocking its full potential. By demystifying ChatGPT’s inner workings, we can appreciate its capabilities better and identify areas for improvement. So how does ChatGPT work, and how was it trained to achieve such exceptional performance? 

In this article, we’ll take a deep dive into the architecture of ChatGPT and explore the training process that made it possible. Using my years of experience as a machine learning engineer, I’ll break down the inner workings of ChatGPT in a way that is easy to understand, even for those who are new to AI. 

ChatGPT: How OpenAI’s Neural Language Model Works

ChatGPT is a language model created by OpenAI in 2022. Built on a neural network architecture, it's designed to process text and generate coherent responses across a wide range of inputs, including different natural languages, programming languages, and mathematical notation.

How do Neural Network Architectures Work?

Neural networks are composed of interconnected layers of nodes, called neurons, that process and transmit information. ChatGPT’s neural network takes in a string of text as input and generates a response as output. However, as with most AI models, neural networks are essentially complex mathematical functions that require numerical data as input. Therefore, the input text is first encoded into numerical data before being fed into the network.

How the different layers of a Neural network architecture work.

To achieve this, the input text is first split into tokens (words or pieces of words), and each token in ChatGPT's vocabulary is assigned a unique numeric ID. This produces a sequence of numbers that the network can process. With this encoding in place, ChatGPT can understand and respond to various inquiries with varying degrees of success, depending on its training.
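The encoding step can be sketched with a toy vocabulary. Real systems use subword tokenizers with vocabularies of tens of thousands of entries; the words and IDs below are invented purely for illustration.

```python
# Toy sketch of encoding text as numbers. This made-up vocabulary maps
# whole words to IDs; ChatGPT's actual tokenizer works on subword pieces.
vocab = {"the": 0, "cat": 1, "jumped": 2, "over": 3, "fence": 4}

def encode(text):
    """Map each known word to its numeric ID."""
    return [vocab[word] for word in text.lower().split()]

def decode(ids):
    """Map IDs back to words."""
    inverse = {i: w for w, i in vocab.items()}
    return " ".join(inverse[i] for i in ids)

print(encode("the cat jumped over the fence"))  # [0, 1, 2, 3, 0, 4]
```

The network only ever sees the ID sequence; the mapping back to text happens after it produces its output IDs.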


ChatGPT’s Language Model

ChatGPT generates its response one word at a time, with each new word depending on the previous ones. For example, when asked to complete the sentence “the cat jumped over the…”, there are multiple high-probability words that could follow:

ChatGPT prompt and the probability of each response.

Human speech is variable by nature. So to make the response more human, ChatGPT samples from this probability distribution over likely next words when generating the output. As a result, the model will not always predict the same word each time, adding more diversity and unpredictability to its responses.
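Here's a minimal sketch of that sampling step. The candidate words and their probabilities are invented for illustration; the real model produces a probability for every token in its vocabulary.

```python
import random

# Hypothetical next-word probabilities for the prompt
# "the cat jumped over the..." (numbers invented for illustration).
candidates = ["fence", "wall", "table", "moon"]
probs = [0.45, 0.30, 0.20, 0.05]

random.seed(0)  # fixed seed only so this demo is repeatable

# Sampling instead of always picking the single most probable word
# ("fence") is what makes the output vary from run to run.
for _ in range(3):
    print(random.choices(candidates, weights=probs, k=1)[0])
```

Production systems typically expose a "temperature" setting that sharpens or flattens this distribution before sampling, trading predictability for variety.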

A screenshot of Chat GPT suggesting a different response thanks to its language model.

Let’s dive deeper into ChatGPT’s architecture to learn more about what’s happening between the input and the output. 

Building Blocks of ChatGPT: The Transformer Model

ChatGPT runs on a Transformer architecture, which underlies its powerful generalization ability. Understanding this architecture is key to understanding ChatGPT as a whole. So, in this section, we’ll explore the self-attention mechanism used in Transformers and how it contributes to a better understanding of the input context.

Previously, we learned how ChatGPT represents its input and output. However, the intermediate steps are just as important. Inside the neural network, there are hidden layers comprising neurons, which perform mathematical operations on their inputs and pass the results to the next layer until the final output is produced.

Neurons are parametrized by numbers that represent weights and biases. They decide if the input signal received by the neurons should be decreased or amplified. During the learning process, the network adjusts the weights and biases of the connections between the neurons to minimize the difference between the network’s output and the desired output.

Think of a group of musicians playing together in an orchestra. Each musician represents a neuron in the neural network, and each instrument they play represents a weight or bias parameter. Just as each musician decides how loud or soft to play their instrument based on the musical score they’re following, each neuron decides whether to decrease or amplify the input signal it receives based on the weights and biases assigned to it.

Now imagine that the orchestra is learning to play a new piece of music. At first, the musicians may make mistakes and play off-key, just as the neural network may produce incorrect outputs. However, with practice and feedback from the conductor, the musicians gradually adjust their playing to minimize the errors and produce a more accurate rendition of the music. Similarly, during the learning process, the neural network adjusts the weights and biases of the connections between the neurons to minimize the difference between its output and the desired output, improving its accuracy over time.

By combining different layers, we can create more complex networks that can be stacked on top of each other, run in parallel, merged, and so on. These layers play a crucial role in the network’s ability to process and understand complex input data, such as language.
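The weighted-sum-plus-bias behavior of neurons and the stacking of layers described above can be sketched in a few lines. All the weights and inputs here are made-up numbers; a real network learns these values during training.

```python
import math

def neuron(inputs, weights, bias):
    """A single neuron: weighted sum of inputs plus a bias,
    squashed by an activation function (sigmoid here)."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))

def layer(inputs, weight_rows, biases):
    """A layer is simply several neurons reading the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

# Two stacked layers with invented weights: the output of the first
# layer becomes the input of the second, exactly as described above.
x = [0.5, -1.0]
hidden = layer(x, [[0.1, 0.4], [-0.3, 0.2]], [0.0, 0.1])
output = layer(hidden, [[0.7, -0.5]], [0.2])
print(output)
```

Training adjusts the `weights` and `bias` values so that, over many examples, the final output moves closer to the desired output, just like the orchestra refining its performance.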

Neural Network Layers: Feed Forward, Convolutional and Residual Network.

When designing a neural network, the sky’s the limit, but architectural decisions can greatly impact its performance. The chosen architecture can affect the network’s accuracy, training and inference speed, and overall size.

Since the first Transformer network was introduced in 2017, this architecture has gained immense popularity. Initially used in Natural Language Processing, it has more recently been applied to Computer Vision as well. Some of the most popular applications of Transformers include DALL-E 2, which can generate images based on text descriptions in natural language, GitHub Copilot, which provides real-time programming code suggestions, and ChatGPT.

At the core of the Transformer model lies a block called the Attention Mechanism, which enables the network to weigh the importance of different parts of the input when making predictions. This mechanism plays a critical role in the network’s ability to process complex input data and make accurate predictions.

To understand the Attention Mechanism, it’s useful to consider an analogy. Imagine you’re reviewing a textbook and using a highlighter to mark parts of the page that are particularly important and relevant. In this scenario, the highlighter is helping you more easily understand the overall context. 

An example of how the Attention Mechanism works, using the highlighter as an analogy.

Similarly, the Attention Mechanism in Transformers uses weights to highlight the most meaningful parts of the input, allowing the network to focus on what matters most for making accurate predictions. By acting as a cognitive filter, the Attention Mechanism helps the network to process and comprehend complex data by identifying and emphasizing the most relevant information.
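The core of this mechanism, scaled dot-product attention, fits in a short sketch. Each query vector scores every key vector, the scores become weights via a softmax, and the output is the weighted sum of the values; the weights play the role of the highlighter. The token vectors below are invented toy values, and real Transformers apply learned projections and multiple attention heads on top of this.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    exps = [math.exp(x - max(xs)) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors."""
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Score this query against every key, scaled by sqrt(dimension).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)  # the "highlighter": where to focus
        # Output = weighted sum of the value vectors.
        outputs.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
    return outputs

# Three toy 2-dimensional token vectors (numbers invented).
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
print(attention(tokens, tokens, tokens))
```

Because the weights come from a softmax, each output row is a blend of all the value vectors, with the most relevant tokens contributing the most.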

ChatGPT and InstructGPT

According to OpenAI, ChatGPT is very similar to their previously released model, InstructGPT. The architecture and training method are the same, but the two differ in their data collection and scope: ChatGPT is optimized for multi-turn conversation, while InstructGPT is trained to follow a single instruction in a prompt, such as answering a question or providing step-by-step guidance. To learn more about this, check out InstructGPT's extensive report.

ChatGPT’s Training Process Explained

Like InstructGPT, ChatGPT’s training process involves a machine learning technique called fine-tuning, which aims to improve the performance of a pre-trained model on a specific task. Pre-trained models are models that have been trained on a large amount of data, typically for a different task than the one they are being fine-tuned for. 

The pre-trained model used for ChatGPT was trained to predict the next word in a sentence based on the context of the previous words. The training dataset included a vast amount of text data from books, websites, and other sources. While this training was successful, it needed further refinement for the model to provide personalized and accurate outputs.
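The pre-training objective described above can be sketched as a simple calculation: the model assigns a probability to every word in its vocabulary, and the training loss is the negative log of the probability it gave to the word that actually came next. The vocabulary and probabilities below are invented for illustration.

```python
import math

# The model's hypothetical guess for the word after
# "the cat jumped over the" (numbers invented).
vocab = ["fence", "wall", "moon"]
predicted_probs = [0.7, 0.2, 0.1]
actual_next = "fence"

# Cross-entropy loss for this one prediction: the less probability
# the model put on the true next word, the larger the loss.
loss = -math.log(predicted_probs[vocab.index(actual_next)])
print(round(loss, 4))  # → 0.3567
```

Averaged over billions of text snippets, minimizing this loss is what teaches the model which continuations are plausible.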


The model’s capability to predict the next word accurately didn’t necessarily imply that it would generate useful and reliable responses in real-world scenarios. For example, suppose a user asks the model, “How do I treat my headache?” The model may be able to generate a response by completing the prompt with the most probable words based on its training, such as:

“Take some aspirin, drink water, rest, and avoid bright lights.”

While this response may seem appropriate based on the prompt, it may not be the right advice for the user. Depending on the cause and severity of the headache, taking aspirin or other pain relievers may not be the best treatment option. Also, some types of headaches may require medical attention.

Therefore, while the model was good at predicting the next word in a sentence, it still needed further refinement to understand the user’s specific situation and provide personalized, accurate, and safe advice. 

To improve ChatGPT’s ability to respond more accurately to user prompts, a three-step training process was employed, which involved human intervention. 

Three steps of the ChatGPT training process to fine-tune answers.

Step 1. The Supervised Fine-tuning Model 

In the first step, the model is trained using supervised learning. This is a type of machine learning where the model is trained to recognize patterns in data using labeled examples. In other words, the model is provided with both the input and the output it should learn to produce. In this case, human annotators wrote appropriate responses to a dataset of user prompts, and the Supervised Fine-tuning model was trained to mimic those responses. However, collecting human-written responses is costly and time-consuming, so this stage was kept relatively short.

Step 2. The Reward Model 

In the second step, the previously trained model generated multiple predictions for different user prompts, and human annotators ranked the predictions from the least to the most helpful. Using this data, the Reward Model was trained to predict how useful a response was to a given prompt.
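One common way to turn those human rankings into a training signal, used in the InstructGPT report, is a pairwise comparison loss: for each pair of responses where annotators preferred one over the other, the loss pushes the reward of the preferred response above the reward of the rejected one. The scalar reward scores below are invented; a real reward model computes them from the prompt and response text.

```python
import math

def pairwise_loss(reward_preferred, reward_rejected):
    """Pairwise ranking loss: the loss shrinks as the preferred
    response's reward pulls ahead of the rejected response's reward."""
    diff = reward_preferred - reward_rejected
    return -math.log(1 / (1 + math.exp(-diff)))

# Invented reward scores for two responses to the same prompt.
print(pairwise_loss(2.0, 0.5))  # small loss: ranking already correct
print(pairwise_loss(0.5, 2.0))  # large loss: ranking is backwards
```

Training on many such pairs teaches the Reward Model to assign higher scores to the kinds of responses humans rank as more helpful.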

Step 3. The Reinforcement Learning Process 

Finally, Reinforcement Learning is used to further train the Supervised Fine-tuning model, which acts as an agent that maximizes the reward from the Reward Model. It generates a response to a user prompt, which is then scored by the Reward Model, and the model updates its parameters to earn larger rewards on future predictions. This process is more scalable than the first step because it's easier and faster for an annotator to rank multiple outputs than to write a detailed response from scratch.

Note: Steps 2 and 3 can be repeated multiple times. Using the newly trained model from Step 3, a new reward model can be trained by repeating Step 2, which is fed again into Step 3, and so on. ChatGPT used the same architecture and training process as InstructGPT but with different data collection.

After the three-step training process, ChatGPT's responses became more sophisticated and effective in real-world scenarios. If a user now asks, "What is the best way to reduce stress?", the model can generate a response that takes the user's specific situation and needs into account. Here's the response ChatGPT gave to that exact question:

ChatGPT prompt example: What is the best way to reduce stress?

ChatGPT’s response shows that the model has the ability to understand the user’s needs and tailor its responses accordingly. By asking questions and seeking more information, the model can provide more accurate and helpful advice based on the user’s context. 

Final Thoughts: ChatGPT's Machine Learning Breakthroughs

ChatGPT is a remarkable achievement that showcases the impressive progress made in the field of AI research. 

Although ChatGPT is similar to InstructGPT, it represents a significant milestone in the development of virtual assistants capable of generating human-like responses. This breakthrough has enormous potential for professionals in various domains, including software development. Developers can leverage ChatGPT as a pair programming partner to generate code, documentation, tests, and even debug existing code.

One of the most exciting aspects of ChatGPT is the newly released ChatGPT API, which allows companies to take advantage of the capabilities of artificial intelligence without having to invest significant resources in developing their own models. This innovation has the potential to transform various industries and create new opportunities for innovation. Companies can now build on top of ChatGPT to develop new tools and services that leverage its powerful language processing capabilities.

Looking forward, ChatGPT’s potential applications are extensive, especially in the software development field. Its ability to assist in code generation, documentation, testing, and debugging is just the beginning. Overall, the tool’s impact on the AI industry is significant, opening doors for further innovation and competition. As the technology advances, we can expect to see even more impressive developments that leverage the power of AI to improve our lives and work.

Originally published on Apr 6, 2023. Last updated on Oct 23, 2023.
