{"id":4188,"date":"2024-02-08T09:37:37","date_gmt":"2024-02-08T09:37:37","guid":{"rendered":"https:\/\/cloudxlab.com\/blog\/?p=4188"},"modified":"2025-11-11T20:00:51","modified_gmt":"2025-11-11T20:00:51","slug":"building-your-own-chatgpt-from-scratch","status":"publish","type":"post","link":"https:\/\/cloudxlab.com\/blog\/building-your-own-chatgpt-from-scratch\/","title":{"rendered":"How to build\/code ChatGPT from scratch?"},"content":{"rendered":"\n<p>In a world where technology constantly pushes the boundaries of human imagination, one phenomenon stands out: <strong>ChatGPT<\/strong>. You&#8217;ve probably experienced its magic, admired how it can chat meaningfully, and maybe even wondered how it all works inside. ChatGPT is more than just a program; it&#8217;s a gateway to the realms of artificial intelligence, showcasing the amazing progress we&#8217;ve made in machine learning.<\/p>\n\n\n\n<p>At its core, ChatGPT is built on a technology called <strong>Generative Pre-trained Transformer<\/strong> (GPT). But what does that really mean? Let&#8217;s understand in this blog.<\/p>\n\n\n\n<p>In this blog, we&#8217;ll explore the fundamentals of machine learning, including how machines generate words. We&#8217;ll delve into the <strong>transformer architecture<\/strong> and its <strong>attention<\/strong> mechanisms. Then, we&#8217;ll demystify GPT and its role in AI. Finally, we&#8217;ll embark on coding our own GPT from scratch, bridging theory and practice in artificial intelligence.<\/p>\n\n\n\n<h2>How does Machine learn?<\/h2>\n\n\n\n<p>Imagine a network of interconnected knobs\u2014this is a <strong>neural network<\/strong>, inspired by our own brains. In this network, information flows through nodes, just like thoughts in our minds. Each node processes information and passes it along to the next, making decisions as it goes.<\/p>\n\n\n\n<p><strong>Each knob represents a neuron<\/strong>, a fundamental unit of processing. 
As information flows through this network, these neurons spring to action, analyzing, interpreting, and transmitting data. It&#8217;s similar to how thoughts travel through your mind\u2014constantly interacting and influencing one another to form a coherent understanding of the world around you. In a neural network, these interactions pave the way for learning, adaptation, and intelligent decision-making, mirroring the complex dynamics of the human mind in the digital realm.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img width=\"692\" height=\"285\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2024\/02\/Screenshot-2024-02-01-at-11.31.26-AM-1.png\" alt=\"\" class=\"wp-image-4195\"\/><\/figure>\n\n\n\n<!--more-->\n\n\n\n<p>During the training phase of a neural network, we essentially guide it to understand patterns in data. We start by showing the network examples: we say, &#8220;Here&#8217;s the input, and here&#8217;s what we expect the output to be.&#8221; Then comes the fascinating part: we adjust\/<strong>tweak these knobs<\/strong>, so that the network gets better at predicting the correct output for a given input.<\/p>\n\n\n\n<p>As we tweak these knobs, our goal is simple: we want the network to get closer and closer to producing outputs that match our expectations. It&#8217;s like fine-tuning an instrument to play the perfect melody. Gradually, through this process, the network starts giving outputs that align more closely with what we anticipate. This adjustment process, known as <strong>backpropagation<\/strong>, involves fine-tuning the connections to align the network&#8217;s predictions with the provided input-output pairs. 
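The tweak-the-knobs loop described above can be sketched in a few lines of Python. This is a toy with a single knob `w` trained by plain gradient descent on squared error; all numbers are made up for illustration (a real network has millions of knobs adjusted the same way):

```python
# A network with a single "knob" w: output = w * x.
# Training nudges w so the outputs match the expected outputs.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, expected output)

w = 0.0      # initial knob position
lr = 0.05    # learning rate: how strongly each tweak moves the knob

for _ in range(200):                 # show the examples many times
    for x, y in examples:
        pred = w * x                 # network's current guess
        grad = 2 * (pred - y) * x    # gradient of (pred - y)**2 w.r.t. w
        w -= lr * grad               # tweak the knob against the gradient
```

After training, `w` settles near 2.0, since every example obeys y = 2x.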
For understanding backpropagation better, you can refer to the following blog.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-cloudxlab-blog wp-block-embed-cloudxlab-blog\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\"><a href=\"https:\/\/cloudxlab.com\/blog\/backpropagation-from-scratch\/\">Coding Backpropagation and Gradient Descent From Scratch without using any libraries<\/a><\/blockquote><script type='text\/javascript'><!--\/\/--><![CDATA[\/\/><!--\t\t\/*! This file is auto-generated *\/\t\t!function(c,d){\"use strict\";var e=!1,n=!1;if(d.querySelector)if(c.addEventListener)e=!0;if(c.wp=c.wp||{},!c.wp.receiveEmbedMessage)if(c.wp.receiveEmbedMessage=function(e){var t=e.data;if(t)if(t.secret||t.message||t.value)if(!\/[^a-zA-Z0-9]\/.test(t.secret)){for(var r,a,i,s=d.querySelectorAll('iframe[data-secret=\"'+t.secret+'\"]'),n=d.querySelectorAll('blockquote[data-secret=\"'+t.secret+'\"]'),o=0;o<n.length;o++)n[o].style.display=\"none\";for(o=0;o<s.length;o++)if(r=s[o],e.source===r.contentWindow){if(r.removeAttribute(\"style\"),\"height\"===t.message){if(1e3<(i=parseInt(t.value,10)))i=1e3;else if(~~i<200)i=200;r.height=i}if(\"link\"===t.message)if(a=d.createElement(\"a\"),i=d.createElement(\"a\"),a.href=r.getAttribute(\"src\"),i.href=t.value,i.host===a.host)if(d.activeElement===r)c.top.location.href=t.value}}},e)c.addEventListener(\"message\",c.wp.receiveEmbedMessage,!1),d.addEventListener(\"DOMContentLoaded\",t,!1),c.addEventListener(\"load\",t,!1);function t(){if(!n){n=!0;for(var e,t,r=-1!==navigator.appVersion.indexOf(\"MSIE 
10\"),a=!!navigator.userAgent.match(\/Trident.*rv:11.\/),i=d.querySelectorAll(\"iframe.wp-embedded-content\"),s=0;s<i.length;s++){if(!(e=i[s]).getAttribute(\"data-secret\"))t=Math.random().toString(36).substr(2,10),e.src+=\"#?secret=\"+t,e.setAttribute(\"data-secret\",t);if(r||a)(t=e.cloneNode(!0)).removeAttribute(\"security\"),e.parentNode.replaceChild(t,e)}}}}(window,document);\/\/--><!]]><\/script><iframe title=\"&#8220;Coding Backpropagation and Gradient Descent From Scratch without using any libraries&#8221; &#8212; CloudxLab Blog\" sandbox=\"allow-scripts\" security=\"restricted\" src=\"https:\/\/cloudxlab.com\/blog\/backpropagation-from-scratch\/embed\/\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" class=\"wp-embedded-content\"><\/iframe>\n<\/div><\/figure>\n\n\n\n<p>Once our neural network has completed its training phase and learned the knob positions from the examples we provided, it enters the inference phase, where it gets to showcase its newfound skills.<\/p>\n\n\n\n<p>During inference we freeze the adjustments we made to the knobs during training. Think of it as setting the dials to the perfect settings\u2014and now the network is ready to tackle real-world tasks. When we present the network with new data, it springs into action, processing the input and swiftly generating an output based on what it&#8217;s learned.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img width=\"672\" height=\"280\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2024\/02\/Screenshot-2024-02-01-at-11.46.33-AM.png\" alt=\"\" class=\"wp-image-4196\"\/><\/figure>\n\n\n\n<p>Neural networks are versatile, capable of handling various tasks, from image recognition to natural language processing. 
By harnessing interconnected neurons, they unlock the potential of artificial intelligence, driving innovation across industries.<\/p>\n\n\n\n<p>For a detailed understanding of how neural networks work, you can refer to the following CloudxLab playlist.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-rich is-provider-cloudxlab wp-block-embed-cloudxlab\"><div class=\"wp-block-embed__wrapper\">\n<div><div style=\"left: 0; width: 100%; height: 0; position: relative; padding-bottom: 56.6667%; padding-top: 120px;\"><iframe title=\"Artificial Neural Networks\" src=\"\/\/if-cdn.com\/JgfXiCW?maxheight=1000&#038;app=1\" style=\"top: 0; left: 0; width: 100%; height: 100%; position: absolute; border: 0;\" allowfullscreen><\/iframe><\/div><\/div><script async src=\"\/\/if-cdn.com\/embed.js\" charset=\"utf-8\"><\/script><script type=\"text\/javascript\">window.addEventListener(\"message\",function(e){\n                window.parent.postMessage(e.data,\"*\");\n            },false);<\/script>\n<\/div><\/figure>\n\n\n\n<h2><strong>How does a model output a word?<\/strong><\/h2>\n\n\n\n<p>Now that we understand the concept of a neural network, let&#8217;s delve into its ability to perform <strong>classification<\/strong> tasks and how it outputs words.<\/p>\n\n\n\n<p>Classification is like sorting things into different groups. Imagine we&#8217;re sorting pictures into two categories: pictures of <strong>cats<\/strong> and pictures of everything else (<strong>not cats<\/strong>). Our job is to teach the computer to look at each picture and decide which category it belongs to. 
So, when we show it a picture, it&#8217;ll say, &#8220;Yes, that&#8217;s a cat,&#8221; or &#8220;No, that&#8217;s not a cat.&#8221; That&#8217;s how classification works\u2014it helps computers organize information into clear groups based on what they see.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img width=\"298\" height=\"203\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2024\/02\/Screenshot-2024-02-01-at-1.09.19-PM.png\" alt=\"\" class=\"wp-image-4198\"\/><\/figure>\n\n\n\n<p>Outputting a word is also a classification task. Let&#8217;s think of a big dictionary with lots of words\u2014say, 50,000 of them. Now, imagine we have a smart computer that&#8217;s learning to predict the next word in a sentence. So, when we give it a sequence of words from a sentence, it guesses what word should come next.<\/p>\n\n\n\n<p>But here&#8217;s the thing: computers think in numbers, not words. So, we turn each word into a special number, kind of like a <strong>token<\/strong>. Then, we train our computer to guess which number (or word) should come next in a sentence. When we give it some words, it looks at all the possibilities in the dictionary and assigns a chance (or probability) to each word, saying which one it thinks is most likely to come next.<\/p>\n\n\n\n<p>Suppose we have the following sequences and their corresponding next words:<\/p>\n\n\n\n<ol><li>Sequence: &#8220;The cat&#8221;, Next word: &#8220;sat&#8221;<\/li><li>Sequence: &#8220;The cat sat&#8221;, Next word: &#8220;on&#8221;<\/li><li>Sequence: &#8220;The cat sat on&#8221;, Next word: &#8220;the&#8221;<\/li><li>Sequence: &#8220;The cat sat on the&#8221;, Next word: &#8220;mat&#8221;<\/li><\/ol>\n\n\n\n<p>During training, the neural network will learn from these patterns. 
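Sequence/next-word pairs like the ones listed above can be generated mechanically from any sentence. A small sketch, using the same "The cat sat on the mat" example:

```python
# Build (sequence, next word) training pairs from a sentence.
words = "The cat sat on the mat".split()

pairs = []
for i in range(1, len(words)):
    context = " ".join(words[:i])      # everything seen so far
    pairs.append((context, words[i]))  # the word the model should predict

# e.g. pairs[1] is ("The cat", "sat"); pairs[-1] is ("The cat sat on the", "mat")
```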
It will understand that &#8220;The cat&#8221; is typically followed by &#8220;sat&#8221;, &#8220;The cat sat&#8221; is followed by &#8220;on&#8221;, &#8220;The cat sat on&#8221; is followed by &#8220;the&#8221;, and &#8220;The cat sat on the&#8221; is followed by &#8220;mat&#8221;. This way, the model learns the language structure and can predict the next word in a sequence based on the learned patterns. After training, our model will be good at predicting the next word in a sentence.<\/p>\n\n\n\n<p>So, our computer&#8217;s job is to learn from lots of examples and get really good at guessing the next word in a sentence based on what it&#8217;s seen before. It&#8217;s like a super smart helper, trying to predict what word you&#8217;ll say next in a conversation.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img width=\"880\" height=\"436\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2024\/02\/Screenshot-2024-02-01-at-1.13.18-PM.png\" alt=\"\" class=\"wp-image-4199\"\/><\/figure>\n\n\n\n<p>In the above example, we have a dictionary (lookup) of <strong>n<\/strong> words. This means the neural network recognizes only these n words from the dictionary and can only produce predictions based on them. Any word not in the dictionary won&#8217;t be recognized or generated by the model.<\/p>\n\n\n\n<p>Now, we provide the input &#8220;the cat and the dog ___&#8221;. We can see that each word is represented by a token in the lookup, such as &#8216;the&#8217; as 1, &#8216;cat&#8217; as 5, &#8216;and&#8217; as 2, etc. So we convert our input sequence to tokens using the lookup. Then we pass these tokens to the neural network, and it predicts a probability for each token, representing the chance of that token being the next word in the sequence. Then we choose the token with the highest probability, which in our case is token number 4. Upon performing a lookup, we find that token 4 represents the word &#8220;play&#8221;. 
So this becomes our output, and the sentence becomes &#8220;the cat and the dog play&#8221;.<\/p>\n\n\n\n<p>In our example, with a limited vocabulary of &#8216;n&#8217; words, the neural network can only predict the next word from the provided set of words. However, in large language models like ChatGPT, Bard, etc., the model is trained on a vast corpus of text data containing a diverse range of words and phrases from various sources. By training on a large dataset encompassing a wide vocabulary, the model becomes more proficient at understanding and generating <strong>human-like text<\/strong>.<strong> It learns the statistical relationships between words, their contexts, and the nuances of language usage across different domains<\/strong>.<\/p>\n\n\n\n<p>When you give LLMs a query or a prompt, they predict the next word in the sequence. Once they generate a word, they then consider what word might come after that, and the process continues until the response is completed. This iterative prediction process allows these models to generate coherent and contextually relevant responses.<\/p>\n\n\n\n<p>Let&#8217;s imagine the input prompt provided to ChatGPT is &#8220;Write a poem on nature.&#8221; Initially, the LLM might predict &#8220;The&#8221; as the first word. Then, considering &#8220;The&#8221; as the beginning of the poem, it might predict &#8220;beauty&#8221; as the next word, leading to &#8220;The beauty ____.&#8221; Continuing this process, it might predict &#8220;of&#8221; as the next word, resulting in &#8220;The beauty of ____.&#8221;<\/p>\n\n\n\n<p>As the LLM predicts each subsequent word, the poem gradually takes shape. It might predict &#8220;nature&#8221; as the next word, leading to &#8220;The beauty of nature ____.&#8221; Then, it might predict &#8220;is&#8221; as the following word, resulting in &#8220;The beauty of nature is ____.&#8221;<\/p>\n\n\n\n<p>The process continues until the LLM generates a coherent and evocative poem on nature. 
This iterative approach enables LLMs to create engaging and contextually relevant text based on the given prompt.<\/p>\n\n\n\n<h2>Recurrent Neural Networks<\/h2>\n\n\n\n<p>Imagine you&#8217;re reading a story, and you want to understand what&#8217;s happening as you go along. Your brain naturally remembers what you read before and uses that information to understand the story better. That&#8217;s kind of how recurrent neural networks work!<\/p>\n\n\n\n<p>In simple terms, RNNs are like brains for computers. They&#8217;re really good at processing sequences of data, like words in a sentence or frames in a video. RNNs were introduced in the 1980s. What makes them special is that they remember what they&#8217;ve seen before and use that memory to make sense of what&#8217;s happening next.<\/p>\n\n\n\n<p>So, if you feed a sentence into an RNN, it&#8217;ll read one word at a time, just like you read one word after another in a story. But here&#8217;s the cool part: as it reads each word, it keeps a memory of what it read before. This memory helps it understand the context of the sentence and <strong>make better predictions about what word might come next<\/strong>.<\/p>\n\n\n\n<p>While <strong>RNNs<\/strong> were great at processing sequences of data, they <strong>struggled with remembering long sequences<\/strong>. So, to address this issue, researchers came up with a special type of RNN called LSTM, which stands for Long Short-Term Memory. <strong>LSTMs<\/strong> are like <strong>upgraded versions of RNNs<\/strong>\u2014they&#8217;re smarter and better at remembering important information from the past.<\/p>\n\n\n\n<p>LSTMs performed better than RNNs at retaining memory over long sequences, but they still struggled with very long ones. 
To address these challenges, researchers introduced the <strong>Transformer<\/strong> model.<\/p>\n\n\n\n<p>For understanding RNNs and LSTMs in detail, you can refer to the following CloudxLab playlist.<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-rich is-provider-cloudxlab wp-block-embed-cloudxlab\"><div class=\"wp-block-embed__wrapper\">\n<div><div style=\"left: 0; width: 100%; height: 0; position: relative; padding-bottom: 56.6667%; padding-top: 120px;\"><iframe title=\"Recurrent Neural Networks\" src=\"\/\/if-cdn.com\/qKFTUms?maxheight=1000&#038;app=1\" style=\"top: 0; left: 0; width: 100%; height: 100%; position: absolute; border: 0;\" allowfullscreen><\/iframe><\/div><\/div><script async src=\"\/\/if-cdn.com\/embed.js\" charset=\"utf-8\"><\/script><script type=\"text\/javascript\">window.addEventListener(\"message\",function(e){\n                window.parent.postMessage(e.data,\"*\");\n            },false);<\/script>\n<\/div><\/figure>\n\n\n\n<h2>Transformer<\/h2>\n\n\n\n<p>The introduction of the Transformer marked a significant breakthrough in the field of Natural Language Processing. It emerged in the seminal paper titled &#8220;<a href=\"https:\/\/arxiv.org\/abs\/1706.03762\">Attention is All You Need<\/a>.&#8221;<\/p>\n\n\n\n<p>The Transformer&#8217;s innovative design, leveraging <strong>self-attention<\/strong> mechanisms, addressed these shortcomings. By allowing the model to focus on relevant parts of the input sequence, the Transformer could capture long-range dependencies and contextual information more effectively. 
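The self-attention computation at the heart of the Transformer can be sketched end to end in a few lines of NumPy. The sizes and random weights are illustrative only (3 words, 4-dimensional embeddings); in a real model the projection matrices are learned:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((3, 4))                  # one embedding per word (3 words)
W_q = rng.random((4, 4))                # query projection (normally learned)
W_k = rng.random((4, 4))                # key projection
W_v = rng.random((4, 4))                # value projection

Q, K, V = X @ W_q, X @ W_k, X @ W_v     # queries, keys, values
scores = Q @ K.T / np.sqrt(K.shape[1])  # similarities, scaled by sqrt(d_k)
weights = np.exp(scores)
weights /= weights.sum(axis=1, keepdims=True)  # softmax: each row sums to 1
context = weights @ V                   # one context vector per word
```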
This breakthrough paved the way for more sophisticated language models, including ChatGPT, that excel in understanding and generating coherent text.<\/p>\n\n\n\n<h3>Self-Attention Mechanism<\/h3>\n\n\n\n<p><strong>The basic idea: <\/strong>Each time the model predicts an output word, it uses only the part of the input where the most relevant information is concentrated, instead of the entire sentence.<\/p>\n\n\n\n<p>Suppose we have a sentence of <strong>n words<\/strong>:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img width=\"443\" height=\"50\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2024\/02\/Untitled-1.png\" alt=\"\" class=\"wp-image-4203\"\/><\/figure>\n\n\n\n<p>As we know, machines only understand numbers, so let&#8217;s map these words into vectors:<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img width=\"443\" height=\"50\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2024\/02\/Untitled-2.png\" alt=\"\" class=\"wp-image-4204\"\/><\/figure>\n\n\n\n<p>Now, to compute the similarity of a word vector C\u1d62 with every other vector, we take the dot product of C\u1d62 with each of C\u2081 to C\u2099. <strong>If the dot product is high, the vectors are very similar<\/strong>. 
<\/p>\n\n\n\n<div class=\"wp-block-columns\">\n<div class=\"wp-block-column\" style=\"flex-basis:100%\">\n<p>To understand about word vectors, embeddings and how dot products represent similarity between two vectors, you can refer to:<\/p>\n\n\n\n<figure class=\"wp-block-embed is-type-wp-embed is-provider-cloudxlab-blog wp-block-embed-cloudxlab-blog\"><div class=\"wp-block-embed__wrapper\">\n<blockquote class=\"wp-embedded-content\"><a href=\"https:\/\/cloudxlab.com\/blog\/understanding-embeddings-and-matrices-with-the-help-of-sentiment-analysis-and-llms-hands-on\/\">Understanding Embeddings and Matrices with the help of Sentiment Analysis and LLMs (Hands-On)<\/a><\/blockquote><script type='text\/javascript'><!--\/\/--><![CDATA[\/\/><!--\t\t\/*! This file is auto-generated *\/\t\t!function(c,d){\"use strict\";var e=!1,n=!1;if(d.querySelector)if(c.addEventListener)e=!0;if(c.wp=c.wp||{},!c.wp.receiveEmbedMessage)if(c.wp.receiveEmbedMessage=function(e){var t=e.data;if(t)if(t.secret||t.message||t.value)if(!\/[^a-zA-Z0-9]\/.test(t.secret)){for(var r,a,i,s=d.querySelectorAll('iframe[data-secret=\"'+t.secret+'\"]'),n=d.querySelectorAll('blockquote[data-secret=\"'+t.secret+'\"]'),o=0;o<n.length;o++)n[o].style.display=\"none\";for(o=0;o<s.length;o++)if(r=s[o],e.source===r.contentWindow){if(r.removeAttribute(\"style\"),\"height\"===t.message){if(1e3<(i=parseInt(t.value,10)))i=1e3;else if(~~i<200)i=200;r.height=i}if(\"link\"===t.message)if(a=d.createElement(\"a\"),i=d.createElement(\"a\"),a.href=r.getAttribute(\"src\"),i.href=t.value,i.host===a.host)if(d.activeElement===r)c.top.location.href=t.value}}},e)c.addEventListener(\"message\",c.wp.receiveEmbedMessage,!1),d.addEventListener(\"DOMContentLoaded\",t,!1),c.addEventListener(\"load\",t,!1);function t(){if(!n){n=!0;for(var e,t,r=-1!==navigator.appVersion.indexOf(\"MSIE 
10\"),a=!!navigator.userAgent.match(\/Trident.*rv:11.\/),i=d.querySelectorAll(\"iframe.wp-embedded-content\"),s=0;s<i.length;s++){if(!(e=i[s]).getAttribute(\"data-secret\"))t=Math.random().toString(36).substr(2,10),e.src+=\"#?secret=\"+t,e.setAttribute(\"data-secret\",t);if(r||a)(t=e.cloneNode(!0)).removeAttribute(\"security\"),e.parentNode.replaceChild(t,e)}}}}(window,document);\/\/--><!]]><\/script><iframe title=\"&#8220;Understanding Embeddings and Matrices with the help of Sentiment Analysis and LLMs (Hands-On)&#8221; &#8212; CloudxLab Blog\" sandbox=\"allow-scripts\" security=\"restricted\" src=\"https:\/\/cloudxlab.com\/blog\/understanding-embeddings-and-matrices-with-the-help-of-sentiment-analysis-and-llms-hands-on\/embed\/\" width=\"600\" height=\"338\" frameborder=\"0\" marginwidth=\"0\" marginheight=\"0\" scrolling=\"no\" class=\"wp-embedded-content\"><\/iframe>\n<\/div><\/figure>\n<\/div>\n<\/div>\n\n\n\n<p>These dot products can be big or small, but they&#8217;re not really easy to understand on their own. So, we want to make them simpler and easier to compare. To do that, we use a trick called scaling. It&#8217;s like putting all these numbers on the same scale, from 0 to 1. This way, we can see which words are more similar to each other. The higher the number, the more similar the words.<\/p>\n\n\n\n<p>Suppose dot(C\u1d62, C\u2081) is 0.7 and dot(C\u1d62, C\u2086) is 0.5. Then we can easily say that C\u1d62 is more similar to C\u2081 than to C\u2086.<\/p>\n\n\n\n<p>Now, imagine we have these nice numbers, but they&#8217;re still not exactly like probabilities (the chances of something happening). So, we use another trick called softmax. It helps us turn these numbers into something that looks more like probabilities.<\/p>\n\n\n\n<p><strong>Softmax basically adjusts the numbers so they all add up to 1, like percentages<\/strong>. <strong>This helps the computer understand how important each word is compared to the others. 
It&#8217;s like saying, &#8220;Out of all these words, which ones should we pay the most attention to?&#8221; Let&#8217;s call them attention scores.<\/strong><\/p>\n\n\n\n<p>Now, we want to use these attention scores to calculate a weighted sum of the original vectors C\u2081 to C\u2099. This weighted sum is called the <strong>context vector<\/strong>, and it gives us a representation of the input sentence that takes into account the importance of each word based on the attention scores. It provides a summary of the sentence that focuses more on the words that are deemed most relevant for the task at hand.<\/p>\n\n\n\n<p><strong>Confused? Let&#8217;s understand with an example<\/strong><\/p>\n\n\n\n<p>Suppose our input sentence is &#8220;<strong>I love Natural Language Processing<\/strong>&#8221;.<\/p>\n\n\n\n<h5><strong>Step 1:<\/strong> Let&#8217;s represent each word by a one-hot encoding. For instance:<\/h5>\n\n\n\n<p>&#8220;i&#8221; = [1, 0, 0, 0, 0]<br>&#8220;love&#8221; = [0, 1, 0, 0, 0]<br>&#8220;natural&#8221; = [0, 0, 1, 0, 0]<br>&#8220;language&#8221; = [0, 0, 0, 1, 0]<br>&#8220;processing&#8221; = [0, 0, 0, 0, 1]<\/p>\n\n\n\n<p>Each word in the sentence \u201cI love Natural Language Processing\u201d is first transformed into two types of embeddings:<\/p>\n\n\n\n<ul><li><strong>Query (Q)<\/strong>: This is a representation of the word used to derive attention scores: the Query vector stands for the word or token for which we are calculating attention weights with respect to the other words or tokens in the sequence. For example, if we&#8217;re considering the word &#8220;love,&#8221; the query might be: &#8220;What other words help me understand the meaning of &#8216;love&#8217; in this sentence?&#8221;<\/li><li><strong>Key (K)<\/strong>: This is another representation of the word, used for comparison with Query vectors during the calculation of attention weights. The key acts like an answer to the query. 
It tells us how relevant each other word in the sentence is to understanding &#8220;love.&#8221;<\/li><\/ul>\n\n\n\n<p>For each word in the input sequence, we first compute the Query (Q) and Key (K) vectors using the initialized weight matrices <em>W<sub>q<\/sub> and W<sub>k<\/sub><\/em> as:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">Query (Q<sub>x<\/sub>) = x * W<sub>q<\/sub>\nKey (K<sub>x<\/sub>) = x * W<sub>k<\/sub>\n\n<em>where x is the input encoding, and W<sub>q<\/sub> and W<sub>k<\/sub> are the weight matrices learned during training.<\/em><\/pre>\n\n\n\n<p>Let\u2019s initialize W<sub>q<\/sub> and W<sub>k<\/sub> with random values. Suppose W<sub>q<\/sub> and W<sub>k<\/sub> are of shape (5, 4). That means the projection dimension of the query and key vectors will be 4.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">W<sub>q<\/sub> = [[0.94, 0.48, 0.02, 0.93], \n     [0.16, 0.72, 0.27, 0.06],\n     [0.17, 0.91, 0.6 , 0.21], \n     [0.37, 0.85, 0.13, 0.82], \n     [0.58, 0.85, 0.13, 0.75]]\n\nW<sub>k<\/sub> = [[0.37, 0.25, 0.17, 0.95], \n     [0.56, 0.19, 0.25, 0.91],\n     [0.93, 0.01, 0.94, 0.43],\n     [0.37, 0.84, 0.59, 0.68], \n     [0.97, 0.09, 0.42, 0.73]]<\/pre>\n\n\n\n<p id=\"block-9b4322b5-6bec-4536-894a-9bfb1f76e171\">So, for \u201cI\u201d, the query and key vectors come out as:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>i = [1, 0, 0, 0, 0]<\/strong>\nQ<sub>i<\/sub> = dot(i, W<sub>q<\/sub>) = <strong>[0.94, 0.48, 0.02, 0.93]<\/strong>\nK<sub>i<\/sub> = dot(i, W<sub>k<\/sub>)<strong> = [0.37, 0.25, 0.17, 0.95]<\/strong><\/pre>\n\n\n\n<p>In the same way, we calculate the query and key vectors of all the words, which come out as:<\/p>\n\n\n\n<p>Q<sub>i<\/sub> = [0.94, 0.48, 0.02, 0.93]<br>Q<sub>love <\/sub>= [0.16 0.72 0.27 0.06]<br>Q<sub>natural <\/sub>= [0.17 0.91 0.6 0.21]<br>Q<sub>language <\/sub>= [0.37 0.85 0.13 0.82]<br>Q<sub>processing <\/sub>= [0.58 0.85 0.13 0.75]<\/p>\n\n\n\n<p>K<sub>i<\/sub> = [0.37 0.25 0.17 0.95]<br>K<sub>love 
<\/sub>= [0.56 0.19 0.25 0.91]<br>K<sub>natural <\/sub>= [0.93 0.01 0.94 0.43]<br>K<sub>language <\/sub>= [0.37 0.84 0.59 0.68]<br>K<sub>processing <\/sub>= [0.97 0.09 0.42 0.73]<\/p>\n\n\n\n<h5><strong>Step 2:<\/strong>&#8211; Compute the dot product of <strong>query vector of \u201cI\u201d(Q<sub>i<\/sub><\/strong>)with every <strong>key vector<\/strong>.<\/h5>\n\n\n\n<p>Operation: <strong>dot(Q<sub>x<\/sub> , K<sub>x<\/sub>), <\/strong><em>where x is the word.<\/em> Let&#8217;s calculate for &#8220;I&#8221;. So,<\/p>\n\n\n\n<p>Q<sub>i<\/sub> = [0.94, 0.48, 0.02, 0.93]<\/p>\n\n\n\n<ul><li>dot(Q<sub>i<\/sub> , K<sub>i<\/sub>) = 0.94 * 0.37 + 0.48 * 0.25 + 0.02 * 0.17 + 0.93 * 0.95<strong> = 1.35<\/strong><\/li><li>dot(Q<sub>i<\/sub> , K<sub>love<\/sub>) = 0.94 * 0.56 + 0.48 * 0.19 + 0.02 * 0.25 + 0.93 * 0.91 = <strong>1.47<\/strong><\/li><li>dot(Q<sub>i<\/sub> , K<sub>natural<\/sub>) = 0.94 * 0.93 + 0.48 * 0.01 + 0.02 * 0.94 + 0.93 * 0.43 = <strong>1.3<\/strong><\/li><li>dot(Q<sub>i<\/sub> , K<sub>language<\/sub>) = 0.94 * 0.37 + 0.48 * 0.84 + 0.02 * 0.59 + 0.93 * 0.68 = <strong>1.4<\/strong><\/li><li>dot(Q<sub>i<\/sub> , K<sub>processing<\/sub>) = 0.94 * 0.97 + 0.48 * 0.09 + 0.02 * 0.42 + 0.93 * 0.73 = <strong>1.64<\/strong><\/li><\/ul>\n\n\n\n<p><strong>Dot product vector w.r.t I = <\/strong>[1.35, 1.47, 1.3, 1.4, 1.64]<\/p>\n\n\n\n<p>In the same way, we calculate dot product vector of all the query vectors with key vectors.<\/p>\n\n\n\n<p>Dot product vector w.r.t<strong> I (score<sub>i<\/sub>) = <\/strong>[1.35, 1.47, 1.3, 1.4, 1.64]<br>Dot product vector w.r.t<strong> love (score<sub>love<\/sub>) = <\/strong>[0.34, 0.35, 0.44, 0.86, 0.38]<br>Dot product vector w.r.t<strong> natural (score<sub>natural<\/sub>) = <\/strong>[0.59, 0.61, 0.82, 1.32, 0.65]<br>Dot product vector w.r.t<strong> language (score<sub>language<\/sub>) = <\/strong>[1.15, 1.15, 0.83, 1.49, 1.09]<br>Dot product vector w.r.t<strong> processing (score<sub>processing<\/sub>) = <\/strong>[1.16, 
1.2, 0.99, 1.52, 1.24]<\/p>\n\n\n\n<p>But these scores can vary widely in magnitude and <strong>lack a clear interpretation of relative importance<\/strong> among the elements in the sequence.<\/p>\n\n\n\n<p>Here comes <strong>softmax<\/strong> to the rescue.<\/p>\n\n\n\n<p>The softmax function is applied to the attention scores to convert them into probabilities.<\/p>\n\n\n\n<p><img src=\"https:\/\/lh7-us.googleusercontent.com\/vgxYtSUYCw6U92f7PLkDqdbUC3O9yBgXgqLSMlETZhwihV_Lux-LNB64Rg2BVZ1fCSuhCK2_9RVmcvEsCMe4ONisTUZ5EedB__0xYTV9IlG2vW1tf86cM2PqG3LbFl6yyTbRSoIzH1a55ALZaUM9_Bah3g=nw\" style=\"width: 200px;\"><\/p>\n\n\n\n<p>This transformation has two main effects:<\/p>\n\n\n\n<ul><li><strong>Amplifying Higher Scores:<\/strong><ul><li>Scores that are initially higher are amplified more by the exponential function in the softmax numerator.<\/li><\/ul><\/li><li><strong>Diminishing Lower Scores:<\/strong><ul><li>Scores that are initially lower receive smaller weights after softmax normalization.<\/li><\/ul><\/li><\/ul>\n\n\n\n<p><strong>Let\u2019s understand it with an example.<\/strong><\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"># Let\u2019s take an array\narr = [1, 2, 3, 4, 5, 6]<\/pre>\n\n\n\n<p>Let\u2019s calculate the softmax distribution:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">import numpy as np\n\nnumerator = np.exp(arr)\n# [  2.71828183   7.3890561   20.08553692  54.59815003 148.4131591  403.42879349]\n\ndenominator = numerator.sum()   # 636.6329774790333\n\nsoftmax = numerator \/ denominator\n# [0.00426978 0.01160646 0.03154963 0.08576079 0.23312201 0.63369132]<\/pre>\n\n\n\n<p>Now, let\u2019s plot the bar graphs of the original array and the softmax array.<\/p>\n\n\n\n<p><img width=\"827px;\" height=\"410px;\" src=\"https:\/\/lh7-us.googleusercontent.com\/c8ccLwAD3FRCiH5XhCIWVc2ZmGiNgWv3Njjv65CZbryPdlLkhwxqIZUcZSwmHSFixWLJ4fNL2DJamMj1t1mFnwJJxfISN9n42__hI-GBiAEpIE_36xuDWgP5qX5HI28XOFUuJhIn6vrxbOJX3Vk9Q13VAg=nw\"><\/p>\n\n\n\n<p>In the above 
image, we can see that softmax amplified the higher scores and diminished the lower ones.<\/p>\n\n\n\n<p>Now, we will pass the calculated attention scores to the softmax function.<\/p>\n\n\n\n<p>But dot products can grow large in magnitude, and applying the softmax function to very large values returns extremely small gradients.<\/p>\n\n\n\n<p>What can be done to bring the scores to a definite scale?<\/p>\n\n\n\n<p>Yes, you are right: <strong>scaling<\/strong>. Before passing the scores to softmax, we will first scale them.<\/p>\n\n\n\n<h5><strong>Step 3:<\/strong> Scale the attention scores.<\/h5>\n\n\n\n<p>We simply divide the score vectors by the square root of the length of the <strong>key vector<\/strong>. The length of our key vector is 4, so we\u2019ll divide the dot product vectors by sqrt(4), which is <strong>2<\/strong>.<\/p>\n\n\n\n<p><strong>score<sub>i<\/sub> = <\/strong>[1.35, 1.47, 1.3, 1.4, 1.64] \/ 2 = [0.68, 0.74, 0.65, 0.7 , 0.82]<br><strong>score<sub>love<\/sub> = <\/strong>[0.34, 0.35, 0.44, 0.86, 0.38] \/ 2 = [0.17, 0.18, 0.22, 0.43, 0.19]<br><strong>score<sub>natural<\/sub> = <\/strong>[0.59, 0.61, 0.82, 1.32, 0.65] \/ 2 = [0.3 , 0.3 , 0.41, 0.66, 0.32]<br><strong>score<sub>language<\/sub> = <\/strong>[1.15, 1.15, 0.83, 1.49, 1.09] \/ 2 = [0.57, 0.57, 0.42, 0.74, 0.55]<br><strong>score<sub>processing<\/sub> = <\/strong>[1.16, 1.2, 0.99, 1.52, 1.24] \/ 2 = [0.58, 0.6 , 0.5 , 0.76, 0.62]<\/p>\n\n\n\n<h5><strong>Step 4:<\/strong> Apply softmax<\/h5>\n\n\n\n<p>After applying softmax, our scores come out as:<br><strong>score<sub>i<\/sub> = <\/strong>[0.19, 0.2 , 0.19, 0.2 , 0.22]<br><strong>score<sub>love<\/sub> = <\/strong>[0.19, 0.19, 0.2 , 0.24, 0.19]<br><strong>score<sub>natural<\/sub> = <\/strong>[0.18, 0.18, 0.2 , 0.26, 0.18]<br><strong>score<sub>language<\/sub> = <\/strong>[0.2 , 0.2 , 0.17, 0.24, 0.2 ]<br><strong>score<sub>processing<\/sub> = <\/strong>[0.19, 0.2 , 0.18, 0.23, 0.2 ]<\/p>\n\n\n\n<h5>Step 5: Project context 
vector<\/h5>\n\n\n\n<p>Now we are just one step away from getting our final context vectors. We have to decide how many dimensions to use to represent each word. For that, we use the <strong>Value<\/strong> embedding.<\/p>\n\n\n\n<p>We calculate the <strong>Value<\/strong> embeddings in the same way we calculated the <strong>Query(Q)<\/strong> and <strong>Key(K)<\/strong> embeddings, i.e. V = x * W<sub>v<\/sub>.<\/p>\n\n\n\n<p>So, for the value embeddings, let\u2019s take W<sub>v<\/sub> of shape (5,8). That means we want to represent the context vector of each word with 8 dimensions.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">W<sub>v<\/sub> = [[0.71, 0.95, 0.32, 0.16, 0.79, 0.61, 0.63, 0.06],\n      [0.6 , 0.84, 0.26, 0.29, 0.88, 0.26, 0.11, 0.6 ],\n      [0.65, 0.78, 0.02, 0.18, 0.07, 0.67, 0.58, 0.46],\n      [0.39, 0.68, 0.09, 0.23, 0.89, 0.14, 0.83, 0.64],\n      [0.7 , 0.96, 0.22, 0.45, 0.65, 0.79, 0.01, 0.59]]<\/pre>\n\n\n\n<p>So, our <strong>Value(V)<\/strong> matrix comes out as: V = x * W<sub>v<\/sub><\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">V = [[0.71, 0.95, 0.32, 0.16, 0.79, 0.61, 0.63, 0.06],\n     [0.6 , 0.84, 0.26, 0.29, 0.88, 0.26, 0.11, 0.6 ],\n     [0.65, 0.78, 0.02, 0.18, 0.07, 0.67, 0.58, 0.46],\n     [0.39, 0.68, 0.09, 0.23, 0.89, 0.14, 0.83, 0.64],\n     [0.7 , 0.96, 0.22, 0.45, 0.65, 0.79, 0.01, 0.59]]<\/pre>\n\n\n\n<p>Now, we simply multiply the <strong>attention scores<\/strong> with the <strong>Value matrix<\/strong> to get our final context vector.<\/p>\n\n\n\n<p><strong>The context vector of I comes out as:<\/strong><br>Operation: <strong>np.matmul(score<sub>x<\/sub> , V), <\/strong><em>where x is the word.<\/em><\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">import numpy as np\n\nContext[\"I\"] = np.matmul(score<sub>i<\/sub> , V) = [0.61, 0.85, 0.18, 0.27, 0.66, 0.5 , 0.42, 0.48]<\/pre>\n\n\n\n<p><strong>Context vector of I = <\/strong>[0.61, 0.85, 0.18, 0.27, 0.66, 0.5 , 0.42, 0.48]<\/p>\n\n\n\n<p>In the same way, we calculate 
context vector of all the words.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>Context vector of I = <\/strong>[0.61, 0.85, 0.18, 0.27, 0.66, 0.5 , 0.42, 0.48]<br><strong>Context vector of love = <\/strong>[0.6 , 0.83, 0.18, 0.26, 0.66, 0.48, 0.45, 0.48]<br><strong>Context vector of natural = <\/strong>[0.59, 0.83, 0.17, 0.26, 0.66, 0.47, 0.46, 0.48]<br><strong>Context vector of language = <\/strong>[0.6 , 0.84, 0.18, 0.26, 0.68, 0.47, 0.44, 0.48]<br><strong>Context vector of processing = <\/strong>[0.6 , 0.84, 0.18, 0.26, 0.67, 0.48, 0.44, 0.48]<\/pre>\n\n\n\n<p>This context vector combines the original embedding of the word with information about how it relates to the other words in the sentence, all based on the calculated attention scores.<\/p>\n\n\n\n<p><strong>Now the question arises: why have we done all this?<\/strong><\/p>\n\n\n\n<p>On their own, word vectors don&#8217;t capture the relationship of a particular word with the other words of the sentence, so the representation is no better than a random bag of words. A sentence, however, is a group of words that makes sense together. So we calculate a context vector for each word that also keeps information about the word&#8217;s relationship with every other word. This is called the self-attention mechanism.<\/p>\n\n\n\n<h3>Transformer architecture<\/h3>\n\n\n\n<p><img width=\"369px;\" height=\"511px;\" src=\"https:\/\/lh7-us.googleusercontent.com\/vUDmLwRY91cPOEXNFaawnpnc14linQMGlSkYW8BgHZ7tZNkAwzTp9DwtfMSKJWagHkcgi22rv49RGvX0BOrEJ0Khqdxb26vUT10P7Qyaxh2QE64uR0ShSCelE27ijK8bDPLdTJHglq6XmrhMexzFemybdA=nw\"><\/p>\n\n\n\n<p>The transformer architecture is made of two blocks: the <strong>Encoder<\/strong> (left) and the <strong>Decoder<\/strong> (right). 
These encoder and decoder blocks are stacked <strong>N <\/strong>times.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img width=\"854\" height=\"481\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2024\/02\/Screenshot-2024-02-05-at-3.08.37-PM.png\" alt=\"\" class=\"wp-image-4209\"\/><\/figure>\n\n\n\n<h5>Encoder:<\/h5>\n\n\n\n<ul><li><strong>Functionality<\/strong>: The encoder&#8217;s goal is to extract meaningful features and patterns from the input sequence, which can then be used by the decoder for generating the output sequence. It analyzes the input sequence, token by token, and generates contextualized representations for each token. These representations capture information about the token&#8217;s context within the input sequence.<\/li><li><strong>Input<\/strong>: The encoder receives the input sequence, typically represented as a sequence of word embeddings or tokens.<\/li><li><strong>Output<\/strong>: The encoder outputs a sequence of contextualized representations for each token in the input sequence.<\/li><\/ul>\n\n\n\n<h5>Decoder:<\/h5>\n\n\n\n<ul><li><strong>Functionality<\/strong>: The decoder block is tasked with generating the output sequence based on the contextualized representations provided by the encoder. Its task is to predict the next token in the output sequence based on the context provided by the encoder and the previously generated tokens. It generates the output sequence token by token, taking into account the learned representations and the context provided by the encoder.<\/li><li><strong>Input<\/strong>: Initially, the decoder receives the sequence of contextualized representations generated by the encoder.<\/li><li><strong>Outputs (shifted right)<\/strong>: During training, the decoder also receives a shifted version of the output sequence, where each token is shifted to the right by one position. 
This shifted sequence is used for teacher forcing, helping the decoder learn to predict the next token in the sequence based on the previous tokens.<\/li><li><strong>Output<\/strong>: The decoder generates the output sequence, which represents the model&#8217;s predictions or translations.<\/li><\/ul>\n\n\n\n<h5>Positional Encoding<\/h5>\n\n\n\n<p>Consider the following two sentences:<\/p>\n\n\n\n<p>&gt; I <strong>do not<\/strong> like the story of the movie, but I <strong>do<\/strong> like the cast<\/p>\n\n\n\n<p>&gt; I <strong>do<\/strong> like the story of the movie, but I <strong>do not<\/strong> like the cast<\/p>\n\n\n\n<p>What is the difference between these two sentences?<\/p>\n\n\n\n<p>The words are the same, but the meaning is different. This shows that information about word order is required to distinguish the two meanings.<\/p>\n\n\n\n<p><strong>Positional embedding<\/strong> generates embeddings which allow the model to learn the relative positions of words.<\/p>\n\n\n\n<p>Now that we have a brief overview of how the transformer works, let&#8217;s cover the components inside the encoder and decoder blocks one by one. We&#8217;ll simultaneously code the components, which will give us the final code of GPT.<\/p>\n\n\n\n<h2>Coding GPT from scratch<\/h2>\n\n\n\n<p>Let&#8217;s code it. Make sure you are comfortable with <strong><a href=\"https:\/\/www.tensorflow.org\/\">Tensorflow<\/a><\/strong> and <strong><a href=\"https:\/\/keras.io\/\">Keras<\/a><\/strong>, as we will be using them. You can access the complete code used in this blog at <a href=\"https:\/\/github.com\/cloudxlab\/GPT-from-scratch\/tree\/master\">https:\/\/github.com\/cloudxlab\/GPT-from-scratch\/tree\/master<\/a>. We will be coding only the decoder part of the transformer, as modern LLMs such as ChatGPT use only the decoder part of the transformer.<\/p>\n\n\n\n<h3>Head (attention) block<\/h3>\n\n\n\n<p>So, we&#8217;ll start with implementing the <strong>Head <\/strong>block. 
In the context of transformer-based architectures, a &#8220;<strong>Head<\/strong>&#8221; refers to a distinct computational unit responsible for performing attention computations. It operates within the broader framework of self-attention, allowing the model to focus on relevant parts of the input sequence.<\/p>\n\n\n\n<p>Let&#8217;s start with writing the __init__() method that sets up the necessary components and parameters required for attention computations.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>class<\/strong> Head(tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>Layer):\n    \"\"\" one head of self-attention \"\"\"\n\n    <strong>def<\/strong> __init__(self, head_size):\n        super(Head, self)<strong>.<\/strong>__init__()\n        self<strong>.<\/strong>key <strong>=<\/strong> tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>Dense(head_size, use_bias<strong>=False<\/strong>)\n        self<strong>.<\/strong>query <strong>=<\/strong> tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>Dense(head_size, use_bias<strong>=False<\/strong>)\n        self<strong>.<\/strong>value <strong>=<\/strong> tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>Dense(head_size, use_bias<strong>=False<\/strong>)\n\n        tril <strong>=<\/strong> tf<strong>.<\/strong>linalg<strong>.<\/strong>band_part(tf<strong>.<\/strong>ones((block_size, block_size)), <strong>-<\/strong>1, 0)\n        self<strong>.<\/strong>tril <strong>=<\/strong> tf<strong>.<\/strong>constant(tril)\n\n        self<strong>.<\/strong>dropout <strong>=<\/strong> tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>Dropout(dropout)<\/pre>\n\n\n\n<p>In the above code, <\/p>\n\n\n\n<ul><li>The <code>key<\/code>, <code>query<\/code>, and <code>value<\/code> layers are initialized as dense layers using the <code>tf.keras.layers.Dense<\/code> module. 
These layers are initialized without biases (<code>use_bias=False<\/code>), following the common Transformer convention of keeping the key, query, and value projections purely linear.<\/li><li>A lower triangular mask (<code>tril<\/code>) is generated using <code>tf.linalg.band_part<\/code>. This mask is essential for preventing the model from attending to future tokens during training, thereby avoiding information leakage. The lower triangular mask ensures that each position in the input sequence can only attend to the positions\/words preceding it. While training transformers, we pass the whole input sequence at once. So suppose we have the following input sequence:<\/li><\/ul>\n\n\n\n<p>[&lt;start&gt;, I, love, natural, language, processing, &lt;end&gt;]<\/p>\n\n\n\n<p>Now, suppose we want to predict the word after &#8220;natural&#8221;. The lower triangular mask ensures that during training, our model can only attend to the tokens that precede &#8220;natural&#8221; (i.e., <code>&lt;start&gt;<\/code>, &#8216;I&#8217;, &#8216;love&#8217;), masking out the words that come after it. This prevents the model from accessing future tokens, preserving the autoregressive nature of the task and ensuring that predictions are based solely on the preceding context. The mask is used only in the decoder block, not the encoder block: while encoding, the model may attend to all the words, but while decoding it cannot, because its task is to predict the next word.<\/p>\n\n\n\n<ul><li>In the end, we use a dropout layer initialized using <code>tf.keras.layers.Dropout<\/code>. 
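To see the effect of the lower triangular mask concretely, here is a minimal NumPy sketch (sizes are illustrative, not tied to the model above): masked positions receive -inf, so after softmax they get exactly zero attention weight.

```python
import numpy as np

T = 4  # sequence length (illustrative)
tril = np.tril(np.ones((T, T)))                 # 1 = may attend, 0 = future position
scores = np.arange(T * T, dtype=float).reshape(T, T)
masked = np.where(tril == 1, scores, -np.inf)   # hide future tokens with -inf

# Softmax row by row: the -inf entries contribute exactly zero weight
e = np.exp(masked - masked.max(axis=-1, keepdims=True))
weights = e / e.sum(axis=-1, keepdims=True)

print(weights[0])  # first token can only attend to itself: [1. 0. 0. 0.]
```

Each row still sums to 1, but all the probability mass is distributed over the current and preceding positions only.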
Dropout regularization is applied to the attention weights during training to prevent overfitting and improve generalization performance.<\/li><\/ul>\n\n\n\n<p>Now, we will code the attention mechanism.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>def<\/strong> call(self, x):\n        <em># input of size (batch, time-step, channels)<\/em>\n        <em># output of size (batch, time-step, head size)<\/em>\n        B, T, C <strong>=<\/strong> x<strong>.<\/strong>shape\n        k <strong>=<\/strong> self<strong>.<\/strong>key(x)   <em># (B, T, hs)<\/em>\n        q <strong>=<\/strong> self<strong>.<\/strong>query(x) <em># (B, T, hs)<\/em>\n\n        <em># compute attention scores (\"affinities\")<\/em>\n        wei <strong>=<\/strong> tf<strong>.<\/strong>matmul(q, tf<strong>.<\/strong>transpose(k, perm<strong>=<\/strong>[0, 2, 1])) <strong>*<\/strong> tf<strong>.<\/strong>math<strong>.<\/strong>rsqrt(tf<strong>.<\/strong>cast(k<strong>.<\/strong>shape[<strong>-<\/strong>1], tf<strong>.<\/strong>float32))  <em># (B, T, T)<\/em>\n        wei <strong>=<\/strong> tf<strong>.<\/strong>where(self<strong>.<\/strong>tril[:T, :T] <strong>==<\/strong> 0, float('-inf'), wei)  <em># (B, T, T)<\/em>\n        wei <strong>=<\/strong> tf<strong>.<\/strong>nn<strong>.<\/strong>softmax(wei, axis<strong>=-<\/strong>1)  <em># (B, T, T)<\/em>\n        wei <strong>=<\/strong> self<strong>.<\/strong>dropout(wei)\n\n        <em># perform the weighted aggregation of the values<\/em>\n        v <strong>=<\/strong> self<strong>.<\/strong>value(x)  <em># (B, T, hs)<\/em>\n        out <strong>=<\/strong> tf<strong>.<\/strong>matmul(wei, v)  <em># (B, T, T) @ (B, T, hs) -&gt; (B, T, hs)<\/em>\n        <strong>return<\/strong> out<\/pre>\n\n\n\n<ul><li>The method receives input <code>x<\/code>, which is a tensor representing the input sequence. It assumes that the input has three dimensions: <code>(batch_size, time_steps, channels)<\/code>. 
<\/li><\/ul>\n\n\n\n<ol><li><strong>Batch Size<\/strong>: It&#8217;s the number of sequences processed together. For instance, if we process 32 movie reviews simultaneously, the batch size is 32.<\/li><li><strong>Time Steps<\/strong>: It&#8217;s the length of each sequence. In a movie review consisting of 100 words, each word is a time step.<\/li><li><strong>Channels<\/strong>: It&#8217;s the dimensionality of each feature in a sequence. If we represent words with 300-dimensional embeddings, each word has 300 channels.<\/li><\/ol>\n\n\n\n<ul><li>Then it applies the <code>key<\/code> and <code>query<\/code> layers to the input tensor <code>x<\/code>, resulting in tensors <code>k<\/code> and <code>q<\/code>, both with shapes <code>(batch_size, time_steps, head_size)<\/code>.  Here <code>head_size<\/code> refers to the dimensionality of the feature space within each attention head. For example, if <code>head_size<\/code> is set to 64, it means that each attention head operates within a feature space of dimension 64.<\/li><li>It computes attention scores between the query and key tensors <code>(q, k)<\/code> using the dot product followed by normalization. The result is a tensor <code>wei<\/code> of shape <code>(batch_size, time_steps, time_steps)<\/code>, where each element represents the attention score between a query and a key.<\/li><li>The lower triangular mask is applied to <code>wei<\/code> to prevent attending to future tokens, ensuring the autoregressive property of the model.<\/li><li>The softmax function is then applied along the last dimension to obtain attention weights, ensuring that the weights sum up to 1 for each time step.<\/li><li>After that, Dropout regularization is applied to the attention weights to prevent overfitting during training.<\/li><li>Then it applies the <code>value<\/code> layer to the input tensor <code>x<\/code>, resulting in a tensor <code>v<\/code> of shape <code>(batch_size, time_steps, head_size)<\/code>. 
It performs a weighted sum of the value tensor <code>v<\/code> using the attention weights <code>wei<\/code>, resulting in the output tensor <code>out<\/code> of shape <code>(batch_size, time_steps, head_size)<\/code>. This step computes the context vector, which represents the contextually enriched representation of the input sequence based on attention computations.<\/li><\/ul>\n\n\n\n<p>The <code>Head<\/code> block we implemented represents a single attention head within the Transformer architecture. It performs attention computations, including key, query, and value projections, attention score calculation, masking, softmax normalization, and weighted aggregation of values. Each <code>Head<\/code> block focuses on capturing specific patterns and relationships within the input sequence, contributing to the overall representation learning process of the model.<\/p>\n\n\n\n<p>Now, let&#8217;s delve into the concept of <strong>multi-head attention<\/strong>.<\/p>\n\n\n\n<h3>Multi-Head attention Block<\/h3>\n\n\n\n<p>Multi-head attention is a key component of the Transformer architecture designed to enhance the model&#8217;s ability to capture diverse patterns and dependencies within the input sequence. Instead of relying on a single attention head, the model utilizes multiple attention heads in parallel. Each attention head learns different patterns and relationships within the input sequence independently. 
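The whole pipeline implemented by the Head block — project to q, k, v, scale, mask, softmax, aggregate — can be condensed into a short NumPy sketch (the weights and shapes here are random and illustrative, not the ones used in the worked example above):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))  # shift for numerical stability
    return e / e.sum(axis=-1, keepdims=True)

def head(x, Wq, Wk, Wv):
    """One head of causal self-attention on a (T, C) input."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv                                # (T, hs) each
    wei = q @ k.T / np.sqrt(k.shape[-1])                            # scaled dot products, (T, T)
    wei = np.where(np.tril(np.ones_like(wei)) == 1, wei, -np.inf)   # causal mask
    return softmax(wei) @ v                                         # weighted sum of values, (T, hs)

rng = np.random.default_rng(0)
T, C, hs = 5, 4, 8                                                  # tokens, channels, head size
x = rng.random((T, C))
out = head(x, rng.random((C, hs)), rng.random((C, hs)), rng.random((C, hs)))
print(out.shape)  # (5, 8)
```

The `Head` class above does the same computation, batched, with learned weight matrices and dropout on the attention weights.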
The outputs of the multiple attention heads are then concatenated or combined in some way to produce a comprehensive representation of the input sequence.<\/p>\n\n\n\n<p><strong>Why multi-head attention?<\/strong><\/p>\n\n\n\n<ul><li><strong>Capturing Diverse Patterns<\/strong>: Each attention head specializes in capturing specific patterns or dependencies within the input sequence, enhancing the model&#8217;s capacity to learn diverse relationships.<\/li><li><strong>Improved Representation Learning<\/strong>: By leveraging multiple attention heads, the model can capture complex and nuanced interactions within the data, leading to more expressive representations.<\/li><li><strong>Enhanced Robustness<\/strong>: Multi-head attention enables the model to learn from different perspectives simultaneously, making it more robust to variations and uncertainties in the input data.<\/li><\/ul>\n\n\n\n<p>Now, let&#8217;s code the multi-head attention.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>class<\/strong> MultiHeadAttention(tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>Layer):\n    \"\"\" multiple heads of self-attention in parallel \"\"\"\n\n    <strong>def<\/strong> __init__(self, num_heads, head_size):\n        super(MultiHeadAttention, self)<strong>.<\/strong>__init__()\n        self<strong>.<\/strong>heads <strong>=<\/strong> [Head(head_size) <strong>for<\/strong> _ <strong>in<\/strong> range(num_heads)]\n        self<strong>.<\/strong>proj <strong>=<\/strong> tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>Dense(n_embd)\n        self<strong>.<\/strong>dropout <strong>=<\/strong> tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>Dropout(dropout)\n\n    <strong>def<\/strong> call(self, x):\n        out <strong>=<\/strong> tf<strong>.<\/strong>concat([h(x) <strong>for<\/strong> h <strong>in<\/strong> self<strong>.<\/strong>heads], axis<strong>=-<\/strong>1)\n        out <strong>=<\/strong> 
self<strong>.<\/strong>dropout(self<strong>.<\/strong>proj(out))\n        <strong>return<\/strong> out<\/pre>\n\n\n\n<h6>Initialization:<\/h6>\n\n\n\n<ul><li><code>num_heads<\/code> and <code>head_size<\/code> are parameters passed to initialize the <code>MultiHeadAttention<\/code> layer. <code>num_heads<\/code> specifies the number of attention heads to be used in parallel and <code>head_size<\/code> determines the dimensionality of the feature space within each attention head.<\/li><\/ul>\n\n\n\n<h5><code>__init__<\/code> Method:<\/h5>\n\n\n\n<ul><li>In the <code>__init__<\/code> method, we initialize the multiple attention heads by creating a list comprehension of <code>Head<\/code> instances. Each <code>Head<\/code> instance represents a single attention head with the specified <code>head_size<\/code>.<\/li><li>Additionally, we initialize a projection layer (<code>self.proj<\/code>) to aggregate the outputs of the multiple attention heads into a single representation.<\/li><li>A dropout layer (<code>self.dropout<\/code>) is also initialized to prevent overfitting during training.<\/li><\/ul>\n\n\n\n<h5><code>call<\/code> Method:<\/h5>\n\n\n\n<ul><li>The <code>call<\/code> method takes the input tensor <code>x<\/code> and processes it through each attention head in parallel.<\/li><li>For each attention head in <code>self.heads<\/code>, the input tensor <code>x<\/code> is passed through the attention head, and the outputs are concatenated along the last axis using <code>tf.concat<\/code>.<\/li><li>The concatenated output is then passed through the projection layer <code>self.proj<\/code> to combine the information from multiple heads into a single representation.<\/li><li>Finally, dropout regularization is applied to the projected output to prevent overfitting.<\/li><\/ul>\n\n\n\n<p>In summary, the <code>MultiHeadAttention<\/code> class encapsulates the functionality of performing self-attention across multiple heads in parallel, enabling the model to capture 
diverse patterns and relationships within the input sequence. It forms a critical building block of the Transformer architecture, contributing to its effectiveness in various natural language processing tasks.<\/p>\n\n\n\n<h3>Feed-forward layer<\/h3>\n\n\n\n<p>The <code>FeedForward<\/code> layer in the Transformer architecture introduces non-linearity and feature transformation, essential for capturing complex patterns in the data. Through the ReLU activation function, it models non-linearities, aiding better representation learning. By projecting input features into higher-dimensional spaces and reducing dimensionality, it enhances the model&#8217;s ability to capture intricate dependencies and structures, fostering more expressive representations. Additionally, dropout regularization within the layer prevents overfitting by encouraging robust and generalizable representations, improving the model&#8217;s performance across diverse natural language processing tasks.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>class<\/strong> FeedForward(tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>Layer):\n    \"\"\" a simple linear layer followed by a non-linearity \"\"\"\n\n    <strong>def<\/strong> __init__(self, n_embd):\n        super(FeedForward, self)<strong>.<\/strong>__init__()\n        self<strong>.<\/strong>net <strong>=<\/strong> tf<strong>.<\/strong>keras<strong>.<\/strong>Sequential([\n            tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>Dense(4 <strong>*<\/strong> n_embd),\n            tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>ReLU(),\n            tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>Dense(n_embd),\n            tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>Dropout(dropout),\n        ])\n\n    <strong>def<\/strong> call(self, x):\n        <strong>return<\/strong> self<strong>.<\/strong>net(x)<\/pre>\n\n\n\n<ul><li>The 
<code>FeedForward<\/code> layer is initialized with the parameter <code>n_embd<\/code>, which specifies the dimensionality of the input and output feature spaces, or, in other words, the size of the last dimension of the input and output tensors.<\/li><\/ul>\n\n\n\n<h5><code>__init__<\/code> Method:<\/h5>\n\n\n\n<ul><li>In the <code>__init__<\/code> method, we define a simple feedforward neural network using <code>tf.keras.Sequential<\/code>.<\/li><li>The network consists of two dense layers:<ol><li>The first dense layer (<code>tf.keras.layers.Dense(4 * n_embd)<\/code>) projects the input features into a higher-dimensional space, followed by a rectified linear unit (ReLU) activation function (<code>tf.keras.layers.ReLU()<\/code>).<\/li><li>The second dense layer (<code>tf.keras.layers.Dense(n_embd)<\/code>) reduces the dimensionality back to the original feature space.<\/li><\/ol><\/li><li>Additionally, dropout regularization is applied using <code>tf.keras.layers.Dropout(dropout)<\/code> to prevent overfitting during training.<\/li><\/ul>\n\n\n\n<h5><code>call<\/code> Method:<\/h5>\n\n\n\n<ul><li>The <code>call<\/code> method takes the input tensor <code>x<\/code> and passes it through the feedforward neural network defined in <code>self.net<\/code>. The output of the feedforward network is returned as the final result.<\/li><\/ul>\n\n\n\n<p>In summary, the <code>FeedForward<\/code> class implements a feedforward neural network layer within the Transformer architecture. It applies linear transformations followed by non-linear activations to process input features, enabling the model to capture complex patterns and relationships within the data. 
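As a rough NumPy sketch of the shapes involved (sizes illustrative, biases and dropout omitted): the first projection expands each position from n_embd to 4 * n_embd features, the ReLU introduces the non-linearity, and the second projection maps back to n_embd.

```python
import numpy as np

def feed_forward(x, W1, W2):
    """Position-wise feed-forward: expand to 4*n_embd, ReLU, project back."""
    h = np.maximum(x @ W1, 0.0)   # (T, 4*n_embd) after the ReLU non-linearity
    return h @ W2                 # back to (T, n_embd)

rng = np.random.default_rng(1)
T, n_embd = 5, 8
x = rng.random((T, n_embd))
W1 = rng.standard_normal((n_embd, 4 * n_embd))
W2 = rng.standard_normal((4 * n_embd, n_embd))
print(feed_forward(x, W1, W2).shape)  # (5, 8)
```

Note that the same two projections are applied independently at every position, which is why the layer is called position-wise.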
This layer contributes to the expressive power and effectiveness of the Transformer model in various natural language processing tasks.<\/p>\n\n\n\n<h3>Transformer Block<\/h3>\n\n\n\n<p>Now, let&#8217;s add all these components to form a transformer block<\/p>\n\n\n\n<figure class=\"wp-block-image size-large is-resized\"><img src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2024\/02\/Screenshot-2024-02-05-at-2.49.14-PM.png\" alt=\"\" class=\"wp-image-4208\" width=\"164\" height=\"235\"\/><figcaption>Transformer Block<\/figcaption><\/figure>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>class<\/strong> Block(tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>Layer):\n    \"\"\" Transformer block: communication followed by computation \"\"\"\n\n    <strong>def<\/strong> __init__(self, n_embd, n_head):\n        super(Block, self)<strong>.<\/strong>__init__()\n        head_size <strong>=<\/strong> n_embd <strong>\/\/<\/strong> n_head\n        self<strong>.<\/strong>sa <strong>=<\/strong> MultiHeadAttention(n_head, head_size)\n        self<strong>.<\/strong>ffwd <strong>=<\/strong> FeedForward(n_embd)\n        self<strong>.<\/strong>ln1 <strong>=<\/strong> tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>LayerNormalization(epsilon<strong>=<\/strong>1e-6)\n        self<strong>.<\/strong>ln2 <strong>=<\/strong> tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>LayerNormalization(epsilon<strong>=<\/strong>1e-6)\n\n    <strong>def<\/strong> call(self, x):\n        x <strong>=<\/strong> x <strong>+<\/strong> self<strong>.<\/strong>sa(self<strong>.<\/strong>ln1(x))\n        x <strong>=<\/strong> x <strong>+<\/strong> self<strong>.<\/strong>ffwd(self<strong>.<\/strong>ln2(x))\n        <strong>return<\/strong> x<\/pre>\n\n\n\n<ul><li>The <code>Block<\/code> class is initialized with two parameters: <code>n_embd<\/code> and <code>n_head<\/code>. 
<code>n_embd<\/code> specifies the dimensionality of the input and output feature spaces, and <code>n_head<\/code> determines the number of attention heads to be used in the MultiHeadAttention layer.<\/li><li>Inside the <code>__init__<\/code> method, we initialize the components of the Transformer block: <strong>MultiHeadAttention (<code>self.sa<\/code>)<\/strong>, <strong>FeedForward (<code>self.ffwd<\/code>)<\/strong>, and <strong>Layer Normalization (<code>self.ln1<\/code>, <code>self.ln2<\/code>)<\/strong>, represented by <strong>Add&amp;Norm <\/strong>in the above diagram.<\/li><li>The <code>call<\/code> method of the <code>Block<\/code> class processes the input tensor <code>x<\/code> through a series of transformations. Firstly, the input tensor is normalized by Layer Normalization (<code>self.ln1<\/code>) and then passed through the MultiHeadAttention layer (<code>self.sa<\/code>); the result is added back to the original input tensor (a residual connection) to facilitate communication between different positions in the sequence. Note that the code applies normalization <em>before<\/em> each sub-layer (the &#8220;pre-norm&#8221; variant used in GPT-style models), rather than after, as the Add&amp;Norm blocks in the diagram suggest. Subsequently, the augmented tensor from the previous step is normalized again (<code>self.ln2<\/code>) and passed through the FeedForward layer (<code>self.ffwd<\/code>). 
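The wiring of the call method — normalize, apply the sub-layer, add the result back — can be sketched in NumPy with toy stand-ins for the attention and feed-forward sub-layers (sizes and sub-layers illustrative):

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Normalize each position's features to zero mean and unit variance
    return (x - x.mean(axis=-1, keepdims=True)) / (x.std(axis=-1, keepdims=True) + eps)

def block(x, sa, ffwd):
    """Pre-norm Transformer block: x + sublayer(layer_norm(x)), applied twice."""
    x = x + sa(layer_norm(x))     # communication (attention) + residual add
    x = x + ffwd(layer_norm(x))   # computation (feed-forward) + residual add
    return x

x = np.random.default_rng(2).random((5, 8))
out = block(x, sa=lambda h: 0.1 * h, ffwd=lambda h: 0.1 * h)  # toy sub-layers
print(out.shape)  # (5, 8)
```

The residual additions give gradients a direct path through the stack, which is what makes stacking many such blocks trainable.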
The output of the feedforward computation is again added to the augmented tensor.<\/li><\/ul>\n\n\n\n<h3>GPT<\/h3>\n\n\n\n<p>Now, as we have designed the components of GPT, let&#8217;s stack them together to build our GPT.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>class<\/strong> GPTLanguageModel(tf<strong>.<\/strong>keras<strong>.<\/strong>Model):\n\n    <strong>def<\/strong> __init__(self):\n        super(GPTLanguageModel, self)<strong>.<\/strong>__init__()\n        <em># each token directly reads off the logits for the next token from a lookup table<\/em>\n        self<strong>.<\/strong>token_embedding_table <strong>=<\/strong> tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>Embedding(vocab_size, n_embd)\n        self<strong>.<\/strong>position_embedding_table <strong>=<\/strong> tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>Embedding(block_size, n_embd)\n        self<strong>.<\/strong>blocks <strong>=<\/strong> tf<strong>.<\/strong>keras<strong>.<\/strong>Sequential([Block(n_embd, n_head<strong>=<\/strong>n_head) <strong>for<\/strong> _ <strong>in<\/strong> range(n_layer)])\n        self<strong>.<\/strong>ln_f <strong>=<\/strong> tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>LayerNormalization(epsilon<strong>=<\/strong>1e-6)\n        self<strong>.<\/strong>lm_head <strong>=<\/strong> tf<strong>.<\/strong>keras<strong>.<\/strong>layers<strong>.<\/strong>Dense(vocab_size, kernel_initializer<strong>=<\/strong>'normal', bias_initializer<strong>=<\/strong>'zeros')<\/pre>\n\n\n\n<p>The <code>GPTLanguageModel<\/code> class defines a language model based on the Generative Pre-trained Transformer (GPT) architecture.<\/p>\n\n\n\n<h5><code>__init__<\/code> Method:<\/h5>\n\n\n\n<ul><li>The __init__ method initializes the components necessary for the GPT language model.<\/li><li><strong><code>self.token_embedding_table<\/code><\/strong>: This layer converts input tokens into dense vectors 
of fixed size (embedding vectors). Each token is mapped to a unique embedding vector in a lookup table.<\/li><li><strong><code>self.position_embedding_table<\/code><\/strong>: This layer generates position encodings that represent the position of each token in the input sequence.<\/li><li><strong><code>self.blocks<\/code><\/strong>: A sequence of Transformer blocks responsible for processing the input sequence. Each block comprises multi-head self-attention mechanisms and feedforward neural networks.<\/li><li><strong><code>self.ln_f<\/code><\/strong>: Applies layer normalization to the final hidden states of the Transformer blocks. It stabilizes the training process by ensuring consistent distributions of hidden states across layers.<\/li><li><strong><code>self.lm_head<\/code><\/strong>: A dense layer that maps the final hidden states of the Transformer blocks to <strong>logits<\/strong> over the vocabulary. Logits represent unnormalized probabilities of each token in the vocabulary being the next token in the sequence.<\/li><\/ul>\n\n\n\n<p>Let&#8217;s see these components in the transformer architecture.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img width=\"874\" height=\"485\" src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2024\/02\/Screenshot-2024-02-06-at-4.35.08-PM.png\" alt=\"\" class=\"wp-image-4212\"\/><\/figure>\n\n\n\n<p><strong>Note<\/strong>:- The <strong><code>self.ln_f<\/code><\/strong> is not explicitly shown in the image.<\/p>\n\n\n\n<p>Now let&#8217;s write the method which will perform the forward pass during our training phase.<\/p>\n\n\n\n<h5><code>call<\/code> Method:<\/h5>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>def<\/strong> call(self, idx, targets<strong>=<\/strong><strong>None<\/strong>):\n        B, T <strong>=<\/strong> idx<strong>.<\/strong>shape\n\n        <em># idx and targets are both (B,T) tensor of integers<\/em>\n        tok_emb <strong>=<\/strong> 
self<strong>.<\/strong>token_embedding_table(idx)  <em># (B,T,C)<\/em>\n        pos_emb <strong>=<\/strong> self<strong>.<\/strong>position_embedding_table(tf<strong>.<\/strong>range(T, dtype<strong>=<\/strong>tf<strong>.<\/strong>float32))  <em># (T,C)<\/em>\n        x <strong>=<\/strong> tok_emb <strong>+<\/strong> pos_emb  <em># (B,T,C)<\/em>\n        x <strong>=<\/strong> self<strong>.<\/strong>blocks(x)  <em># (B,T,C)<\/em>\n        x <strong>=<\/strong> self<strong>.<\/strong>ln_f(x)  <em># (B,T,C)<\/em>\n        logits <strong>=<\/strong> self<strong>.<\/strong>lm_head(x)  <em># (B,T,vocab_size)<\/em>\n\n        <strong>if<\/strong> targets <strong>is<\/strong> <strong>None<\/strong>:\n            loss <strong>=<\/strong> <strong>None<\/strong>\n        <strong>else<\/strong>:\n            B, T, C <strong>=<\/strong> logits<strong>.<\/strong>shape\n            logits <strong>=<\/strong> tf<strong>.<\/strong>reshape(logits, (B <strong>*<\/strong> T, C))\n            targets <strong>=<\/strong> tf<strong>.<\/strong>reshape(targets, (B <strong>*<\/strong> T,))\n            loss <strong>=<\/strong> tf<strong>.<\/strong>keras<strong>.<\/strong>losses<strong>.<\/strong>SparseCategoricalCrossentropy(from_logits<strong>=<\/strong><strong>True<\/strong>)(targets, logits)\n\n        <strong>return<\/strong> logits, loss\n<\/pre>\n\n\n\n<ul><li>The call method takes idx and targets as input.<ul><li><code>idx<\/code> represents the input tensor containing integer indices of tokens. It has shape (batch_size, sequence_length).<\/li><li><code>targets<\/code> represents the target tensor containing the indices of the tokens to be predicted. 
It has the same shape as <code>idx<\/code>.<\/li><\/ul><\/li><li><code>tok_emb<\/code> retrieves the token embeddings for the input indices from the token embedding table.<\/li><li><code>pos_emb<\/code> generates position embeddings for each position in the input sequence using the position embedding table.<\/li><li><strong>x = tok_emb + pos_emb:<\/strong> The token and position embeddings are added together to incorporate both token and positional information into the input representation <code>x<\/code>.<\/li><li><strong>x = self.blocks(x):<\/strong> Then the input representation <code>x<\/code> is passed through the Transformer blocks (<code>self.blocks<\/code>), which process the sequence and extract relevant features.<\/li><li><strong>x = self.ln_f(x):<\/strong> Layer normalization (<code>self.ln_f<\/code>) is applied to stabilize the training process by normalizing the hidden states of the Transformer blocks.<\/li><li><strong>logits = self.lm_head(x):<\/strong> The final hidden states are passed through the output layer (<code>self.lm_head<\/code>), which generates logits for each token in the vocabulary.<\/li><li>If <code>targets<\/code> are provided, the method computes the loss using the sparse categorical cross-entropy loss function. It reshapes the logits and targets tensors to match the format required by the loss function.<\/li><li>If <code>targets<\/code> are not provided, the loss is set to None. That means we are not training the model but using it for prediction\/text generation.<\/li><li>The method returns the logits and the computed loss (if applicable).<\/li><\/ul>\n\n\n\n<p>Now that we&#8217;ve explored the inner workings of the call method, let&#8217;s dive into another captivating feature of our Generative Pre-trained Transformer (GPT): the generate method. While the call method focuses on predicting the next character given a sequence, the generate method takes it a step further by generating entire sequences of text. 
It relies on the call method internally to predict each subsequent character, iteratively building the complete sequence.<\/p>\n\n\n\n<h5><code>generate<\/code> Method:<\/h5>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>def<\/strong> generate(self, idx, max_new_tokens):\n        <em># idx is (B, T) array of indices in the current context<\/em>\n        <strong>for<\/strong> _ <strong>in<\/strong> range(max_new_tokens):\n            <em># crop idx to the last block_size tokens<\/em>\n            idx_cond <strong>=<\/strong> idx[:, <strong>-<\/strong>block_size:]\n            <em># get the predictions<\/em>\n            logits, loss <strong>=<\/strong> self(idx_cond)\n            <em># focus only on the last time step<\/em>\n            logits <strong>=<\/strong> logits[:, <strong>-<\/strong>1, :]  <em># becomes (B, C)<\/em>\n            <em># apply softmax to get probabilities<\/em>\n            probs <strong>=<\/strong> tf<strong>.<\/strong>nn<strong>.<\/strong>softmax(logits, axis<strong>=-<\/strong>1)  <em># (B, C)<\/em>\n            <em># sample from the distribution<\/em>\n            idx_next <strong>=<\/strong> tf<strong>.<\/strong>random<strong>.<\/strong>categorical(tf<strong>.<\/strong>math<strong>.<\/strong>log(probs), num_samples<strong>=<\/strong>1, dtype<strong>=<\/strong>tf<strong>.<\/strong>int64)  <em># (B, 1)<\/em>\n            <em># append sampled index to the running sequence<\/em>\n            idx <strong>=<\/strong> tf<strong>.<\/strong>concat([idx, idx_next], axis<strong>=<\/strong>1)  <em># (B, T+1)<\/em>\n        <strong>return<\/strong> idx<\/pre>\n\n\n\n<ul><li><strong>for _ in range(max_new_tokens):<\/strong> The method iterates for a specified number of <code>max_new_tokens<\/code> to generate new tokens based on the provided input sequence <code>idx<\/code>. 
max_new_tokens tells us about the number of tokens we want our GPT to generate.<\/li><li><strong>idx_cond = idx[:, -block_size:]:<\/strong> Then it extracts the last <code>block_size<\/code> tokens from the input sequence <code>idx<\/code> to ensure that the model generates new tokens based on the most recent context. This cropping operation ensures that the model&#8217;s predictions are influenced by the most recent tokens.<\/li><li><strong>logits, loss = self(idx_cond):<\/strong> Then the method invokes the model&#8217;s <code>call<\/code> method with the cropped input sequence <code>idx_cond<\/code> to obtain predictions for the next token in the sequence. The model generates logits, which are unnormalized probabilities, for each token in the vocabulary.<\/li><li><strong>logits = logits[:, -1, :]:<\/strong> It selects only the logits corresponding to the last time step of the sequence, representing predictions for the next token to be generated. This step ensures that the model focuses on predicting the next token based on the most recent context.<\/li><li><strong>probs = tf.nn.softmax(logits, axis=-1):<\/strong> Softmax activation is applied to the logits to convert them into probabilities. This softmax operation ensures that the model&#8217;s predictions are transformed into a probability distribution over the vocabulary, indicating the likelihood of each token being the next token in the sequence.<\/li><li><strong>idx_next = tf.random.categorical(tf.math.log(probs), num_samples=1, dtype=tf.int64):<\/strong> It samples tokens from the probability distribution using the <code>tf.random.categorical<\/code> function, which randomly selects one token index from the probability distribution for each sequence in the batch. 
Since <code>tf.random.categorical<\/code> expects logits rather than probabilities, <code>tf.math.log(probs)<\/code> converts the probabilities back into log-space before sampling.<\/li><li><strong>idx = tf.concat([idx, idx_next], axis=1):<\/strong> Then the sampled token indices are appended to the original input sequence <code>idx<\/code>, extending the sequence with the newly generated tokens.<\/li><li>This process repeats for each iteration of the loop, generating new tokens until the desired number of tokens (<code>max_new_tokens<\/code>) is reached.<\/li><li>Finally, the method returns the updated input sequence <code>idx<\/code>, which now includes the newly generated tokens, representing an extended sequence with additional context and predictions for future tokens.<\/li><\/ul>\n\n\n\n<p>In summary, the Generative Pre-trained Transformer (GPT) architecture employs advanced techniques like multi-head self-attention, feedforward neural networks, and layer normalization to understand and generate natural language text. With token and position embedding tables and a stack of Transformer blocks, GPT captures complex language patterns effectively.<\/p>\n\n\n\n<p>Now, it&#8217;s time to train the GPT model on relevant datasets, fine-tune its parameters, and explore its capabilities across different tasks and domains. We&#8217;ll use the Shakespeare dataset to train our GPT. This means our model will learn to generate text in the style of Shakespeare&#8217;s writings. 
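<\/p>\n\n\n\n<p>To make the sampling step inside <code>generate<\/code> concrete, here is a pure-Python sketch of softmax followed by a categorical draw. The logits are made-up illustrative values over a tiny 3-token vocabulary, not outputs of the real model:<\/p>

```python
import math
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Made-up logits over a tiny 3-token vocabulary.
logits = [2.0, 1.0, 0.1]

# Softmax: exponentiate, then normalize so the scores sum to 1.
exps = [math.exp(l) for l in logits]
probs = [e / sum(exps) for e in exps]

# Categorical sampling, as tf.random.categorical does per sequence:
# higher-probability tokens are drawn more often, but any token can appear.
next_token = random.choices(range(len(probs)), weights=probs, k=1)[0]
print(next_token)
```

<p>In the real model this happens once per generated token, over a <code>vocab_size<\/code>-wide distribution instead of three entries.<\/p>\n\n\n\n<p>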
You can find the dataset at <a href=\"https:\/\/github.com\/cloudxlab\/GPT-from-scratch\/blob\/master\/input.txt\">https:\/\/github.com\/cloudxlab\/GPT-from-scratch\/blob\/master\/input.txt<\/a>.<\/p>\n\n\n\n<p>Let&#8217;s start with loading the dataset:<\/p>\n\n\n\n<h3>Loading the data<\/h3>\n\n\n\n<pre class=\"wp-block-preformatted\"><strong>with<\/strong> open('input.txt', 'r', encoding<strong>=<\/strong>'utf-8') <strong>as<\/strong> f:\n    text <strong>=<\/strong> f<strong>.<\/strong>read()<\/pre>\n\n\n\n<p>Now, let&#8217;s create the character mappings so that we can convert the characters into numbers before feeding them to the machine.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">chars <strong>=<\/strong> sorted(list(set(text)))\nvocab_size <strong>=<\/strong> len(chars)\nstoi <strong>=<\/strong> {ch: i <strong>for<\/strong> i, ch <strong>in<\/strong> enumerate(chars)}\nitos <strong>=<\/strong> {i: ch <strong>for<\/strong> i, ch <strong>in<\/strong> enumerate(chars)}\nencode <strong>=<\/strong> <strong>lambda<\/strong> s: [stoi[c] <strong>for<\/strong> c <strong>in<\/strong> s]\ndecode <strong>=<\/strong> <strong>lambda<\/strong> l: ''<strong>.<\/strong>join([itos[i] <strong>for<\/strong> i <strong>in<\/strong> l])<\/pre>\n\n\n\n<p>The above code initializes dictionaries for character-to-index and index-to-character mappings:<\/p>\n\n\n\n<ul><li>It extracts unique characters from the text and sorts them alphabetically.<\/li><li>Two dictionaries are created:<ul><li><code>stoi<\/code>: Maps characters to indices.<\/li><li><code>itos<\/code>: Maps indices to characters.<\/li><\/ul><\/li><li>Encoding (<code>encode<\/code>) and decoding (<code>decode<\/code>) functions are defined to convert between strings and lists of indices.<\/li><\/ul>\n\n\n\n<p>Now let&#8217;s divide our dataset into training and validation sets.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><em># Train and validation splits<\/em>\ndata <strong>=<\/strong> tf<strong>.<\/strong>constant(encode(text), 
dtype<strong>=<\/strong>tf<strong>.<\/strong>int64)\nn <strong>=<\/strong> int(0.9 <strong>*<\/strong> len(data))\ntrain_data <strong>=<\/strong> data[:n]\nval_data <strong>=<\/strong> data[n:]<\/pre>\n\n\n\n<p>To streamline our data processing, we&#8217;ll break it down into manageable batches. This approach helps us efficiently handle large datasets without overwhelming our system resources. Let&#8217;s write a function to load our data in batches, enabling us to feed it into our model systematically and effectively.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><em># Data loading into batches<\/em>\n<strong>def<\/strong> get_batch(split):\n    data_split <strong>=<\/strong> train_data <strong>if<\/strong> split <strong>==<\/strong> 'train' <strong>else<\/strong> val_data\n    ix <strong>=<\/strong> tf<strong>.<\/strong>random<strong>.<\/strong>uniform(shape<strong>=<\/strong>(batch_size,), maxval<strong>=<\/strong>len(data_split) <strong>-<\/strong> block_size, dtype<strong>=<\/strong>tf<strong>.<\/strong>int32)\n    x <strong>=<\/strong> tf<strong>.<\/strong>stack([data_split[i:i<strong>+<\/strong>block_size] <strong>for<\/strong> i <strong>in<\/strong> ix])\n    y <strong>=<\/strong> tf<strong>.<\/strong>stack([data_split[i<strong>+<\/strong>1:i<strong>+<\/strong>block_size<strong>+<\/strong>1] <strong>for<\/strong> i <strong>in<\/strong> ix])\n    <strong>return<\/strong> x, y<\/pre>\n\n\n\n<p>Now that we have our dataset ready, we need a function to calculate the loss. This loss function helps us understand how well our model is performing during training. By evaluating the loss, we can adjust our model&#8217;s weights using the backpropagation algorithm, which fine-tunes its parameters to minimize the loss and improve performance over time. 
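<\/p>\n\n\n\n<p>Before writing it, here is the loss itself worked by hand in pure Python (with made-up logits, not values from the model): sparse categorical cross-entropy is just the negative log of the probability that the softmax assigns to the correct token.<\/p>

```python
import math

# Suppose a 4-character vocabulary; the model emits these (made-up) logits
# for the next character, and the correct next character has index 2.
logits = [2.0, 1.0, 0.1, -1.0]
target = 2

# Softmax turns the logits into a probability distribution.
exps = [math.exp(l) for l in logits]
probs = [e / sum(exps) for e in exps]

# Cross-entropy: the loss is large when the model puts little probability
# on the correct character, and near 0 when the model is confident and right.
loss = -math.log(probs[target])
print(round(loss, 4))  # ≈ 2.3493
```

<p>This is the quantity that <code>SparseCategoricalCrossentropy(from_logits=True)<\/code> computes in the <code>call<\/code> method, averaged over all <code>B * T<\/code> positions in the batch.<\/p>\n\n\n\n<p>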
Let&#8217;s craft a simple yet effective function to calculate the loss for our model.<\/p>\n\n\n\n<h3>Calculating Loss<\/h3>\n\n\n\n<pre class=\"wp-block-preformatted\"><em># Calculating loss of the model<\/em>\n<strong>def<\/strong> estimate_loss(model):\n    out <strong>=<\/strong> {}\n    model<strong>.<\/strong>trainable <strong>=<\/strong> <strong>False<\/strong>\n    <strong>for<\/strong> split <strong>in<\/strong> ['train', 'val']:\n        losses <strong>=<\/strong> tf<strong>.<\/strong>TensorArray(tf<strong>.<\/strong>float32, size<strong>=<\/strong>eval_iters)\n        <strong>for<\/strong> k <strong>in<\/strong> range(eval_iters):\n            X, Y <strong>=<\/strong> get_batch(split)\n            logits, loss <strong>=<\/strong> model(X, Y)\n            losses <strong>=<\/strong> losses<strong>.<\/strong>write(k, loss)\n        out[split] <strong>=<\/strong> losses<strong>.<\/strong>stack()<strong>.<\/strong>numpy()<strong>.<\/strong>mean()\n    model<strong>.<\/strong>trainable <strong>=<\/strong> <strong>True<\/strong>\n    <strong>return<\/strong> out<\/pre>\n\n\n\n<ul><li>The function starts by initializing an empty dictionary named <code>out<\/code> to store the loss values for both the training and validation splits.<\/li><li>It sets the <code>trainable<\/code> attribute of the model to <code>False<\/code> to ensure that the model&#8217;s parameters are not updated during the loss estimation process.<\/li><li>The function iterates over two splits: &#8216;train&#8217; and &#8216;val&#8217;, representing the training and validation datasets, respectively.<\/li><li>Within each split, the function iterates <code>eval_iters<\/code> times. 
In each iteration, it retrieves a batch of input-output pairs (X, Y) using the <code>get_batch(split)<\/code> function.<\/li><li>For each batch, the model is called with inputs X and targets Y to obtain the logits and the corresponding loss.<\/li><li>The loss value for each iteration is stored in a TensorFlow TensorArray named <code>losses<\/code>.<\/li><li>Once all iterations for a split are completed, the mean loss value across all iterations is computed using the <code>numpy().mean()<\/code> method, and it is stored in the <code>out<\/code> dictionary with the corresponding split key.<\/li><li>After iterating over both &#8216;train&#8217; and &#8216;val&#8217; splits, the model&#8217;s <code>trainable<\/code> attribute is set back to <code>True<\/code> to allow further training if needed.<\/li><li>Finally, the function returns the dictionary <code>out<\/code>, containing the average loss values for both the training and validation splits.<\/li><\/ul>\n\n\n\n<h3>Training the model<\/h3>\n\n\n\n<p>Now, let&#8217;s define the hyperparameters needed to configure our model training.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><em># hyperparameters<\/em>\nbatch_size <strong>=<\/strong> 64\nblock_size <strong>=<\/strong> 256\nmax_iters <strong>=<\/strong> 5000\neval_interval <strong>=<\/strong> 500\nlearning_rate <strong>=<\/strong> 3e-4\neval_iters <strong>=<\/strong> 200\nn_embd <strong>=<\/strong> 384\nn_head <strong>=<\/strong> 6\nn_layer <strong>=<\/strong> 6\ndropout <strong>=<\/strong> 0.2\n\n<em># Set random seed<\/em>\ntf<strong>.<\/strong>random<strong>.<\/strong>set_seed(1337)<\/pre>\n\n\n\n<p>Now, let&#8217;s implement the training loop for our model. This loop iterates through the dataset, feeding batches of data to the model for training. Within each iteration, the model calculates the loss and updates its weights using the backpropagation algorithm. 
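<\/p>\n\n\n\n<p>The update rule at the heart of that loop can be sketched in miniature. This toy example uses a single hypothetical parameter and a hand-picked quadratic loss, not the real network:<\/p>

```python
# Gradient descent in miniature: minimize the toy loss (w - 3)^2,
# whose gradient with respect to w is 2 * (w - 3).
learning_rate = 0.1
w = 0.0  # initial parameter value

for step in range(50):
    grad = 2 * (w - 3)          # the role tape.gradient plays for the real model
    w -= learning_rate * grad   # the role optimizer.apply_gradients plays

print(round(w, 3))  # converges to the minimum at w = 3.0
```

<p>Adam adds per-parameter adaptive step sizes on top of this basic rule, but the compute-gradient-then-step structure of each training iteration is the same.<\/p>\n\n\n\n<p>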
By repeating this process over multiple epochs, our model gradually learns to make accurate predictions and improve its performance. Let&#8217;s dive in and code the training loop for our model.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><em>#Training the model. GPU is recommended for training.<\/em>\n\nmodel <strong>=<\/strong> GPTLanguageModel()\noptimizer <strong>=<\/strong> tf<strong>.<\/strong>keras<strong>.<\/strong>optimizers<strong>.<\/strong>Adam(learning_rate)\n\n<strong>for<\/strong> iter <strong>in<\/strong> range(max_iters):\n\n    <em># every once in a while evaluate the loss on train and val sets<\/em>\n    <strong>if<\/strong> iter <strong>%<\/strong> eval_interval <strong>==<\/strong> 0 <strong>or<\/strong> iter <strong>==<\/strong> max_iters <strong>-<\/strong> 1:\n        losses <strong>=<\/strong> estimate_loss(model)\n        print(f\"step {iter}: train loss {losses['train']:.4f}, val loss {losses['val']:.4f}\")\n\n    <em># sample a batch of data<\/em>\n    xb, yb <strong>=<\/strong> get_batch('train')\n\n    <em># evaluate the loss<\/em>\n    <strong>with<\/strong> tf<strong>.<\/strong>GradientTape() <strong>as<\/strong> tape:\n        logits, loss <strong>=<\/strong> model(xb, yb)\n\n    grads <strong>=<\/strong> tape<strong>.<\/strong>gradient(loss, model<strong>.<\/strong>trainable_variables)\n    optimizer<strong>.<\/strong>apply_gradients(zip(grads, model<strong>.<\/strong>trainable_variables))<\/pre>\n\n\n\n<ul><li>The <code>GPTLanguageModel<\/code> class is instantiated, creating an instance of the GPT language model.<\/li><li>Then an Adam optimizer is initialized with the specified learning rate (<code>learning_rate<\/code>).<\/li><li>The training loop iterates over a specified number of iterations (<code>max_iters<\/code>). 
During each iteration, the model&#8217;s performance is periodically evaluated on both the training and validation datasets.<\/li><li>In each iteration, a batch of data (<code>xb<\/code>, <code>yb<\/code>) is sampled from the training dataset using the <code>get_batch<\/code> function. This function retrieves input-output pairs for training.<\/li><li>The loss is computed by forward-passing the input batch (<code>xb<\/code>) through the model (<code>model<\/code>) and comparing the predictions with the actual targets (<code>yb<\/code>).<\/li><li>A gradient tape (<code>tf.GradientTape<\/code>) records operations for automatic differentiation, enabling the computation of gradients with respect to trainable variables.<\/li><li>Gradients of the loss with respect to the trainable variables are computed using <code>tape.gradient<\/code>.<\/li><li>The optimizer (<code>optimizer<\/code>) then applies these gradients to update the model&#8217;s trainable parameters using the Adam optimization algorithm.<\/li><\/ul>\n\n\n\n<p>With the completion of the training loop, our model has been trained using gradient descent optimization. Through iterations of parameter updates, it has learned to minimize the loss function, improving its ability to generate coherent and contextually relevant text. 
This training process equips the model with the knowledge and understanding necessary to perform various natural language processing tasks effectively.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">step 0: train loss 4.5158, val loss 4.5177\nstep 500: train loss 1.9006, val loss 2.0083\nstep 1000: train loss 1.4417, val loss 1.6584\nstep 1500: train loss 1.2854, val loss 1.5992\nstep 2000: train loss 1.1676, val loss 1.5936\nstep 2500: train loss 1.0419, val loss 1.6674\nstep 3000: train loss 0.9076, val loss 1.8094\nstep 3500: train loss 0.7525, val loss 2.0218\nstep 4000: train loss 0.6012, val loss 2.3162\nstep 4500: train loss 0.4598, val loss 2.6565\nstep 4999: train loss 0.3497, val loss 2.9876<\/pre>\n\n\n\n<p><br>From the provided training log, we can observe several key insights:<\/p>\n\n\n\n<ol><li><strong>Training Progress<\/strong>: As the training progresses, both the training loss and validation loss decrease gradually. This indicates that our model is learning and improving its performance over time.<\/li><li><strong>Overfitting<\/strong>: Towards the end of the training process, we notice a discrepancy between the training loss and the validation loss. While the training loss continues to decrease, the validation loss starts to increase after a certain point. This divergence suggests that our model may be overfitting to the training data, performing well on the training set but struggling to generalize to unseen data represented by the validation set.<\/li><li><strong>Model Performance<\/strong>: The final validation loss provides insight into the overall performance of our model. A lower validation loss indicates better generalization and performance on unseen data. 
In this case, the validation loss seems relatively high, suggesting that our model may not be performing optimally.<\/li><\/ol>\n\n\n\n<p>Now, it&#8217;s important to note that the observed behavior in the training log, including the increasing validation loss towards the end of training, was intentionally introduced to highlight the phenomenon of overfitting. Overfitting occurs when a model learns to perform well on the training data but struggles to generalize to unseen data.<\/p>\n\n\n\n<p>As part of your learning journey, it&#8217;s now your homework to address this issue and improve the model&#8217;s performance. You can explore various strategies to combat overfitting, such as adjusting the model architecture, incorporating regularization techniques, or increasing the diversity of the training data.<\/p>\n\n\n\n<p>We have saved the weights of the model after 5000 iterations. You can load these directly to skip the training phase, as training can take a long time without a <strong>GPU<\/strong>. The weights are present at: <a href=\"https:\/\/github.com\/cloudxlab\/GPT-from-scratch\/blob\/master\/gpt_model_weights.h5\">https:\/\/github.com\/cloudxlab\/GPT-from-scratch\/blob\/master\/gpt_model_weights.h5<\/a>.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><em># Initializing model with pre-trained weights. 
Use this if you don't want to re-train the model.<\/em>\nmodel <strong>=<\/strong> GPTLanguageModel()\n<em># Build the model's variables with a dummy forward pass before loading the weights.<\/em>\ndummy_input <strong>=<\/strong> tf<strong>.<\/strong>constant([[0]], dtype<strong>=<\/strong>tf<strong>.<\/strong>int32)\nmodel(dummy_input)\nmodel<strong>.<\/strong>load_weights('gpt_model_weights.h5')<\/pre>\n\n\n\n<p>Now we will generate new text using the model.<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\"><em># generate from the model<\/em>\ncontext <strong>=<\/strong> tf<strong>.<\/strong>zeros((1, 1), dtype<strong>=<\/strong>tf<strong>.<\/strong>int64)\ngenerated_sequence <strong>=<\/strong> model<strong>.<\/strong>generate(context, max_new_tokens<strong>=<\/strong>500)<strong>.<\/strong>numpy()\nprint(decode(generated_sequence[0]))<\/pre>\n\n\n\n<ul><li>An initial context is set up using <code>tf.zeros((1, 1), dtype=tf.int64)<\/code>. This initializes a tensor of shape <code>(1, 1)<\/code> with all elements set to zero, indicating the starting point for text generation.<\/li><li>The <code>generate<\/code> method of the trained model (<code>model<\/code>) is called to generate new text sequences based on the provided initial context. The <code>max_new_tokens<\/code> parameter specifies the maximum number of new tokens to generate in the text sequence.<\/li><li>The generated sequence is then decoded using a decoding function (<code>decode<\/code>) to convert the sequence of token IDs into human-readable text.<\/li><\/ul>\n\n\n\n<p>So, the output is:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">Now keeps.\nCan I know should thee were trans--I protest,\nTo betwixt the Samart's the mutine.\n\nCAMILLO:\nHa, madam!\nSir, you!\nYou pitiff now, but you are worth aboards,\nBetwixt the right of your ox adversaries,\nOr let our suddenly in all severaltius free\nThan Bolingbroke to England. 
Mercutio,\nEver justice with his praisence, he was proud;\nWhen she departed by his fortune like a greer,\nAnd in the gentle king fair hateful man.\nFarewell; so old Cominius, away; I rather,\nTo you are therefore be behold\n<\/pre>\n\n\n\n<p>The generated text exhibits a level of coherence and structure reminiscent of Shakespearean language, suggesting that the model has effectively learned patterns from the Shakespearean text data it was trained on. The text includes elements such as archaic language, poetic imagery, and character interactions, which are characteristic of Shakespeare&#8217;s writing style.<\/p>\n\n\n\n<p>Overall, the generated text demonstrates that the model is performing well in capturing the stylistic nuances and linguistic patterns present in the training data. It successfully produces text that resembles the language and tone of Shakespeare&#8217;s works, indicating that the model has learned to generate contextually relevant and plausible sequences of text.<\/p>\n\n\n\n<p>You can save the model weights using:<\/p>\n\n\n\n<pre class=\"wp-block-preformatted\">model<strong>.<\/strong>save_weights('gpt_model_weights.h5')<\/pre>\n\n\n\n<p>In conclusion, we have delved into the architecture and training process of the Generative Pre-trained Transformer (GPT) model. We explored the intricacies of its components, and gained insights into its training dynamics. Through our journey, we identified challenges such as overfitting and discussed strategies to address them.<\/p>\n\n\n\n<p>As we conclude, it&#8217;s important to remember that mastering machine learning models like GPT requires a combination of theoretical understanding, practical experimentation, and iterative refinement. 
By diving into the code, dataset, and pre-trained weights available at <a href=\"https:\/\/github.com\/cloudxlab\/GPT-from-scratch\/blob\/master\">https:\/\/github.com\/cloudxlab\/GPT-from-scratch\/blob\/master<\/a>, you can further explore, experiment, and enhance your understanding of GPT and its applications. Embrace the learning process, and let curiosity guide you as you continue your exploration of the fascinating world of natural language processing and machine learning.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large is-resized\"><a href=\"https:\/\/cloudxlab.com\/course\/204\/hands-on-generative-ai-with-langchain-and-python\"><img src=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2024\/03\/Screenshot-2024-02-29-at-1.55.59-PM.png\" alt=\"\" class=\"wp-image-4226\" width=\"636\" height=\"231\"\/><\/a><figcaption>Check out our course on Hands-on generative AI with langchain and Python.<\/figcaption><\/figure><\/div>\n","protected":false},"excerpt":{"rendered":"<p>In a world where technology constantly pushes the boundaries of human imagination, one phenomenon stands out: ChatGPT. You&#8217;ve probably experienced its magic, admired how it can chat meaningfully, and maybe even wondered how it all works inside. 
ChatGPT is more than just a program; it&#8217;s a gateway to the realms of artificial intelligence, showcasing the &hellip; <a href=\"https:\/\/cloudxlab.com\/blog\/building-your-own-chatgpt-from-scratch\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;How to build\/code ChatGPT from scratch?&#8221;<\/span><\/a><\/p>\n","protected":false},"author":36,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":[],"categories":[1],"tags":[218,217,215,219],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v16.2 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>How to build\/code ChatGPT from scratch? | CloudxLab Blog<\/title>\n<meta name=\"description\" content=\"We&#039;ll explore the fundamentals of ML, including how machines generate words, the GPT architecture and will then code our own ChatGPT from scratch.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/cloudxlab.com\/blog\/building-your-own-chatgpt-from-scratch\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"How to build\/code ChatGPT from scratch? 
| CloudxLab Blog\" \/>\n<meta property=\"og:description\" content=\"We&#039;ll explore the fundamentals of ML, including how machines generate words, the GPT architecture and will then code our own ChatGPT from scratch.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/cloudxlab.com\/blog\/building-your-own-chatgpt-from-scratch\/\" \/>\n<meta property=\"og:site_name\" content=\"CloudxLab Blog\" \/>\n<meta property=\"article:publisher\" content=\"https:\/\/www.facebook.com\/cloudxlab\" \/>\n<meta property=\"article:published_time\" content=\"2024-02-08T09:37:37+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2025-11-11T20:00:51+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2024\/02\/Screenshot-2024-02-01-at-11.31.26-AM-1.png\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@CloudxLab\" \/>\n<meta name=\"twitter:site\" content=\"@CloudxLab\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\">\n\t<meta name=\"twitter:data1\" content=\"43 minutes\">\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebSite\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/#website\",\"url\":\"https:\/\/cloudxlab.com\/blog\/\",\"name\":\"CloudxLab Blog\",\"description\":\"Learn AI, Machine Learning, Deep Learning, Devops &amp; Big Data\",\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":\"https:\/\/cloudxlab.com\/blog\/?s={search_term_string}\",\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/building-your-own-chatgpt-from-scratch\/#primaryimage\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2024\/02\/Screenshot-2024-02-01-at-11.31.26-AM-1.png\",\"contentUrl\":\"https:\/\/blog.cloudxlab.com\/wp-content\/uploads\/2024\/02\/Screenshot-2024-02-01-at-11.31.26-AM-1.png\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/building-your-own-chatgpt-from-scratch\/#webpage\",\"url\":\"https:\/\/cloudxlab.com\/blog\/building-your-own-chatgpt-from-scratch\/\",\"name\":\"How to build\/code ChatGPT from scratch? 
| CloudxLab Blog\",\"isPartOf\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/building-your-own-chatgpt-from-scratch\/#primaryimage\"},\"datePublished\":\"2024-02-08T09:37:37+00:00\",\"dateModified\":\"2025-11-11T20:00:51+00:00\",\"author\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/#\/schema\/person\/4438d405318314ec50940bde93ef548a\"},\"description\":\"We'll explore the fundamentals of ML, including how machines generate words, the GPT architecture and will then code our own ChatGPT from scratch.\",\"breadcrumb\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/building-your-own-chatgpt-from-scratch\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/cloudxlab.com\/blog\/building-your-own-chatgpt-from-scratch\/\"]}]},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/building-your-own-chatgpt-from-scratch\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"item\":{\"@type\":\"WebPage\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/\",\"url\":\"https:\/\/cloudxlab.com\/blog\/\",\"name\":\"Home\"}},{\"@type\":\"ListItem\",\"position\":2,\"item\":{\"@id\":\"https:\/\/cloudxlab.com\/blog\/building-your-own-chatgpt-from-scratch\/#webpage\"}}]},{\"@type\":\"Person\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/#\/schema\/person\/4438d405318314ec50940bde93ef548a\",\"name\":\"Shubh Tripathi\",\"image\":{\"@type\":\"ImageObject\",\"@id\":\"https:\/\/cloudxlab.com\/blog\/#personlogo\",\"inLanguage\":\"en-US\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/76bb13891affbf9da48fa9701d774ff0?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/76bb13891affbf9da48fa9701d774ff0?s=96&d=mm&r=g\",\"caption\":\"Shubh Tripathi\"}}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
-->","_links":{"self":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts\/4188"}],"collection":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/users\/36"}],"replies":[{"embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/comments?post=4188"}],"version-history":[{"count":31,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts\/4188\/revisions"}],"predecessor-version":[{"id":4814,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/posts\/4188\/revisions\/4814"}],"wp:attachment":[{"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/media?parent=4188"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/categories?post=4188"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cloudxlab.com\/blog\/wp-json\/wp\/v2\/tags?post=4188"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}