Project - Building a RAG Chatbot from Your Website Data using OpenAI and Langchain

LLM Powered Chatbots

We can use LLMs to build personal chatbots too. LLMs can power chatbots that can engage in natural conversations, understand complex questions, adapt to different contexts, etc. Need product recommendations? Want help troubleshooting an issue? Ask your LLM-powered chatbot!

But can LLMs answer questions about our company data? Out of the box, the answer is no.

While large language models (LLMs) are impressive, their knowledge is limited to their training data. Private data? New info? They won't know it (imagine asking about last week's news - blank stare).

One solution is to retrain or fine-tune the LLM on our personal or company data. But that generally doesn't work well: retraining takes significant time and compute, and fine-tuning requires a large dataset. That's why retraining or fine-tuning the model is usually not worth it.

A second solution is to provide all our documents in the prompt itself and tell the LLM to use them to answer queries. That sounds fascinating, but it isn't practical because there is a limit on how much input we can provide to an LLM. For example, OpenAI models have a maximum token limit; for the gpt-3.5-turbo model it is typically around 4,096 tokens.
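To get a feel for why "stuff everything into the prompt" breaks down, we can estimate how many tokens our documents would consume. In practice you would measure with a real tokenizer such as OpenAI's `tiktoken`; the sketch below uses a rough heuristic (about 4 tokens per 3 words) purely for illustration.

```python
# Rough context-window check. Real token counts come from a tokenizer
# such as tiktoken; here we approximate with ~4/3 tokens per word.
MAX_TOKENS = 4096  # approximate gpt-3.5-turbo context limit

def approx_tokens(text: str) -> int:
    """Very rough token estimate: ~1.33 tokens per whitespace-separated word."""
    return int(len(text.split()) * 4 / 3)

def fits_in_context(documents: list[str], question: str) -> bool:
    """Would stuffing every document plus the question fit in the prompt?"""
    total = sum(approx_tokens(d) for d in documents) + approx_tokens(question)
    return total <= MAX_TOKENS

# A few thousand words of company documentation already blows the budget:
docs = ["word " * 2000, "word " * 2000]  # ~5,300 estimated tokens
print(fits_in_context(docs, "What is our refund policy?"))  # False
```

Even a modest knowledge base is far larger than one context window, before we account for the space the model needs for its answer.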

Also, large prompts take longer for LLMs to process, leading to slower response times. Overloading the LLM with information can also result in lower-quality responses and potentially inaccurate outputs.

So, how can we build LLM-powered chatbots then?

Here’s the trick: instead of passing all the documents, we pass only the documents relevant to the user’s query, that is, the ones actually needed to answer it. This approach is called Retrieval Augmented Generation (RAG).
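The idea can be sketched end to end in a few lines. The word-overlap scorer below is a toy stand-in for the embedding similarity search a real pipeline (e.g. LangChain with a vector store) would use; the assembled prompt would then be sent to the OpenAI chat API.

```python
# Minimal RAG sketch (toy, no external services): retrieve only the
# documents relevant to a query, then build a prompt containing just those.
def score(doc: str, query: str) -> int:
    """Count query words that also appear in the document (toy relevance)."""
    doc_words = {w.strip(".,?!") for w in doc.lower().split()}
    return sum(w.strip(".,?!") in doc_words for w in query.lower().split())

def retrieve(docs: list[str], query: str, k: int = 2) -> list[str]:
    """Return the k documents that best match the query."""
    return sorted(docs, key=lambda d: score(d, query), reverse=True)[:k]

def build_prompt(docs: list[str], query: str) -> str:
    """Assemble a prompt from only the retrieved context."""
    context = "\n".join(retrieve(docs, query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days of purchase.",
    "The company cafeteria serves lunch from 12 to 2 pm.",
    "Shipping is free on orders above $50.",
]
print(build_prompt(docs, "What is the refund policy?"))
```

In a real deployment, `retrieve` would query a vector store of embedded website chunks, and the output of `build_prompt` would go to the LLM, keeping the prompt small no matter how large the knowledge base grows.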

