Project - Building a RAG Chatbot from Your Website Data using OpenAI and Langchain


Step 6: Chaining all the components

Let's create a chain of components.

INSTRUCTIONS
  1. Import the required libraries:

    from langchain_openai import ChatOpenAI
    from langchain.chains import ConversationalRetrievalChain
    
  2. Define a function make_chain that creates a chain of LangChain components.

    def make_chain():
    
  3. Write the rest of the code inside this function.

  4. Initialize the LLM that will be used to answer queries. We will use the gpt-3.5-turbo model, which is the default model used in ChatGPT:

    model = ChatOpenAI(
        model_name="gpt-3.5-turbo",
        temperature=0.0,
        verbose=True
    )
    
  5. Initialize the prompt using the prompt template we defined earlier (a sketch of get_prompt() appears after these instructions):

    prompt = get_prompt()
    
  6. Initialize the retriever. We will use our Chroma vector store as the retriever, with MMR (maximal marginal relevance) search, which balances relevance against diversity among the retrieved documents (a sketch of get_chroma_client() appears after these instructions):

    vector_store = get_chroma_client()
    retriever = vector_store.as_retriever(search_type="mmr", verbose=True)
    
  7. Create the chain from these components. We will use ConversationalRetrievalChain from LangChain, which is commonly used for building retrieval-based chatbots:

    chain = ConversationalRetrievalChain.from_llm(
        model,
        retriever=retriever,
        return_source_documents=True,
        combine_docs_chain_kwargs=dict(prompt=prompt),
        verbose=True,
        rephrase_question=False,
    )
    return chain
    

In the chain above, setting return_source_documents to True makes the chain return the documents retrieved from the vector store along with the answer. When rephrase_question is set to True, the chain rephrases the user's question based on the whole conversation so far, i.e. the chat_history. Since we are not using chat_history in this demonstration, we have set it to False.
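
Once make_chain is defined, the chain can be called with a user question and an (empty) chat_history. The following is a minimal usage sketch: the question text is only an example, and it assumes your OPENAI_API_KEY is set and the Chroma index has already been populated in the earlier steps.

    chain = make_chain()

    # We are not using chat history in this demonstration, so we pass an empty list.
    response = chain({
        "question": "What services does this website offer?",  # example question
        "chat_history": [],
    })

    # Because return_source_documents=True, the response contains both
    # the generated answer and the retrieved documents.
    print(response["answer"])
    for doc in response["source_documents"]:
        print(doc.metadata)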

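For reference, get_prompt and get_chroma_client were defined in earlier steps of this project. If you are following along out of order, the sketch below shows one possible shape for these helpers; the template text, collection name, and persist directory are assumptions, so substitute the values from your own earlier steps.

    from langchain.prompts import (
        ChatPromptTemplate,
        HumanMessagePromptTemplate,
        SystemMessagePromptTemplate,
    )
    from langchain_community.vectorstores import Chroma
    from langchain_openai import OpenAIEmbeddings

    def get_prompt():
        # The prompt passed via combine_docs_chain_kwargs must expose the
        # "context" and "question" variables used by the combine-documents chain.
        system_template = (
            "You are a helpful assistant that answers questions about the "
            "website using the context below.\n\n{context}"
        )
        return ChatPromptTemplate.from_messages([
            SystemMessagePromptTemplate.from_template(system_template),
            HumanMessagePromptTemplate.from_template("{question}"),
        ])

    def get_chroma_client():
        # Returns the Chroma vector store that was populated during indexing.
        # The collection name and persist directory are assumed values.
        return Chroma(
            collection_name="website_data",
            embedding_function=OpenAIEmbeddings(),
            persist_directory="data/chroma",
        )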