LLM ChatBot with Custom Context In Minutes


 

Your OWN Chatbot

In minutes, yes, that's true. By the end of this article, you will have a working LLM chatbot with the context that you provide. What context, you ask? Let me explain: if you ask ChatGPT about the LangChain framework, it has no idea, because it is trained on data only up to 2021. So, say you want to ask your chatbot about LangChain: LangChain becomes your context, and the questions you ask about that context are the queries to the chatbot.


In the recent AI/ML rat race, I think it is high time people got exposed to this amazing framework known as LangChain.

 

LangChain is a framework for developing applications powered by language models.

 

It enables applications that are:

 

  • Data-aware: connect a language model to other sources of data
  • Agentic: allows a language model to interact with its environment

 

The main value props of LangChain are:

 

  • Components: abstractions for working with language models, along with a collection of implementations for each abstraction. Components are modular and easy to use, whether you are using the rest of the LangChain framework or not
  • Off-the-shelf chains: a structured assembly of components for accomplishing specific higher-level tasks (a minimal sketch of both ideas follows this list)
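
To make those two ideas concrete, here is a minimal sketch using the classic PromptTemplate component and the off-the-shelf LLMChain. The API key placeholder is the same one used later in this article; the prompt text is just illustrative.

# Component: a reusable prompt template
from langchain import LLMChain, PromptTemplate
from langchain.llms import OpenAI

prompt = PromptTemplate.from_template("Summarize in one line: {text}")

# Off-the-shelf chain: the prompt and the model wired together
chain = LLMChain(llm=OpenAI(openai_api_key="<OPEN_AI_API>"), prompt=prompt)
print(chain.run(text="LangChain is a framework for building LLM-powered apps."))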

 

Ok, now you know what LangChain is. So let's get our hands dirty and do some coding to build your own chatbot.

 

Step 1: Get the context/data you want the chatbot to answer questions about

I will use the Backstage documentation as the example here. You can download the documentation using the command below.

 

wget --recursive --level=inf --page-requisites --convert-links --no-parent --no-clobber https://backstage.io/docs/

 

This will download all the HTML files into a local directory, like so:

 

[Image: directory listing of the downloaded HTML files]

 

Step 2: Import packages, load the files and extract data

 

import pinecone
import streamlit as st
from streamlit_chat import message

from langchain.document_loaders import ReadTheDocsLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Pinecone
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain

folder_path = "<PATH_TO_DOCS>"

# Load/read the downloaded HTML files and extract their text
loader = ReadTheDocsLoader(folder_path)
raw_document = loader.load()

 

LangChain takes care of omitting unnecessary data such as HTML tags and other garbage text, so you can focus on application development rather than worrying about cleanup.

 

You can read more about the ReadTheDocsLoader and other loaders here.
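
If you want to see what the loader produced, a quick inspection like this helps (the exact paths and counts will depend on where you downloaded the docs):

# Quick sanity check: each Document carries the extracted text plus
# the source file path in its metadata (output will vary)
print(len(raw_document))
print(raw_document[0].metadata)           # e.g. {'source': '.../backstage.io/docs/index.html'}
print(raw_document[0].page_content[:200])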

 

Step 3: Split the documents into smaller chunks, convert them to embeddings, and store them in a vector DB

 

# RecursiveCharacterTextSplitter -> tries the separators in order
# until the resulting chunks are small enough
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=400, chunk_overlap=50, separators=["\n\n", "\n", " ", ""]
)
documents = text_splitter.split_documents(raw_document)

# Initialize the Pinecone vector DB that will store the document embeddings
pinecone.init(
    api_key="<PINECONE_API_KEY>",
    environment="<PINECONE_ENVIRONMENT>"
)

# Initialize the OpenAI embeddings, which turn each chunk into a vector
embeddings = OpenAIEmbeddings(openai_api_key="<OPEN_AI_API>")

INDEX_NAME = "<PINECONE_INDEX_NAME>"

# Send all the embeddings to the Pinecone vector DB
Pinecone.from_documents(documents, embeddings, index_name=INDEX_NAME)

 

This completes the data-connection part of the chatbot.

 

[Image: Data connection]
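
Before moving on, you can verify that the vectors actually landed in the index. This assumes the same v2 pinecone-client API used above:

# Optional sanity check: the vector count reported by the index
# should match len(documents)
index = pinecone.Index(INDEX_NAME)
print(index.describe_index_stats())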

 

Step 4: Creating chains to talk to the LLM and get your questions answered

 

def run_llm(query: str, chat_history):
    # Search the vector DB for the chunks closest to your query/question
    docsearch = Pinecone.from_existing_index(
        embedding=embeddings,
        index_name=INDEX_NAME
    )

    # Wrapper around the OpenAI chat models; "gpt-3.5-turbo" is used by default
    chat = ChatOpenAI(
        verbose=True,    # log how the LLM arrived at its answer
        temperature=0,   # how creative the LLM should be: 0 = least, 1 = most
        openai_api_key="<OPEN_AI_API>"
    )

    # Chain for having a conversation based on retrieved documents
    qa = ConversationalRetrievalChain.from_llm(
        llm=chat, retriever=docsearch.as_retriever(), return_source_documents=True
    )

    # The chain takes the chat history (a list of message pairs) and a new
    # question, and returns an answer to that question
    return qa({"question": query, "chat_history": chat_history})

You can read about other chains here.
With these steps, you have created the backend for your chatbot and can start having conversations.

 

Now let's create a frontend for it, so that you get the feel of ChatGPT.

 

Step 5: Frontend using Streamlit

 

st.header("<NAME OF THE BOT>")

# Set up the chat history in the session state for continuous context
if (
    "chat_answers_history" not in st.session_state
    and "user_prompt_history" not in st.session_state
    and "chat_history" not in st.session_state
):
    st.session_state["chat_answers_history"] = []
    st.session_state["user_prompt_history"] = []
    st.session_state["chat_history"] = []

# Question text box and submit button
prompt = st.text_input("Prompt", placeholder="Enter your message here...")
st.button("Submit")

if prompt:
    # Spinner shown while the chatbot answers the question
    with st.spinner("Generating response..."):
        # run_llm contains all the logic we wrote in the previous steps;
        # sending the current chat history sets the context
        generated_response = run_llm(
            query=prompt, chat_history=st.session_state["chat_history"]
        )

        # Collect the source documents from which your chatbot drew its answer
        sources = set(
            doc.metadata["source"] for doc in generated_response["source_documents"]
        )
        formatted_response = (
            f"{generated_response['answer']} \n\n {create_sources_string(sources)}"
        )

        st.session_state["chat_history"].append((prompt, generated_response["answer"]))
        st.session_state["user_prompt_history"].append(prompt)
        st.session_state["chat_answers_history"].append(formatted_response)

if st.session_state["chat_answers_history"]:
    for generated_response, user_query in zip(
        st.session_state["chat_answers_history"],
        st.session_state["user_prompt_history"],
    ):
        message(user_query, is_user=True)
        message(generated_response)
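
The formatted response above uses a create_sources_string helper that the snippet doesn't define. A minimal version (the exact formatting is my assumption) could look like this; define it above the UI code in main.py:

def create_sources_string(sources: set) -> str:
    # Turn the set of source paths into a numbered "sources" list;
    # purely illustrative, format it however you like
    if not sources:
        return ""
    lines = [f"{i}. {src}" for i, src in enumerate(sorted(sources), start=1)]
    return "sources:\n" + "\n".join(lines)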

 

Run it using:

#runs on http://localhost:8501/ by default

streamlit run main.py
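
If you are starting from a clean environment, install the packages used above first. Package names match the langchain version this article targets; note that ReadTheDocsLoader also needs beautifulsoup4, and OpenAIEmbeddings may pull in tiktoken:

pip install langchain openai pinecone-client streamlit streamlit_chat beautifulsoup4 tiktoken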

 

Your OWN Chatbot

Conclusion

 

LangChain is an amazing framework that helps you create AI-powered applications in minutes, with great documentation and LLM support.
You can get more info about LangChain here.