Creating Your Own Chatbot: A Beginner-Friendly Tutorial with OpenAI, LangChain, Gradio, and Wikipedia

If you’re like me, you’ve been hearing a ton about LangChain and OpenAI. I was curious to see how difficult it would be to build one of these chatbots myself. It’s been my experience historically that if I just dig in, I find that topics are often more approachable than I assume, especially since the Python libraries coming out are quite high-level. I was also super interested in learning what LangChain’s function is, since I’d been hearing so much about it. In this beginner-friendly tutorial, I’ll guide you through the process of creating your own chatbot using Python and leveraging powerful tools such as OpenAI, LangChain, Gradio, and Wikipedia. Even if you’re new to Python or have never built a chatbot before, fear not: we’ll take it step by step. Let’s get started on your chatbot development adventure!

We’ll cover a couple of things:

  • About the app we’re building

  • What is LangChain?

  • Tutorial

The App We’re Building:

Here we’re going to build a quick Gradio app that lets us enter a question, get a response back from OpenAI’s GPT-3.5, and customize the behavior of our chatbot by tweaking a couple of parameters. I was super impressed with how easy it was to create a Gradio web app with a couple of lines of code. Of course, this is a “hello world” level example, but still so cool.

The parameters that we’re able to configure are temperature and model_name. A temperature of zero gives us a very deterministic response; as the value gets larger, the response that is given becomes more random. A temperature between 0.7 and 0.9 is often used for creative tasks, although the higher you set the number, the more you might need to worry about hallucinations.
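If you want to feel the difference yourself, here’s a minimal sketch comparing the two settings (the prompt is just an example I made up, and it assumes you’ve already done the install and API-key steps from the tutorial below):

# A minimal sketch: the same prompt at two temperatures,
# to see deterministic vs. more random behavior.
from langchain.chat_models import ChatOpenAI
from langchain.schema import HumanMessage

prompt = [HumanMessage(content="Give me a name for a coffee shop.")]

deterministic = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")
creative = ChatOpenAI(temperature=0.9, model_name="gpt-3.5-turbo")

print(deterministic(prompt).content)  # nearly the same answer every run
print(creative(prompt).content)       # varies from run to run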

This is a picture of the finished web app:

What is LangChain?:

I’ve learned that LangChain is super cool; no wonder everyone is talking about it. Basically, if you ask a complex question, you’ll leverage a model (potentially multiple models) and a number of “tools” to get to your answer. LangChain is the library that decides what you need and in what order, and then puts all the pieces together to get your answer. “Justin Bieber’s age times 4” might require that LangChain go to Wikipedia to get the birthdate if the answer isn’t in the LLM training data, and then go to a math tool to multiply the number by 4. Wikipedia and the math tool in this case are not part of LangChain, but LangChain will decide what it needs to leverage and in what order, and then execute.
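To make that concrete, here’s a rough sketch of what that flow looks like in code (the question and the tool combo are illustrative; it assumes the setup steps further down in the tutorial). Note that we pick which tools are available, but LangChain decides which to call and in what order:

# A rough sketch of the agent + tools idea
from langchain.chat_models import ChatOpenAI
from langchain.agents import load_tools, initialize_agent, AgentType

llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")

# Wikipedia for lookups, llm-math for arithmetic
tools = load_tools(["wikipedia", "llm-math"], llm=llm)

agent = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True  # print the tool-selection steps as it reasons
)
agent.run("What is Justin Bieber's age times 4?")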

App Building Tutorial:

For this I suggest opening my Google Colab. All you’d need to do there is enter your own API key and run the cells, and you’d have a working starter app. To get an API key, you’d go to OpenAI.

First we pip install our packages, import a set of libraries, and set our API key.

%pip install langchain openai wikipedia gradio

# Importing necessary dependencies
import os  # used for working with environment variables, so we can store our API key
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.agents import (
    load_tools,
    initialize_agent,
    AgentType # "Agents use an LLM to determine which actions to take and in what order" - LangChain docs
)
import langchain
import gradio as gr
langchain.debug = True  # verbose thought process logs as it runs

# Set the value of the 'OPENAI_API_KEY' environment variable. You get $5 free when you sign up
os.environ['OPENAI_API_KEY'] = '[YOUR API KEY HERE]'
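One optional tweak: rather than pasting the key into the notebook, you can prompt for it at runtime so it never gets saved in your Colab. A small sketch using Python’s built-in getpass:

# Optional: prompt for the key instead of hardcoding it in the notebook
import os
from getpass import getpass

os.environ['OPENAI_API_KEY'] = getpass('Paste your OpenAI API key: ')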

Hello, World!

Now that we have our libraries, we’re ready to start building our chatbot.

First, we instantiate the “ChatOpenAI” class, establishing a connection to the ChatGPT language model. By specifying a temperature value of 0 and model name as “gpt-3.5-turbo,” we configure the behavior of the language model. You could use a different model or a higher value of temperature. The “verbose=True” setting enables detailed logging to provide insights into the chatbot’s thought process.

Next, we load the necessary tools, including the “wikipedia” module, using the “load_tools” function. This step connects our chatbot to all of the information available in Wikipedia. The LangChain functions allow seamless integration with ChatGPT and determine if and when the Wikipedia tool is needed during conversations.

To enable memory and maintain conversation history, we instantiate the “ConversationBufferMemory” class. By specifying a memory key as “chat_history” and setting “return_messages=True,” we ensure that the chatbot retains the context of previous interactions.

Finally, we initialize the agent for conversation using the loaded tools. The “initialize_agent” function takes in the tools and the ChatGPT language model, and specifies the agent type as “CHAT_CONVERSATIONAL_REACT_DESCRIPTION.” This agent facilitates interactive and responsive conversations while providing detailed logging with “verbose=True.” The “handle_parsing_errors” message is sent back to the model whenever its output can’t be parsed, nudging it to correct itself, and the memory component allows the chatbot to maintain coherence throughout the conversation.

With these code snippets, our chatbot is now equipped with the necessary connections, tools, memory, and agent initialization to engage in captivating and intelligent conversations.


# Creating an instance of the ChatOpenAI class for conversation - this is the connection to ChatGPT
chat = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo", verbose=True)

# This is a connection to Wikipedia's data. LangChain will start with ChatGPT
# and then determine if it needs the Wikipedia tool.
tools = load_tools(["wikipedia"], llm=chat)

# This was just copied from the docs, but we need it to have memory.
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# Initializing the agent for conversation using the loaded tools - give it the tools,
# the ChatGPT connection, and the memory.
agent = initialize_agent(
    tools,
    chat,
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors="Check your output and make sure it conforms!",
    memory=memory
)

This next bit of code defines a function called “call_agent” that, when invoked, enables users to interact with the chatbot by posing questions or providing inputs. The main reason for needing the “call_agent” function is that Gradio will automatically pass the input as a positional parameter to our function, while the agent.run call expects a keyword argument. The “call_agent” function utilizes the “agent.run()” method, which triggers the agent to process the user’s question. The agent leverages the tools, language model, and memory components to generate an appropriate response based on the input. With this function, the chatbot becomes fully operational, yay!

# The chatbot is ready now and you can ask it questions. This function is used to call the agent and get a response based on the question asked
def call_agent(user_question):
    response = agent.run(input=user_question)
    return response
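Before wiring up the web app, you can sanity-check the agent right in the notebook (the question here is just a placeholder; anything will do):

# Quick smoke test before building the UI
print(call_agent("How old is Justin Bieber?"))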

Next we need our web app. Gradio allows us to design and launch an interactive interface that facilitates seamless communication between users and the chatbot.

Inside the with gr.Blocks() as demo context, we define the components of our Gradio interface. We begin by creating a title using the gr.HTML function, then we set up a gr.Textbox component. This textbox serves as the interface where users can enter their questions or queries for the chatbot.

For displaying the chatbot’s responses, we create another gr.Textbox. This textbox will show the chatbot’s generated responses to the user’s input.

To trigger the chatbot’s response generation, we include a gr.Button. When the user clicks this button, it calls the call_agent function we defined earlier, passing the user’s input from the input textbox and displaying the chatbot’s response in the output textbox.

Finally, we launch the Gradio interface using demo.launch(). By setting share=True, we allow others to access and interact with the chatbot through a shareable link. The debug=True option enables verbose logs for troubleshooting during the development process.

# Creating a Gradio interface for the chatbot
with gr.Blocks() as demo:
    title = gr.HTML("<h1>The Data Moves Me Chatbot</h1>")
    input = gr.Textbox(label="What would you like to know?")  # Textbox for user input
    output = gr.Textbox(label="Here ya go, Champ:")  # Textbox for chatbot response
    btn = gr.Button("Gimme the answer")  # Button to trigger the agent call
    btn.click(fn=call_agent, inputs=input, outputs=output)

# Launching the Gradio interface
demo.launch(share=True, debug=True)

Summary:

Hopefully this gave you a working app and some context about how all of these pieces work together. I was pleasantly surprised with how intuitive the openai and langchain libraries were for getting started (I haven’t gotten much further than this very introductory example though) and how easy it was to stand up a gradio app.

If you've tried Coursera or other MOOCs to learn Python and you're still looking for the course that'll take you much further, like working in VS Code, setting up your environment, and learning through realistic projects, this is the course I would recommend: Python Course.

By starting with chatbot development, we now have the potential to build intelligent virtual assistants, customer support bots, or interactive information providers. The possibilities are limitless if you continue to expand and enhance your chatbot's capabilities. Please let me know if you take this beginning and do something neat with it, I’d love to hear from you. Happy coding!
