Building an Interactive LLM Chatbot with HeatWave Using Python

February 11, 2025

This article builds upon the concepts introduced in my previous blog posts; for a deeper understanding, consider reading them first.


AI-powered applications require robust and scalable database solutions to manage and process large amounts of data efficiently. HeatWave is an excellent choice for such applications, providing high-performance OLTP, analytics, machine learning and generative artificial intelligence capabilities.

In this article, we will explore a Python 3 script that connects to a HeatWave instance and lets users interact with different large language models (LLMs) dynamically. The script demonstrates how to load LLMs into HeatWave, prompt the user to select a model, and run an interactive chat loop using stored procedures and session variables.

By the end of this article, you’ll have a deep understanding of how to integrate HeatWave with Python for AI-driven applications.

HeatWave Chat

HeatWave Chat enables you to engage in human-like conversations with an AI. Within a single session, you can ask multiple queries and receive relevant responses. This conversational agent leverages powerful LLMs to understand your input and generate natural-sounding replies.
HeatWave Chat enhances the conversation by utilizing a chat history, allowing you to ask follow-up questions seamlessly. Furthermore, it employs vector search to access and utilize knowledge stored within its built-in vector store.
All communication and processing occur securely within the HeatWave service, ensuring fast and reliable responses.

In this article, I’m using HeatWave 9.2.0-u1-cloud.

SQL > SELECT version();
+----------------+
| version()      |
+----------------+
| 9.2.0-u1-cloud |
+----------------+

Talk is cheap. Show me the code!

I use this quote from (apparently) Linus Torvalds to warn you that I’m not a developer, so this code, available on my GitHub, is provided for illustrative purposes only. It may contain errors or limitations. Please use it at your own risk and adapt it to your specific needs (also feel free to share back).

Code Explanation

This Python script is a complete implementation of a chatbot system that interacts with HeatWave. It allows users to select and interact with different LLMs (Large Language Models).

1. Importing Required Modules

import mysql.connector
from typing import Tuple, Optional
from mysql.connector.cursor import MySQLCursor
from config.config_heatwave import DB_CONFIG  # Import the MySQL configuration
  • mysql.connector: Used to interact with the HeatWave MySQL database.
  • typing (Tuple, Optional): Provides type hints for better code readability and maintainability.
  • MySQLCursor: A specific cursor class from mysql.connector for executing MySQL queries.
  • DB_CONFIG: Imported from the external config file, storing database credentials and settings.
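The script expects a DB_CONFIG dictionary in config/config_heatwave.py. Here is a minimal sketch, assuming the standard mysql.connector connection parameters; all the values below are placeholders, not real credentials:

```python
# config/config_heatwave.py (sketch): standard mysql.connector connection arguments
DB_CONFIG = {
    "host": "10.0.1.10",      # placeholder: your HeatWave DB System endpoint
    "port": 3306,             # default MySQL classic-protocol port
    "user": "admin",          # placeholder user
    "password": "change-me",  # placeholder: read from a vault or env var in practice
    "database": "mydb",       # placeholder schema
}
```

mysql.connector.connect(**DB_CONFIG) simply unpacks these entries as keyword arguments, so any valid connection option can be added here.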

2. Connecting to the HeatWave MySQL Database

def connect_to_mysql() -> mysql.connector.MySQLConnection:
    """Establish a connection to the MySQL database."""
    return mysql.connector.connect(**DB_CONFIG)
  • Establishes a MySQL connection using DB_CONFIG.
  • Returns a MySQLConnection object that will be used to execute queries.

3. Loading LLM Models into HeatWave

def load_llm(cursor: MySQLCursor, llm_options: Tuple[str, ...]) -> None:
    """Load language models into HeatWave."""
    sql_load_llm = 'sys.ML_MODEL_LOAD'
    for llm in llm_options:
        args = (llm, None)  # NULL options: pass Python None, not the string 'NULL'
        cursor.callproc(sql_load_llm, args)
        print(f"LLM Loaded: {llm}")
    print("All LLMs loaded successfully.")
  • sys.ML_MODEL_LOAD loads a large language model into the HeatWave Cluster.
  • callproc() executes the stored procedure for each LLM in llm_options.
  • Confirms successful loading with print statements.
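To trace the callproc sequence without a live HeatWave instance, here is a minimal sketch using a hypothetical stub cursor (StubCursor is mine, for illustration only; it just records the calls that load_llm() would make):

```python
class StubCursor:
    """Hypothetical stand-in for MySQLCursor that records callproc invocations."""
    def __init__(self):
        self.calls = []

    def callproc(self, proc, args):
        self.calls.append((proc, args))

def load_models(cursor, llm_options):
    # Mirrors load_llm(): one sys.ML_MODEL_LOAD call per model, NULL options as None
    for llm in llm_options:
        cursor.callproc('sys.ML_MODEL_LOAD', (llm, None))

stub = StubCursor()
load_models(stub, ("llama3-8b-instruct-v1", "mistral-7b-instruct-v1"))
print(stub.calls)  # one (procedure, args) tuple per model
```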

4. Selecting an LLM Model

def select_llm(llm_options: Tuple[str, ...]) -> str:
    """
    Prompt the user to select an LLM from the provided options.    
    Supports up to 4 choices.
    """
    option_map = {str(i + 1): llm for i, llm in enumerate(llm_options)}

    while True:
        # Dynamically build the prompt based on available options
        prompt = "Choose your LLM:\n"
        for i, llm in enumerate(llm_options):
            prompt += f"{i + 1}-({llm})\n"
        prompt += "Enter your choice: "

        choice = input(prompt)

        # Validate user input
        if choice in option_map:
            return option_map[choice]

        print(f"Invalid choice. Please select a number between 1 and {len(llm_options)}.")
  • Prompts the user to choose from up to 4 LLMs: 2 in-database HeatWave models (Llama and Mistral) and 2 from the OCI Generative AI Service (Cohere and Llama).
  • Uses a dictionary (option_map) to map number inputs to model names.
  • Validates the user’s choice and returns the corresponding model.
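The numbering logic is easy to verify in isolation; a quick sketch of how option_map turns a menu number into a model name:

```python
llm_options = ("llama3-8b-instruct-v1", "mistral-7b-instruct-v1",
               "cohere.command-r-plus-08-2024", "meta.llama-3.1-70b-instruct")

# Same comprehension as in select_llm(): menu number (as a string) -> model name
option_map = {str(i + 1): llm for i, llm in enumerate(llm_options)}

print(option_map["1"])  # llama3-8b-instruct-v1
print(option_map["4"])  # meta.llama-3.1-70b-instruct
```

Because input() returns a string, keying the map by str(i + 1) lets the validation be a plain membership test.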

5. Chat Functionality

def chat(cursor: MySQLCursor, llm: str) -> Optional[str]:
    """Facilitate user chat with HeatWave."""
    question = input("Ask HeatWave?> ")
    response, chat_info = hw_chat(cursor, question, llm)
    print(f"Response: {response}")
    print("-" * 30)
    return chat_info
  • Prompts the user for input.
  • Calls hw_chat() to send the query to HeatWave and retrieve the response.
  • Displays the chatbot’s response.

6. Interacting with HeatWave (Chat Processing)

def hw_chat(cursor: MySQLCursor, user_query: str, llm: str) -> Tuple[str, Optional[str]]:
    """Send a user query to HeatWave and return the response."""
    sp_hw_chat = 'sys.HEATWAVE_CHAT'
    cursor.callproc(sp_hw_chat, (user_query,))
    response = ""
    for result in cursor.stored_results():
        response = result.fetchone()[0]

    chat_info = get_chat_options(cursor)
    return response, chat_info
  • Calls HeatWave’s stored procedure (sys.HEATWAVE_CHAT) with the user’s query.
  • Retrieves the response and any session-based chat options.
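To show how the stored_results()/fetchone() loop consumes the procedure’s result set, here is a minimal sketch with hypothetical stubs (StubResult and StubChatCursor are mine; a real MySQLCursor behaves the same way for this pattern):

```python
class StubResult:
    """Hypothetical stand-in for one result set: fetchone() returns a row tuple."""
    def __init__(self, row):
        self._row = row

    def fetchone(self):
        return self._row

class StubChatCursor:
    """Hypothetical cursor pretending sys.HEATWAVE_CHAT produced one result set."""
    def callproc(self, proc, args):
        self.question = args[0]

    def stored_results(self):
        return iter([StubResult(("Hello from HeatWave!",))])

cursor = StubChatCursor()
cursor.callproc('sys.HEATWAVE_CHAT', ("Hi",))

response = ""
for result in cursor.stored_results():  # same loop as hw_chat()
    response = result.fetchone()[0]     # first column of the first row

print(response)  # Hello from HeatWave!
```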

7. Retrieving and Managing Chat Options

Get Chat Options

def get_chat_options(cursor: MySQLCursor) -> Optional[str]:
    """Retrieve the session variable 'chat_options'."""
    cursor.execute("SELECT @chat_options")
    chat_options = cursor.fetchone()[0]
    return chat_options
  • Fetches the session variable @chat_options that holds chat configurations.

Set Chat Options

def set_chat_options(cursor: MySQLCursor, llm: str) -> None:
    """Initialize or update the session variable 'chat_options'."""
    chat_options = get_chat_options(cursor)
    if not chat_options:
        # Initialize @chat_options if not set
        options = f'{{"model_options": {{"model_id": "{llm}"}}}}'
        sql = f"SET @chat_options = '{options}'"
    else:
        # Update @chat_options if already exists
        sql = f"SET @chat_options = JSON_SET(@chat_options, '$.model_options.model_id', '{llm}')"
    cursor.execute(sql)
    print(f"Using model: {llm}")
    print("-" * 40)
  • Initializes or updates the @chat_options session variable with the selected LLM.
  • Uses JSON_SET() to update an existing chat session.
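Because @chat_options holds plain JSON, the string built by set_chat_options() can be checked locally; a small sketch (the json parsing here is only for verification — HeatWave stores the variable as-is):

```python
import json

llm = "mistral-7b-instruct-v1"

# Same f-string as in set_chat_options(): doubled braces escape literal { } in JSON
options = f'{{"model_options": {{"model_id": "{llm}"}}}}'

parsed = json.loads(options)
print(parsed["model_options"]["model_id"])  # mistral-7b-instruct-v1
```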

8. Main Function Execution

def main() -> None:
    """Main function to run the LLM interaction."""
    try:
        with connect_to_mysql() as connection:
            with connection.cursor() as cursor:
                # Define available LLM options
                llm_options = ("llama3-8b-instruct-v1", "mistral-7b-instruct-v1", "cohere.command-r-plus-08-2024", "meta.llama-3.1-70b-instruct")

                # Load LLMs
                load_llm(cursor, llm_options)

                # Prompt user to select an LLM
                selected_llm = select_llm(llm_options)

                # Set chat options for the selected LLM
                set_chat_options(cursor, selected_llm)

                # Begin chat loop
                while True:
                    chat(cursor, selected_llm)
    except mysql.connector.Error as err:
        print(f"Database error: {err}")
    except KeyboardInterrupt:
        print("\nExiting the application.")
    finally:
        print("Goodbye!")
  • Uses context managers (with statement) to manage MySQL connections safely.
  • Calls load_llm(), select_llm(), and set_chat_options() before starting the chat loop.
  • Handles database errors and user interruption (Ctrl+C).

9. Running the Script

if __name__ == "__main__":
    main()
  • Ensures the script executes main() only when run directly.

Final Thoughts

You can further extend this solution to use HeatWave Chat with Retrieval-Augmented Generation (RAG). RAG is a technique that combines the power of Large Language Models with external knowledge bases. The primary goal of RAG is to enhance the quality, relevance, and accuracy of AI-generated responses by retrieving relevant information from these sources.
You can find more information and a practical example in this article: Simplifying AI Development: A Practical Guide to HeatWave GenAI’s RAG & Vector Store Features

You can also extend it to support additional LLMs, APIs, or more advanced user interfaces.
To summarize, developers and data scientists can easily build intelligent applications that efficiently handle real-time AI interactions while leveraging the power of HeatWave.

Peroration

This Python script demonstrates that with a few lines of code, you can easily build a simple but robust chatbot system leveraging HeatWave GenAI and its in-database or external (OCI Generative AI Service) LLMs.

It’s an effective way to integrate HeatWave (and its MySQL API) with Python for AI-driven apps like chatbot interactions. By leveraging stored procedures and session variables, it allows seamless LLM management and user interaction.

Key takeaways from this implementation:

  • Stored procedures (sys.ML_MODEL_LOAD, sys.HEATWAVE_CHAT) handle model loading and chat processing inside HeatWave.
  • The @chat_options session variable controls which LLM answers, and JSON_SET() updates it mid-session.
  • Context managers keep the connection and cursor safely closed, even on errors or Ctrl+C.

Stay tuned for more insights!

Follow me on Linkedin

Watch my videos on my YouTube channel and subscribe.

My Slideshare account.

My Speaker Deck account.

Thanks for using HeatWave & MySQL!
