
{"id":7711,"date":"2025-04-15T09:42:34","date_gmt":"2025-04-15T08:42:34","guid":{"rendered":"https:\/\/dasini.net\/blog\/?p=7711"},"modified":"2025-04-15T09:42:35","modified_gmt":"2025-04-15T08:42:35","slug":"build-an-ai-powered-search-engine-with-heatwave-genai-part-3","status":"publish","type":"post","link":"https:\/\/dasini.net\/blog\/2025\/04\/15\/build-an-ai-powered-search-engine-with-heatwave-genai-part-3\/","title":{"rendered":"Build an AI-Powered Search Engine with HeatWave GenAI (part 3)"},"content":{"rendered":"\n<p>In <a href=\"https:\/\/dasini.net\/blog\/2025\/03\/13\/build-an-ai-powered-search-engine-with-heatwave-genai-part-1\/\" target=\"_blank\" rel=\"noopener\" title=\"Build an AI-Powered Search Engine with HeatWave GenAI (part 1)\">Build an AI-Powered Search Engine with HeatWave GenAI (part 1)<\/a>, we introduced the fundamentals of creating an<strong> AI-powered search engine using HeatWave GenAI<\/strong>. We highlighted the advantages of semantic search powered by <strong>large language models<\/strong> over traditional SQL-based approaches and provided a hands-on guide for <strong>generating embeddings<\/strong> and <strong>running similarity searches<\/strong> \u2014 key techniques that significantly improve the retrieval of relevant content.<\/p>\n\n\n\n<p>In the second opus \u2014 <a href=\"https:\/\/dasini.net\/blog\/2025\/04\/08\/build-an-ai-powered-search-engine-with-heatwave-genai-part-2\/\" target=\"_blank\" rel=\"noopener\" title=\"Build an AI-Powered Search Engine with HeatWave GenAI (part 2)\">Build an AI-Powered Search Engine with HeatWave GenAI (part 2)<\/a> \u2014 we shifted our focus to improving search result quality through <strong>reranking strategies<\/strong> and the use of article summaries for embedding generation. We demonstrated how to implement these enhancements entirely <strong>within HeatWave using JavaScript-based stored procedures<\/strong>. 
By assigning different weights to title and excerpt distances, and generating embeddings from sanitized summaries, we boosted the precision and relevance of search results. This approach showcases <strong>HeatWave GenAI<\/strong>\u2019s ability to <strong>embed advanced AI capabilities directly within the database layer<\/strong>.<\/p>\n\n\n\n<p>In this third installment, we\u2019ll take it a step further by incorporating full article content into the search engine. While titles, excerpts, or even summaries may work well in many cases, there are situations where deeper, more detailed information is needed to return truly relevant answers.<\/p>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">What are we going to do?<\/h2>\n\n\n\n<p>The process is slightly more complex than what we&rsquo;ve covered so far (in <a href=\"https:\/\/dasini.net\/blog\/2025\/03\/13\/build-an-ai-powered-search-engine-with-heatwave-genai-part-1\/\" target=\"_blank\" rel=\"noopener\" title=\"Build an AI-Powered Search Engine with HeatWave GenAI (part 1)\">part 1<\/a> &amp; <a href=\"https:\/\/dasini.net\/blog\/2025\/04\/08\/build-an-ai-powered-search-engine-with-heatwave-genai-part-2\/\" target=\"_blank\" rel=\"noopener\" title=\"Build an AI-Powered Search Engine with HeatWave GenAI (part 2)\">part 2<\/a>). In WordPress, article content is stored in HTML format within the <code><em>post_content<\/em><\/code> column. This will be our starting point, and the goal is to generate embeddings. <\/p>\n\n\n\n<p>To achieve this, we\u2019ll need to write a few lines of code. 
While this could be done directly within HeatWave using JavaScript stored procedures \u2014 as we saw in <a href=\"https:\/\/dasini.net\/blog\/2025\/04\/08\/build-an-ai-powered-search-engine-with-heatwave-genai-part-2\/#js_sp_ai\" target=\"_blank\" rel=\"noopener\" title=\"A Javascript, stored procedure &amp; AI story\">part 2: A Javascript, stored procedure &amp; AI story<\/a> \u2014 I\u2019ll instead use the unofficial language of data: Python.<br>Please bear in mind that\u00a0<strong>I\u2019m not a developer<\/strong>, so\u00a0<strong>this code is provided for illustrative purposes only<\/strong>. It may contain errors or limitations. Please\u00a0<strong>use it at your own risk<\/strong>\u00a0and adapt it to your specific needs (also feel free to share back).<\/p>\n\n\n\n<div style=\"height:25px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p>Below are the steps we\u2019ll follow:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li>Define the embeddings storage table.<\/li>\n\n\n\n<li>Fetch articles from the database.<\/li>\n\n\n\n<li>Remove HTML tags and normalize whitespace.<\/li>\n\n\n\n<li>Split articles into overlapping chunks of words.<\/li>\n\n\n\n<li>Generate embeddings for a given article.<\/li>\n\n\n\n<li>Insert article chunks with their embeddings into HeatWave.<\/li>\n<\/ol>\n\n\n\n<p>Let\u2019s break down each step together!<\/p>\n\n\n\n<div style=\"height:25px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p>I&rsquo;m using HeatWave 9.2.2:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"sql\" class=\"language-sql\">SELECT version();\n+-------------+\n| version()   |\n+-------------+\n| 9.2.2-cloud |\n+-------------+<\/code><\/pre>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Defining the embeddings storage table<\/h2>\n\n\n\n<p>I created a new table named <code><em>wp_post_chunks_embeddings_minilm<\/em><\/code> to 
store the embeddings generated from article chunks.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><em><code>chunk_embedding<\/code><\/em>: the embedding of an article chunk<\/li>\n\n\n\n<li><em><code>chunk_text<\/code><\/em>: the text of the chunk itself, mostly useful for debugging<\/li>\n\n\n\n<li>and some IDs: \n<ul class=\"wp-block-list\">\n<li><em><code>chunk_id<\/code><\/em>: the ID of each chunk<\/li>\n\n\n\n<li><em><code>post_id<\/code><\/em>: used to join the <em><code>wp_posts<\/code><\/em> table and retrieve article information<\/li>\n\n\n\n<li>and finally, the table primary key: <em><code>id<\/code><\/em> <\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<p>From the <em><code>wp_posts<\/code><\/em> table, I&rsquo;ll use the <em><code>ID<\/code><\/em> (the primary key) and <em><code>post_content<\/code><\/em>, which contains the article in HTML format. <br>All this information will be used by the semantic AI-powered search engine.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"sql\" class=\"language-sql\">SHOW CREATE TABLE wp_post_chunks_embeddings_minilm\\G\n*************************** 1. row ***************************\n       Table: wp_post_chunks_embeddings_minilm\nCreate Table: CREATE TABLE `wp_post_chunks_embeddings_minilm` (\n  `id` bigint unsigned NOT NULL AUTO_INCREMENT,\n  `post_id` bigint unsigned DEFAULT NULL,\n  `chunk_id` bigint unsigned DEFAULT NULL,\n  `chunk_text` text,\n  `chunk_embedding` vector(2048) DEFAULT NULL,\n  PRIMARY KEY (`id`)\n) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci<\/code><\/pre>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Fetch articles from the database<\/h2>\n\n\n\n<p>First, we need to <strong>grab a batch of published blog articles<\/strong> from HeatWave MySQL. Think of it as scooping out a small chunk of posts at a time from a big pool of content. 
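All of the Python snippets below receive a db_config dictionary, which is simply the set of keyword arguments passed to mysql.connector.connect(). It is never shown in the article, so here is a minimal sketch; the host, credentials and schema names are placeholders to adapt to your own HeatWave instance:

```python
# Hypothetical connection settings for mysql-connector-python.
# Every value here is a placeholder: point it at your own
# HeatWave MySQL endpoint and WordPress schema.
db_config = {
    "host": "10.0.1.2",       # HeatWave MySQL endpoint (placeholder)
    "port": 3306,
    "user": "admin",          # placeholder
    "password": "MyS3cret!",  # placeholder
    "database": "wordpress",  # schema holding the wp_* tables
}
```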
Pagination is used to keep memory consumption under control.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"python\" class=\"language-python\">def fetch_articles_batch(db_config: Dict[str, Any], limit: int = 10, offset: int = 0) -&gt; List[Dict[str, Any]]:\n    \"\"\"Fetch a batch of articles from the database\"\"\"\n\n    with mysql.connector.connect(**db_config) as conn:\n        with conn.cursor(dictionary=True) as cursor:\n            cursor.execute(\"\"\"\n                SELECT ID, post_content FROM wp_posts \n                WHERE post_status = 'publish' AND post_type = 'post'\n                ORDER BY ID\n                LIMIT %s OFFSET %s\n            \"\"\", (limit, offset))\n\n            return cursor.fetchall()<\/code><\/pre>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Remove HTML tags and normalize whitespace<\/h2>\n\n\n\n<p>Articles are stored in HTML format in the database. The content must therefore be <strong>cleaned<\/strong>, because we need the raw text, i.e., without all the formatting tags. 
With this function we:<\/p>\n\n\n\n<ol class=\"wp-block-list\">\n<li><strong>Remove all the HTML tags<\/strong><\/li>\n\n\n\n<li><strong>Replace them with spaces<\/strong> (so you don\u2019t accidentally smash words together)<\/li>\n\n\n\n<li><strong>Collapse extra whitespace<\/strong><\/li>\n\n\n\n<li><strong>Trim off any leading or trailing spaces<\/strong><\/li>\n<\/ol>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"python\" class=\"language-python\">def strip_html_tags_with_space(html_string: str) -&gt; str:\n    \"\"\"Remove HTML tags and normalize whitespace\"\"\"\n\n    if not html_string:\n        return \"\"\n\n    text = re.sub(r'&lt;[^&gt;]+&gt;', ' ', html_string)\n    text = re.sub(r'\\s+', ' ', text)\n\n    return text.strip()<\/code><\/pre>\n\n\n\n<p>Please also note that, depending on the context, extra processing such as <a href=\"https:\/\/en.wikipedia.org\/wiki\/Stemming\" target=\"_blank\" rel=\"noopener\" title=\"Stemming on Wikipedia\">Stemming<\/a> or <a href=\"https:\/\/en.wikipedia.org\/wiki\/Lemmatization\" target=\"_blank\" rel=\"noopener\" title=\"Lemmatization on Wikipedia\">Lemmatization<\/a> can be applied.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Split articles into overlapping chunks of words<\/h2>\n\n\n\n<p>Each article is <strong>a big block of text<\/strong> and it must be <strong>split into smaller overlapping chunks<\/strong>. 
It is like slicing a loaf of bread, where each slice overlaps a bit with the one before it.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"python\" class=\"language-python\">def split_text_by_words(text: str, post_id: int, chunk_size: int = 400, overlap: int = 80) -&gt; List[Dict[str, Any]]:\n    \"\"\"Split text into overlapping chunks of words\"\"\"\n\n    words = text.split()\n    chunks = []\n    start = 0\n    chunk_id = 0\n\n    while start &lt; len(words):\n        end = start + chunk_size\n        chunk_words = words[start:end]\n        chunk_text = ' '.join(chunk_words)\n\n        char_start = len(' '.join(words[:start]))\n        char_end = len(' '.join(words[:end]))\n\n        chunks.append({\n            \"post_id\": post_id,\n            \"chunk_id\": chunk_id,\n            \"char_start\": char_start,\n            \"char_end\": char_end,\n            \"text\": chunk_text\n        })\n\n        start += chunk_size - overlap\n        chunk_id += 1\n\n    return chunks<\/code><\/pre>\n\n\n\n<p>Chunking in RAG systems presents several challenges, including issues with <strong>chunk size<\/strong> (too small or too large), <strong>semantic coherence<\/strong>, and <strong>context understanding<\/strong>. These challenges can negatively impact retrieval accuracy, efficiency, and precision. Addressing them requires a carefully balanced approach to optimize chunking strategies while preserving computational efficiency and maintaining nuanced contextual understanding.<\/p>\n\n\n\n<p><code><em>chunk_size<\/em><\/code>: Defines the number of words per chunk. In this case, I&rsquo;ve chosen 400 words. The ideal value can vary depending on your specific use case and content structure.<\/p>\n\n\n\n<p><code><em>overlap<\/em><\/code>: Specifies how much each chunk overlaps with the next one\u2014here, it&rsquo;s set to 20% of the chunk size. 
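Concretely, with chunk_size = 400 and overlap = 80, consecutive chunks start chunk_size - overlap = 320 words apart. The following sketch reproduces the start/end arithmetic of split_text_by_words for a hypothetical 1,000-word article:

```python
# Reproduces the chunk-boundary arithmetic of split_text_by_words
# above, for a hypothetical 1000-word article.
chunk_size, overlap = 400, 80
step = chunk_size - overlap  # 320 words between consecutive chunk starts
n_words = 1000

spans = [(start, min(start + chunk_size, n_words))
         for start in range(0, n_words, step)]
print(spans)  # [(0, 400), (320, 720), (640, 1000), (960, 1000)]
```

Note that the final chunk can be much shorter than chunk_size (here only 40 words), which is the expected behaviour of the splitting loop.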
This overlap helps preserve context across chunks, improving continuity and retrieval accuracy.<\/p>\n\n\n\n<p>Each chunk contains the ID of the article (<em><code>post_id<\/code><\/em>), the ID of the chunk (<em><code>chunk_id<\/code><\/em>) and the chunk itself (<em><code>text<\/code><\/em>). <em><code>char_start<\/code><\/em> &amp; <em><code>char_end<\/code><\/em> are only there for debugging purposes.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Generate embeddings for a given article<\/h2>\n\n\n\n<p>Now it is time to <strong>generate an embedding<\/strong> (aka a numerical vector) for each chunk. Think of an embedding as a way to turn text into numbers so that machines can <em>understand<\/em> and <em>compare<\/em> it.<\/p>\n\n\n\n<p>We are using HeatWave&rsquo;s <a href=\"https:\/\/dev.mysql.com\/doc\/heatwave\/en\/mys-hwgenai-ml-embed-row.html\" target=\"_blank\" rel=\"noopener\" title=\"ML_EMBED_ROW encodes text into a vector embedding\"><strong>ML_EMBED_ROW<\/strong><\/a> stored function to generate embeddings:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"python\" class=\"language-python\">def embed_row(db_config: Dict[str, Any], search_item: str, embed_options: str = '{\"model_id\": \"minilm\"}') -&gt; str:\n    \"\"\"Generate an embedding for a given text using the ML_EMBED_ROW function\"\"\"\n\n    with mysql.connector.connect(**db_config) as conn:\n        with conn.cursor() as cursor:\n            sql_embed_row = 'SELECT VECTOR_TO_STRING(vect) FROM (SELECT sys.ML_EMBED_ROW(%s, %s) AS vect) AS dt;'\n            cursor.execute(sql_embed_row, (search_item, embed_options))\n            result = cursor.fetchone()\n\n            return \"\".join(result) if result else \"\"<\/code><\/pre>\n\n\n\n<p>The trick here in Python is obtaining the <strong>string representation of the vector<\/strong>, which we can do using the <a 
href=\"https:\/\/dev.mysql.com\/doc\/refman\/9.2\/en\/vector-functions.html#function_vector-to-string\" target=\"_blank\" rel=\"noopener\" title=\"VECTOR_TO_STRING(vector) - returns the string representation of the binary representation of a VECTOR \"><strong>VECTOR_TO_STRING<\/strong><\/a> function.<\/p>\n\n\n\n<p><code><em>embed_options<\/em><\/code>: specifies the settings for the embedding model. Here we are using  <code>\"<em>minilm<\/em>\"<\/code>.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Insert article chunks with their embeddings into HeatWave<\/h2>\n\n\n\n<p>Now it is time to store the text chunks and their corresponding embeddings in the <strong>vector store<\/strong> \u2014 <strong> HeatWave<\/strong> \u2014 specifically into the <em><code>wp_post_chunks_embeddings_minilm<\/code><\/em> table designed for this purpose:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"python\" class=\"language-python\">def insert_chunks_into_mysql(chunks: List[Dict[str, Any]], db_config: Dict[str, Any], batch_size: int = 900):\n    \"\"\"Insert text chunks with embeddings into the database\"\"\"\n\n    with mysql.connector.connect(**db_config) as conn:\n        with conn.cursor() as cursor:\n            insert_query = \"\"\"\n                INSERT INTO wp_post_chunks_embeddings_minilm (post_id, chunk_id, chunk_text, chunk_embedding)\n                VALUES (%s, %s, %s, STRING_TO_VECTOR(%s))\n            \"\"\"\n            buffer = []\n            for i, chunk in enumerate(chunks, 1):\n                chunk_embedding = embed_row(db_config, chunk['text'])\n                buffer.append((\n                    chunk['post_id'],\n                    chunk['chunk_id'],\n                    chunk['text'],\n                    chunk_embedding\n                ))\n\n                if i % batch_size == 0:\n                    cursor.executemany(insert_query, buffer)\n                   
 conn.commit()\n                    buffer.clear()\n\n            if buffer:\n                cursor.executemany(insert_query, buffer)\n                conn.commit()\n\n    print(f\"Inserted {len(chunks)} chunks into database.\")<\/code><\/pre>\n\n\n\n<p><code><em>batch_size<\/em><\/code>: defines how many chunks are inserted into the database at once. In this case, once the buffer reaches 900 chunks, the data is flushed in bulk \u2014 a much more efficient approach than inserting each chunk individually, both in terms of performance and memory usage.<\/p>\n\n\n\n<p>A key step here is <strong>converting the embeddings into HeatWave&rsquo;s native vector type<\/strong> before insertion. This is done using the <code><strong><em><a href=\"https:\/\/dev.mysql.com\/doc\/refman\/9.2\/en\/vector-functions.html#function_string-to-vector\" target=\"_blank\" rel=\"noopener\" title=\"Converts a string representation of a vector to a binary one\">STRING_TO_VECTOR<\/a><\/em><\/strong><\/code> function.<\/p>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Query Encoding and Vector Similarity Operations<\/h2>\n\n\n\n<p>Now it&rsquo;s time to transform the user&rsquo;s search query into a <strong>vector embedding<\/strong>, a numerical representation that captures its underlying meaning. This process, known as <strong>query encoding<\/strong>, allows us to go beyond simple keyword matching. Once the query is encoded as a vector, we can then perform a <strong>similarity search<\/strong> by comparing it against the precomputed vector embeddings of our documents within HeatWave. 
This comparison, often using metrics like <strong>cosine similarity<\/strong>, identifies the documents whose vector representations are most closely aligned with the query vector, thus retrieving the most semantically relevant results.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Encode the query into a vector embedding<\/h3>\n\n\n\n<p>As we saw in&nbsp;<a href=\"https:\/\/dasini.net\/blog\/2025\/03\/13\/build-an-ai-powered-search-engine-with-heatwave-genai-part-1\/\" target=\"_blank\" rel=\"noopener\" title=\"Build an AI-Powered Search Engine with HeatWave GenAI (part 1)\">Build an AI-Powered Search Engine with HeatWave GenAI (part 1)<\/a> and in <a href=\"https:\/\/dasini.net\/blog\/2025\/04\/08\/build-an-ai-powered-search-engine-with-heatwave-genai-part-2\/\" target=\"_blank\" rel=\"noopener\" title=\"Build an AI-Powered Search Engine with HeatWave GenAI (part 2)\">Build an AI-Powered Search Engine with HeatWave GenAI (part 2)<\/a>, to generate a vector embedding for the query, we use the&nbsp;<a href=\"https:\/\/dev.mysql.com\/doc\/heatwave\/en\/mys-hwgenai-ml-embed-row.html\" target=\"_blank\" rel=\"noopener\" title=\"ML_EMBED_ROW uses the specified embedding model to encode the specified text or query into a vector embedding.\">ML_EMBED_ROW<\/a>&nbsp;routine. <br>This function applies the specified embedding model to encode the given text into a vector representation. 
The routine returns a&nbsp;<code><a href=\"https:\/\/dev.mysql.com\/doc\/refman\/9.2\/en\/vector.html\" target=\"_blank\" rel=\"noopener\" title=\" The MySQL VECTOR Type\"><code>VECTOR<\/code><\/a><\/code>&nbsp;containing the numerical embedding of the text.<\/p>\n\n\n\n<p>Using it is straightforward. Let\u2019s define two variables:&nbsp;<code><em>@searchItem<\/em><\/code>&nbsp;(the text to encode) and&nbsp;<em><code>@embeddOptions<\/code><\/em>&nbsp;(the embedding model used for encoding):<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"sql\" class=\"language-sql\">SET @embeddOptions = '{\"model_id\": \"minilm\"}';\nSET @searchItem = \"Generative artificial intelligence\";\n\n-- Encode the query using the embedding model\nSELECT sys.ML_EMBED_ROW(@searchItem, @embeddOptions) into @searchItemEmbedding;<\/code><\/pre>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h3 class=\"wp-block-heading\">Similarity search<\/h3>\n\n\n\n<p>To retrieve relevant blog content, we perform vector similarity calculations using the&nbsp;<code><a href=\"https:\/\/dev.mysql.com\/doc\/refman\/9.2\/en\/vector-functions.html#function_distance\" target=\"_blank\" rel=\"noopener\" title=\"Calculates the distance between two vectors per the specified calculation method\"><strong>DISTANCE<\/strong><\/a><\/code>&nbsp;function. 
This function computes the distance between two vectors using&nbsp;<code><strong>COSINE<\/strong><\/code>,&nbsp;<code><strong>DOT<\/strong><\/code>, or&nbsp;<code><strong>EUCLIDEAN<\/strong><\/code>&nbsp;distance metrics.<br>Here, the two vectors being compared are the encoded query (<code>@searchItemEmbedding<\/code>) and the precomputed embeddings stored in the&nbsp;<code><em>wp_posts &amp; wp_post_chunks_embeddings_minilm<\/em><\/code> &nbsp;tables.<\/p>\n\n\n\n<p>You can now perform a <strong>cosine similarity search <\/strong>on the full articles using the following approach:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code lang=\"sql\" class=\"language-sql\">SQL &gt;\n-- Find the most semantically relevant WordPress posts \nSELECT \n    post_title,\n    post_excerpt,\n    guid,\n    min_distance\nFROM (\n    SELECT \n        p.post_title,\n        p.post_excerpt,\n        p.guid,\n        DISTANCE(@searchItemEmbedding, e.chunk_embedding, 'COSINE') AS min_distance,\n        ROW_NUMBER() OVER (PARTITION BY p.post_title ORDER BY DISTANCE(@searchItemEmbedding, e.chunk_embedding, 'COSINE')) AS rn\n    FROM wp_posts p\n        INNER JOIN wp_post_chunks_embeddings_minilm e ON p.ID = e.post_id\n) AS ranked\nWHERE rn = 1\nORDER BY min_distance ASC\nLIMIT 5\\G\n\n*************************** 1. row ***************************\n  post_title: HeatWave GenAI: Sentiment Analysis Made Easy-Peasy\npost_excerpt: This new AI tech, called generative AI (or GenAI), can dive deep into what people are saying and tell us if they\u2019re feeling positive, negative, or neutral.\nLet\u2019s see how HeatWave GenAI, can help you to enhance your understanding of customer sentiment, improve decision-making, and drive business success.\n        guid: https:\/\/dasini.net\/blog\/?p=3456\nmin_distance: 0.4673360586166382\n*************************** 2. 
row ***************************\n  post_title: HeatWave GenAI: Your AI-Powered Content Creation Partner\npost_excerpt: Generative artificial intelligence (GenAI) is reshaping the content creation landscape. By training on vast datasets, these \"intelligent\" systems can produce new, human-quality content across a multitude of domains.\n\nOracle's HeatWave GenAI (starting with version 9.0.1) is at the forefront of this revolution, offering an integrated platform that combines in-database large language models (LLMs), vector stores, and scale-out vector processing to streamline content generation.\nThis article explores how HeatWave GenAI is empowering businesses to produce high-quality content rapidly and effectively, making it an indispensable tool for industries demanding speed, accuracy, and security.\n        guid: https:\/\/dasini.net\/blog\/?p=1234\nmin_distance: 0.4789550304412842\n*************************** 3. row ***************************\n  post_title: Simplifying AI Development: A Practical Guide to HeatWave GenAI\u2019s RAG &amp;amp; Vector Store Features\npost_excerpt: This tutorial explores HeatWave GenAI, a cloud service that simplifies interacting with unstructured data using natural language. It combines large language models, vector stores, and SQL queries to enable tasks like content generation, chatbot, and retrieval-augmented generation (RAG). The focus is on RAG and how HeatWave GenAI\u2019s architecture helps users gain insights from their data.\n        guid: https:\/\/dasini.net\/blog\/?p=2345\nmin_distance: 0.5191197395324707\n*************************** 4. row ***************************\n  post_title: Building an Interactive LLM Chatbot with  HeatWave Using Python\npost_excerpt: AI-powered applications require robust and scalable database solutions to manage and process large amounts of data efficiently. 
HeatWave is an excellent choice for such applications, providing high-performance OLTP, analytics, machine learning and generative artificial intelligence capabilities.\n\nIn this article, we will explore a Python 3 script that connects to an HeatWave instance and enables users to interact with different large language models (LLMs) dynamically.\n        guid: https:\/\/dasini.net\/blog\/?p=5678\nmin_distance: 0.6004981994628906\n*************************** 5. row ***************************\n  post_title: In-Database LLMs for Efficient Text Translation with HeatWave GenAI\npost_excerpt: While HeatWave GenAI excels at generating English text, its capabilities extend to translation as well. Let's explore how we can effectively translate English content into French using this powerful tool.\n        guid: https:\/\/dasini.net\/blog\/?p=9876\nmin_distance: 0.6385803818702698<\/code><\/pre>\n\n\n\n<p>This SQL query is designed to retrieve the <strong>top 5 WordPress posts<\/strong> that are <strong>most semantically similar<\/strong> to a given embedding vector (<code>@searchItemEmbedding<\/code>), using <strong>cosine distance<\/strong> for comparison \u2014 but with a twist: it <strong>only considers the single best-matching chunk per post<\/strong>.<\/p>\n\n\n\n<p>Of course, you can also apply the techniques from <a href=\"https:\/\/dasini.net\/blog\/2025\/03\/13\/build-an-ai-powered-search-engine-with-heatwave-genai-part-1\/\" target=\"_blank\" rel=\"noopener\" title=\"Build an AI-Powered Search Engine with HeatWave GenAI (part 1)\">part 1<\/a> and <a href=\"https:\/\/dasini.net\/blog\/2025\/04\/08\/build-an-ai-powered-search-engine-with-heatwave-genai-part-2\/\" target=\"_blank\" rel=\"noopener\" title=\"Build an AI-Powered Search Engine with HeatWave GenAI (part 2)\">part<\/a><a href=\"https:\/\/dasini.net\/blog\/2025\/04\/08\/build-an-ai-powered-search-engine-with-heatwave-genai-part-2\/\" target=\"_blank\" rel=\"noopener\" title=\"\"> 2<\/a> to further improve 
the results. Methods like reranking or incorporating additional fields\u2014such as the article title\u2014while fine-tuning the weights can significantly enhance result relevance.<\/p>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<h2 class=\"wp-block-heading\">Peroration<\/h2>\n\n\n\n<p>In this third and final installment of our series on <strong>building an AI-powered search engine with HeatWave GenAI<\/strong>, we&rsquo;ve successfully expanded our capabilities to incorporate the full content of articles into the semantic search process. <\/p>\n\n\n\n<p>By implementing a more intricate pipeline involving fetching, cleaning, chunking, embedding, and storing article text, we&rsquo;ve aimed for a deeper level of understanding and more relevant search results. This evolution, building upon the foundations laid in Parts 1 and 2 with basic <strong>embedding<\/strong> and <strong>similarity searches<\/strong>, and refined with <strong>reranking<\/strong> and summary utilization, now leverages the comprehensive information within each article. <\/p>\n\n\n\n<p>The demonstrated <strong>SQL queries<\/strong> and <strong>Python<\/strong> code illustrate how HeatWave GenAI can efficiently manage and query <strong>vector embeddings<\/strong> derived from large text datasets, paving the way for a highly sophisticated and context-aware search experience. 
<\/p>\n\n\n\n<p>While further optimizations are always possible, this three-part journey showcases the powerful potential of HeatWave GenAI for creating intelligent, in-database search solutions.<\/p>\n\n\n\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><a href=\"https:\/\/www.linkedin.com\/groups\/12524512\/\" target=\"_blank\" rel=\"noopener\" title=\"Olivier DASINI on Linkedin\">Follow me on Linkedin<\/a><\/p>\n\n\n\n<p>Watch my videos on my <a href=\"https:\/\/www.youtube.com\/channel\/UC12TulyJsJZHoCmby3Nm3WQ\" target=\"_blank\" rel=\"noreferrer noopener\" title=\"Olivier's MySQL Channel\">YouTube channel<\/a> and <a href=\"https:\/\/www.youtube.com\/channel\/UC12TulyJsJZHoCmby3Nm3WQ\/?sub_confirmation=1\" target=\"_blank\" rel=\"noreferrer noopener\" title=\"Subscribe\">subscribe<\/a>.<\/p>\n\n\n\n<p>My <a href=\"https:\/\/www.slideshare.net\/freshdaz\" target=\"_blank\" rel=\"noreferrer noopener\" title=\"Olivier DASINI on Slideshare\">Slideshare account<\/a>.<\/p>\n\n\n\n<p>My <a href=\"https:\/\/speakerdeck.com\/freshdaz\/\" target=\"_blank\" rel=\"noreferrer noopener\" title=\"Olivier DASINI on Speaker Deck\">Speaker Deck account<\/a>.<\/p>\n\n\n\n<div style=\"height:25px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p class=\"has-vivid-red-color has-text-color\"><strong>Thanks for using HeatWave &amp; MySQL!<\/strong><\/p>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer\"><\/div>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In this latest post, the final part of my series on building an AI-powered search engine with HeatWave GenAI, I dive into enhancing AI-powered search by embedding full article content into HeatWave.<\/p>\n<p>By cleaning HTML, chunking content, generating embeddings, and running semantic similarity searches directly within HeatWave, we unlock highly relevant, context-rich search results.<br \/>\nIf you&rsquo;re building 
smart search engines or working with RAG pipelines, you\u2019ll find practical tips and SQL-powered techniques you can apply right away.<\/p>\n<p>This series demonstrates the powerful capabilities of HeatWave GenAI for building sophisticated, in-database AI-driven search solutions. Check out the article to level up your understanding of semantic search!<\/p>\n","protected":false}}