‘Get a taste of AI’ #3: How vectors add flavour and value

Date
9 February 2024

Every new era requires new fuel, and the AI era is no exception. Simply collecting raw data is not enough to effectively utilise generative AI – you also need to process and interpret the data. The vector database bridges the gap between data and AI, enabling valuable output and optimal customer experiences. In this article in our ‘Get a taste of AI’ series, you will learn how vectors work and gain the tools to apply them in your daily practice.

The term “vector” may not be familiar to the general public, but its impact certainly is. Vectors provide more accurate results and recommendations in search engines and recommendation engines, improving the user experience. They add context and relevance, making the difference between a positive and a negative experience. But how does this work exactly? And how can your organisation leverage vectors for AI?


Vectors translate reality into data

A vector is a series of numerical values, such as a point in a three-dimensional coordinate system with X, Y and Z axes. With vectors you can perform various calculations, such as determining the distance to a restaurant in Amsterdam, or identifying which cities lie below sea level (a negative value on the elevation axis).

In the context of AI, vectors can also represent words; such vectors are known as embeddings. Embeddings allow you to capture meaning and semantics. For example, you can create an embedding of the flavour profile of a dish along five dimensions: salty, sweet, bitter, sour, and umami. The value in each dimension is determined during model training. This lets you capture the flavour profile of recipes and compare them: which recipes are exceptionally sweet, and which recipes resemble each other?
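To make this concrete, here is a minimal sketch in Python. The five flavour dimensions come from the example above, but the dish names and numeric values are purely illustrative, not the output of a real model:

```python
import math

# Each dish is a vector over five flavour dimensions:
# [salty, sweet, bitter, sour, umami] -- values are illustrative.
caramel_fudge = [0.2, 0.9, 0.1, 0.1, 0.1]
miso_soup     = [0.7, 0.1, 0.2, 0.1, 0.9]
tiramisu      = [0.1, 0.8, 0.4, 0.1, 0.2]

def cosine_similarity(a, b):
    """How similar two flavour profiles are (1.0 = pointing the same way)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

print(cosine_similarity(caramel_fudge, tiramisu))   # high: both are sweet
print(cosine_similarity(caramel_fudge, miso_soup))  # lower: sweet vs salty-umami
```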


Recipes can be described in countless dimensions using vectors: spiciness, likelihood of crumbling, texture, cuisine, preparation time, ingredients used, and so on. Each dimension allows you to compare or group recipes by measuring the distance between their vectors along the dimensions you are searching on. These vectors are stored in a vector database. The more dimensions you capture, the more complex and specific the comparisons and groupings you can make. The ability to convert a piece of text into a vector and perform calculations on it, such as weighting terms and handling synonyms, is what enables natural language processing (NLP).
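A vector database essentially performs this distance measurement at scale. The toy sketch below mimics the core idea with a plain Python dictionary; real embeddings would come from a trained model and span hundreds of dimensions (all names and values here are illustrative):

```python
# Toy "vector database": recipes along illustrative dimensions
# [spiciness, sweetness, preparation time (normalised)].
recipes = {
    "pepper steak": [0.8, 0.1, 0.6],
    "green curry":  [0.9, 0.3, 0.4],
    "pancakes":     [0.0, 0.7, 0.2],
}

def euclidean(a, b):
    """Straight-line distance between two recipe vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest(query, k=2):
    """Return the k recipes whose vectors lie closest to the query."""
    return sorted(recipes, key=lambda name: euclidean(query, recipes[name]))[:k]

print(nearest([0.85, 0.2, 0.45]))  # -> ['green curry', 'pepper steak']
```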

When you combine a vector database with an LLM such as ChatGPT, you can achieve some particularly interesting things. The main advantage is gaining control over the data the LLM works with.

Taking control of your data with vectors

ChatGPT is limited by a knowledge cutoff: GPT-3.5's knowledge runs to January 2022 and GPT-4's to April 2023. While users can supply current information in a prompt, the underlying model remains based on data up to its cutoff date. Updating the model with new data is time-consuming and costly: training GPT-3, for example, was estimated to cost $12 million, and GPT-4 more than $100 million. The big question, then, is how a company can make its business information accessible and up to date through an LLM without having to retrain or fine-tune it. The answer lies in dividing the tasks.

An LLM is essentially a smart language building block that can be added anywhere. A vector database is a separate building block that can be connected to the LLM, just like a "standard" database with product information, for example. This division of tasks is called retrieval-augmented generation (RAG): based on the user's input, you retrieve pre-selected information from the (vector) database and pass it to the LLM along with the question. The (vector) database can be kept up to date with, for example, the three most relevant pages of your website, the latest energy prices, or the availability of your products. This way you not only keep the data current but also retain full control, allowing you to steer the success of your LLM yourself.
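Sketched in code, a minimal RAG loop could look like the example below. The embed() function is a deliberately naive stand-in (a toy bag-of-words count); in practice you would call a real embedding model, and the final prompt would be sent to an LLM. The documents, vocabulary, and question are all hypothetical:

```python
# Hypothetical sketch of retrieval-augmented generation (RAG).
documents = [
    "Our variable energy price is updated monthly on the tariffs page.",
    "Winter jackets ship within two working days.",
    "Returns are free within 30 days of delivery.",
]

def embed(text: str) -> list[float]:
    # Toy embedding: word counts over a tiny vocabulary, for illustration only.
    vocab = ["energy", "price", "jacket", "ship", "return", "delivery"]
    words = text.lower().split()
    return [float(sum(w.startswith(v) for w in words)) for v in vocab]

def retrieve(question: str, k: int = 1) -> list[str]:
    """Fetch the k documents whose vectors best match the question vector."""
    q = embed(question)
    score = lambda doc: sum(x * y for x, y in zip(q, embed(doc)))
    return sorted(documents, key=score, reverse=True)[:k]

question = "How quickly do you ship jackets?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would now go to the LLM of your choice
```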

The value of semantic search

Vectors and embeddings make data understandable and interpretable, enabling semantic search. This unlocks a range of business applications.

Vectors can be applied wherever people interact with data, information, and knowledge. Extensive textual documentation, such as terms and conditions or manuals, can be unlocked with a vector database to provide specific answers to complex queries. This versatility opens the door to crucial information, enabling knowledge-intensive organisations to optimise their operational processes and add more value to their services.

And the beauty of vectors? Organisations can implement them quickly and create added value.

Implementing the vector database in practice

For the technical side, you'll need someone who can build a simple application, typically a developer. Many tools are available nowadays, both paid and free; it's a matter of making time and experimenting. Here's what you'll need:

  • An LLM. You can use paid SaaS services such as OpenAI or Azure OpenAI, or run a local model yourself, such as Meta's LLaMA 2 or a model from the French company Mistral AI.

  • A vector database. Both paid and free tools are available, such as Pinecone, or Weaviate, a Dutch open-source startup offering serverless, on-premises, and cloud options.

Using a pre-trained LLM saves significant time. OpenAI recently introduced a user-friendly way to build your own GPT (Generative Pre-trained Transformer) without coding. You can access the vector database directly via its API or through a client provided by the chosen solution, such as the Pinecone or Weaviate client. After connecting the client, creating an index, and adding vectors, you can perform your first nearest-neighbour search. According to Weaviate, you can get its quickstart up and running with test vectors in around 20 minutes.
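For illustration, a Pinecone-flavoured quickstart could look like the sketch below. It assumes the pinecone Python client (v3 or later) and a valid API key; the index name, dimensions, and vectors are placeholders, so check the Pinecone (or Weaviate) documentation for the current API before copying anything:

```python
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_API_KEY")  # placeholder key

# Create a small serverless index for 5-dimensional flavour vectors.
pc.create_index(
    name="recipes",                 # placeholder index name
    dimension=5,
    metric="cosine",
    spec=ServerlessSpec(cloud="aws", region="us-east-1"),
)
index = pc.Index("recipes")

# Add a vector, with an id and optional metadata such as the source URL.
index.upsert(vectors=[
    {"id": "tiramisu", "values": [0.1, 0.8, 0.4, 0.1, 0.2],
     "metadata": {"url": "https://example.com/recipes/tiramisu"}},
])

# First nearest-neighbour search: which stored vectors sit closest to this one?
result = index.query(vector=[0.2, 0.9, 0.1, 0.1, 0.1], top_k=3, include_metadata=True)
print(result)
```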

Applying a vector database is not solely an IT task. Due to its various applications, the use of vectors and embeddings is becoming more integrated with an organisation’s operational and commercial processes. For example, providing better answers to customers online directly impacts the workload of the customer service department and customer satisfaction. Therefore, consider the application and the creation of added value as your starting point.

Best practices for vector databases

Designing and building on top of an LLM is becoming easier with the arrival of custom GPTs. OpenAI wants to encourage the creation of task-specific chatbots, and a marketplace for offering and using them is on the horizon, which will further accelerate innovation and adoption. To stay ahead of the competition, apply the following best practices to get the most out of vector databases.

  1. The vector database does not replace other databases. You can connect an LLM to various databases, such as a product database, to provide direct answers for which semantic search is not required, for example "Is this winter jacket available in size XXXL?" or "What is the availability?". For queries that depend on meaning and relationships, use a vector database.

  2. Garbage in, garbage out. The results of a vector database depend on the quality of the data. The data must be accurate, well organised, and properly transformed into vectors. The number of dimensions plays a significant role: the more dimensions you capture about recipes, for example, the more specific the questions you can answer, such as whether a dish contains gluten or peanuts. Therefore, use only high-quality data and keep improving it.

  3. Vectors can represent anything. As numerical representations, vectors can stand for all kinds of things; once a vector carries meaning, it is called an embedding. You can create embeddings of words, images, or sounds. This allows users to search for a specific image and request related ones. It is how Google Photos lets you search uploaded photos for subjects like cat, garden, or glass. It also opens the door to contextual embeddings and arithmetic on the representations themselves, such as the classic word-embedding example king − man + woman ≈ queen.

  4. Don’t make vectors too large. The size of the text behind each vector is an important factor: smaller chunks generally yield more accurate output. Avoid creating vectors for entire pages or for individual words; a good size for standard text is 100 to 300 words. Each FAQ entry, for example, can become a separate vector, giving you smart, self-contained pieces of information (see the sketch after this list).

  5. Store the URL as well. Alongside the vector, you can store metadata such as the URL of the page the text came from. This helps you trace the source when you receive negative feedback, for instance, or when content changes: if the terms and conditions of an insurance policy are updated, you will want to update the corresponding vectors too.

  6. Dimensions come at a cost. Vector databases with many dimensions can be expensive. As an organisation, weigh the cost of additional dimensions against the accuracy of the results: which questions do you want to be able to answer, and which are less valuable?
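To illustrate best practices 4 and 5, the sketch below splits a long document into chunks of at most 250 words and attaches the source URL as metadata, ready to be embedded and stored. The function names, chunk size, and example text are illustrative:

```python
def chunk_text(text: str, max_words: int = 250) -> list[str]:
    """Split a long document into chunks of at most max_words words."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def prepare_records(text: str, url: str) -> list[dict]:
    """Pair each chunk with its source URL, ready to embed and store."""
    return [
        {"id": f"{url}#chunk-{n}", "text": chunk, "metadata": {"url": url}}
        for n, chunk in enumerate(chunk_text(text))
    ]

policy_text = "The insurer covers damage caused by storm, hail and fire. " * 60
records = prepare_records(policy_text, "https://example.com/terms")
print(len(records), records[0]["id"])  # 3 chunks, each tagged with its source URL
```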

The human element remains central

The AI era has arrived. Every service and product can now be supported by a specialised AI chatbot that answers a wide range of questions. With this advancement, data and knowledge become more accessible and the customer experience improves. However, the key to success does not lie in full automation or in surrendering entirely to the technology.

Human input is essential to unlock the full potential of AI. At the core lies the quality of the data used to train an LLM and to populate a vector database. If AI-generated content is used instead of human-written content, the quality of the output deteriorates significantly. A vector database does not inherently know that a pepper steak is spicy but doesn't crumble; a human has to add that interpretation. Human input also plays a crucial role in the ethics of the AI landscape. Ethics and responsibility are inherently linked to the development and implementation of AI technologies, and the ability to give meaning to specific words, understand cultural context, and make moral judgements is inherently human. Ultimately, the human touch is what makes or breaks your AI model: success is defined by your users.

About the author

Raymond van Muilwijk

Technology Officer Belgium & The Netherlands

With a distinct vision on, and experience in, strategy, enterprise & solution architecture, product management and software delivery, Raymond is the right man in the right place at the Center of Excellence. He is responsible for iO's technology vision and acts as an accelerator of knowledge and innovation: knowledge worth sharing, as the beating heart of any organisation.

Want to know more about Large Language Models?

In this whitepaper, we look at the past, present, and future of AI language models. It also offers tips and tricks to help you get started with AI.

