'Get a taste of AI' #2: Generative AI in practice - How to create business value from day 1

Date
24 October 2023

When you understand AI’s full potential you can use it purposefully. But how do you translate the versatile possibilities of AI into the specific context of your everyday business operations?


In this article, we'll take a closer look at the application of generative AI and explore how to use these insights in a practical way to generate immediate business value. 

Understanding the underlying technology is essential if you want to fully exploit the potential of generative AI. Generative AI is built on a powerful tool: the Large Language Model (LLM).

"If you want to use AI to tackle complex business problems and create real customer value, it is crucial to understand how LLM functions and how to use it effectively."

Raymond Muilwijk, Technology Officer Belgium & The Netherlands

How exactly does an LLM work?

An LLM predicts language based on trained examples and the given context. Take, for example, the flower Viola Tropio: it does not exist at all, but as a human being you can still say something about it.   

The flower is probably colourful with delicate petals, something like a violet, with bright colours and a green stem. It needs sunlight and will not survive fire. Without ever having seen the flower, you can still predict its properties. That's called inference. This is also how an LLM works: it can produce new information, conclusions or predictions based on existing knowledge or data.


Inference is an important aspect of intelligence, as are reasoning, problem solving, language comprehension, and memory, all of which a tool like ChatGPT can handle. But awareness, intuition, emotions, and real creativity are lacking, which still leaves an LLM a long way from human intelligence.

Want to know more about Large Language Models?

In this whitepaper, we look at the past, present, and future of AI language models. You can also learn tips and tricks to help you get started with AI.


What can you do with an LLM?

LLMs are very good at understanding language. You can use them to do the following:

  • Make a summary or get the gist of a story 

  • Instruct an LLM to ask questions about a topic 

  • Answer questions about a given or existing topic 

  • List ideas, perspectives, and possibilities for inspiration 

  • Convert a text to understandable language, such as language level B1 

  • Translate a text or get answers in another language 

  • Process programming languages, identify errors and suggest corrections

These are interesting new capabilities that you can add to your business operations, provided the results are accurate enough for the task at hand.
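
To make this concrete, here is a minimal sketch showing that most of the tasks in the list above come down to the same pattern: one instruction plus the text you want it applied to. The sketch assumes the OpenAI Python client purely as an illustration; any LLM API with a chat endpoint works the same way, and the model name, the ask_llm helper and the input file are placeholders, not part of a specific project.

```python
# Minimal sketch: the same call pattern covers summarising, simplifying,
# translating and question generation. Assumes the OpenAI Python client
# (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def ask_llm(instruction: str, text: str) -> str:
    """Send one instruction plus the text to work on, return the model's answer."""
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name; use whatever your provider offers
        messages=[
            {"role": "system", "content": instruction},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

article = open("press_release.txt").read()  # hypothetical input document

print(ask_llm("Summarise this text in three sentences.", article))
print(ask_llm("Rewrite this text at language level B1.", article))
print(ask_llm("Translate this text into Dutch.", article))
print(ask_llm("List five questions a reader might ask about this text.", article))
```

Only the instruction changes between the calls; that is what makes an LLM such a flexible building block.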


How can you use an LLM to solve your business problems?

What can you do with a building block that can work with human language and large amounts of data? Broadly speaking, an LLM does two things particularly well: interpreting language and transforming data.   

"LLM does two things particularly well:  interpreting language and transforming data."  
1. From language comprehension to added value 

An LLM understands human input. You can look at an LLM as something like a virtual person: if you can explain to someone what you are looking for or how something should be done, then you can probably automate it with an LLM.

Search in a new way 

A practical example of this is a house search application that we are currently building for a client. Until now, searching the website meant that users generally searched by region and then narrowed the results with predefined filters such as price and the number of square metres.

The LLM allows the user to start the search in any way: 

  • What exactly do you want? 

  • Can you give examples? 

  • What is essential to you? 

 The LLM interprets the input, can ask follow-up questions and then displays the matching results. This provides considerable added value compared to the usual way of searching for houses. 

The LLM that we use here does not need to be built or programmed: as a technology, it immediately understands the input that users give. It is only a matter of providing relevant data and then fine-tuning where necessary. The LLM can be deployed very quickly.   
 
In the example of a house search (or any other search application), an LLM offers the following new capabilities, illustrated in the sketch after the list:

  1. Conversational search
    Users can search for homes using their own choice of words by starting their search with questions and examples, such as "I'm looking for a house with at least 3 bedrooms near a train station" or "Show me houses with large gardens."

  2. Flexible searches
    Users can ask simple and complex questions, such as "Show houses with a swimming pool and a garage in Amsterdam" or "I want to rent an apartment in Rotterdam." They can use their own combinations of criteria, such as the number of bedrooms, the location, the type of home, the price range, etc. to obtain personalised results.

  3. Understanding context
    The LLM can interpret the context of a search and ask relevant follow-up questions to clarify specific user needs, such as "What is your maximum budget?" or "Do you have a preference for a particular neighbourhood?"

  4. Real-time adjustments
    The LLM can be linked to an external system so that sold houses and new listings are processed in real time.
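
Below is a minimal sketch of how the conversational and flexible search described above could be wired up: the LLM turns a free-text question into structured criteria, and ordinary code does the filtering. The listings, the JSON keys and the reuse of the ask_llm helper from the earlier sketch are illustrative assumptions, not the client's actual implementation.

```python
# Minimal sketch of conversational house search: the LLM extracts structured
# criteria from a free-text question; normal code filters the listings.
# Reuses the hypothetical ask_llm() helper from the earlier sketch.
import json

LISTINGS = [  # stand-in data; in practice this would be a live property feed
    {"city": "Amsterdam", "bedrooms": 3, "price": 450_000, "garden": True},
    {"city": "Rotterdam", "bedrooms": 2, "price": 300_000, "garden": False},
]

EXTRACT_INSTRUCTION = (
    "Extract search criteria from the user's message as JSON with the keys "
    "city, min_bedrooms, max_price and garden. Use null for anything not "
    "mentioned. Reply with JSON only."
)

def search(question: str) -> list[dict]:
    # In production you would validate the JSON; models occasionally deviate.
    criteria = json.loads(ask_llm(EXTRACT_INSTRUCTION, question))
    results = LISTINGS
    if criteria.get("city"):
        results = [h for h in results if h["city"].lower() == criteria["city"].lower()]
    if criteria.get("min_bedrooms"):
        results = [h for h in results if h["bedrooms"] >= criteria["min_bedrooms"]]
    if criteria.get("max_price"):
        results = [h for h in results if h["price"] <= criteria["max_price"]]
    if criteria.get("garden"):
        results = [h for h in results if h["garden"]]
    return results

print(search("I'm looking for a house with at least 3 bedrooms and a garden in Amsterdam"))
```

The same pattern extends to follow-up questions: when a key comes back as null, the application can ask the user to clarify, which is exactly the "understanding context" behaviour described above.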
      

2. Making data clear and accessible with language 

Alongside interpreting language, LLMs can also work with large amounts of data. For another client with a wide range of products and services, we use an LLM to help users find answers quickly.

Previously, users had to choose a suitable channel for their question: a chatbot, the navigation menu, keyword search, social media, or a phone line with selection menus. Then they had to interpret the information themselves. An LLM provides a single interface that can point users to the correct system or give the correct answer directly, as the sketch after this list illustrates:

  • Answering user questions
    An LLM allows users to ask questions in everyday language about a range of products and services, such as "Which insurance is best for my car?" or "What should I do if I have stone chips on my windscreen?". The LLM understands the context, which helps users get to an answer faster.

  • Summarising information
    An LLM can extract and list the specific passages from your sources that relate to the customer's question. A next step is to give an unambiguous answer based on those passages. This keeps the information relevant and also allows the passages to be cited as sources.

  • Understanding feedback
    LLMs also have some quite serious limitations. You can mitigate these by combining an LLM with vectorisation. This allows you to trace how the system arrives at a specific answer and identify which sources the information comes from, which gives you concrete starting points for improvement. For example, you might find a page that hasn't been updated for two years or discover that the model needs additional training on this topic.

  • Personal assistance  
    An LLM can make personalised recommendations based on user preferences and previous interactions, for example, by saying, "We have an offer that suits you perfectly based on your previous holiday destinations."  

  • Accessibility
    An LLM can provide an inclusive and accessible way for users to ask questions, regardless of their experience with technology. For example, legal texts such as terms and conditions, FAQs and documentation can be made accessible to people with low literacy. This can be offered via chat or voice.
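
Here is a minimal sketch of the combination of an LLM with vectorisation mentioned above: passages are embedded as vectors, the passage closest to the question is retrieved, and the LLM answers using only that passage so the source can be cited and checked. The example passages, the embedding model name and the reuse of the ask_llm helper are assumptions for illustration, not a specific client setup.

```python
# Minimal sketch of answering questions with traceable sources by combining
# an LLM with vectorisation (embeddings). Assumes the OpenAI Python client
# and the ask_llm() helper from the first sketch; any embedding model or
# vector store could be swapped in.
import numpy as np
from openai import OpenAI

client = OpenAI()

PASSAGES = [  # hypothetical knowledge-base snippets, e.g. from product pages
    "Stone chips in the windscreen are covered by our comprehensive car insurance.",
    "Third-party car insurance covers damage you cause to others, not to your own car.",
]

def embed(texts: list[str]) -> np.ndarray:
    data = client.embeddings.create(model="text-embedding-ada-002", input=texts).data
    return np.array([item.embedding for item in data])

PASSAGE_VECTORS = embed(PASSAGES)

def answer_with_source(question: str) -> str:
    # Retrieve the passage whose embedding is closest to the question (cosine similarity).
    q = embed([question])[0]
    scores = PASSAGE_VECTORS @ q / (
        np.linalg.norm(PASSAGE_VECTORS, axis=1) * np.linalg.norm(q)
    )
    best = int(scores.argmax())
    # Answer using only the retrieved passage, and report which passage was used.
    answer = ask_llm(f"Answer the question using only this passage: {PASSAGES[best]}", question)
    return f"{answer}\n\nSource: passage {best}"

print(answer_with_source("What should I do if I have stone chips on my windscreen?"))
```

Because every answer points back to a specific passage, an outdated page or a gap in the knowledge base shows up immediately, which is the feedback loop described above.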

How correct should the answer be?

LLMs can unlock large amounts of data and make it available in human language. It is important to be open about the probability of the LLM providing the correct answer. As Matt Ginsberg of X, the Moonshot Factory, explains in Neil deGrasse Tyson's podcast StarTalk: LLMs are very good at the 49/51% principle, not the 100% principle. This means that an LLM gives the correct answer more often than not, but not always. In that case, it is wise to add a human in the loop to select the correct answer, which you can then use to train the LLM to give better and better answers.
 
One application of this works as follows: an LLM interprets the customer's question based on the chat interaction or the telephone conversation. Next, the LLM presents a help desk employee with several candidate answers and relevant passages. The employee wastes no time searching through systems and can help the customer faster.
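
A minimal sketch of that help desk flow, again reusing the hypothetical ask_llm helper: the LLM proposes a few candidate answers, the employee chooses one, and the choice is logged so it can later feed back into training or knowledge-base improvements. The prompt, the log file and the console-based selection are placeholders for whatever your service tooling offers.

```python
# Minimal sketch of a human-in-the-loop help desk assistant, reusing the
# hypothetical ask_llm() helper: the LLM proposes candidate answers, the
# employee picks one, and the choice is logged as future training data.
import json

def propose_answers(customer_question: str, n: int = 3) -> list[str]:
    instruction = f"Give {n} short, distinct candidate answers as a JSON list of strings."
    # In production you would validate the JSON the model returns.
    return json.loads(ask_llm(instruction, customer_question))

def assist_employee(customer_question: str) -> str:
    candidates = propose_answers(customer_question)
    for i, candidate in enumerate(candidates, start=1):
        print(f"{i}. {candidate}")
    choice = int(input("Which answer do you want to send? "))
    chosen = candidates[choice - 1]
    # Log question and chosen answer so the 49/51% answers improve over time.
    with open("feedback_log.jsonl", "a") as log:
        log.write(json.dumps({"question": customer_question, "answer": chosen}) + "\n")
    return chosen

print(assist_employee("My windscreen has a stone chip, is that covered?"))
```

The logged choices are exactly the human link described above: they show where the model is reliable and where it still needs extra training or better source data.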

Focus on added value 

Although generative AI only broke through to the general public less than a year ago, it is on almost every top manager's radar. What Netscape was to the internet, ChatGPT is to AI: it provides a user-friendly interface for a new technology, making it accessible to millions and therefore widely adopted.

For businesses and organisations, adopting AI is essential if you want to remain relevant in the future. This has to be done in a responsible way that takes all restrictions and risks into account. AI is not a gimmick but a valuable technology that you can use to improve and optimise your business operations in a very targeted way. The first step towards using AI effectively is knowing exactly what you can do with it. Therefore, keep up with all the latest developments around generative AI. Our AI dossier is a great place to start.

About the speaker

Raymond Muilwijk

Technology Officer Belgium & The Netherlands

With a distinct vision on, and experience in, strategy, enterprise & solution architecture, product management and software delivery, Raymond is the right man in the right place at the Center of Excellence. He is responsible for iO's technology vision and acts as an accelerator of knowledge and innovation. Knowledge worth sharing, as the beating heart of any organisation.
