'Get a taste of AI' #1: Innovating with Large Language Models (LLM) as taste-maker

Date: 11 October 2023

Anyone who has tried ChatGPT knows that we are at the dawn of a new era. A lot of businesses want to respond to these developments, but often still regard generative AI as a black box. The complexity and lack of transparency can lead to timidity and restraint, while AI as a technology offers a lot of unique opportunities and possibilities. Do you want your business to get ahead of the curve? Then you have to give AI a try: experiment, learn and apply. Experimentation is the only way to learn how and when AI can deliver value to your business operations.


According to McKinsey’s The State of AI in 2023, ChatGPT’s success has put generative AI on every company’s radar. You can serve your customers better and faster and make your employees more efficient. It’s an accessible tool that your company can deploy quickly within its own IT department.

AI as an ingredient

But before you look at the added value of AI, you need to understand what generative AI is. Making a comparison with cooking helps us with that: generative AI is a new ingredient your organisation can use to solve business problems and create customer value.

"AI is a new ingredient your organisation can use to solve business problems and create customer value."

Raymond van Muilwijk, Technology Officer Belgium & The Netherlands

If you are going to work with AI, you want to prevent it from becoming a zero-impact gimmick on the one hand, and to learn how to apply it effectively on the other. Put in culinary terms: you have to make every dish delicious and serve your customers an impeccable product, not a one-size-fits-all mish-mash of ingredients.

The best way to apply AI correctly in your business operations is to understand its capabilities and limitations.  
 
Generative AI like ChatGPT is powered by an LLM, which stands for large language model. In the near future, LLM as a technology will be a determining factor in the field of innovation, process optimisation and data-driven decision making.  

"In the near future, LLM as a technology will be a decisive factor in the field of innovation, process optimisation and data-driven decision making."

Raymond van Muilwijk, Technology Officer Belgium & The Netherlands

What exactly is a large language model (LLM)?

An LLM is a language model trained on huge amounts of text data using advanced machine learning techniques, which enables it to understand, process, and generate natural language.

When OpenAI introduced ChatGPT in November 2022, their technology became available to the general public via chat.   
 
According to McKinsey, today nearly 25% of top executives say they use generative AI for work. In addition to chat, there are other options for unlocking an LLM, including voice, API, and interfaces such as apps or virtual assistants. That makes an LLM a smart building block for language that you can deploy in all kinds of ways.   
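
To make this tangible for your IT team: the sketch below shows how a developer could call an LLM over an API. It is a minimal sketch, assuming an OpenAI-style chat completions endpoint and an API key stored in the OPENAI_API_KEY environment variable; other providers follow a very similar pattern.

    import os
    import requests

    def ask_llm(question: str, temperature: float = 1.0) -> str:
        """Send one question to an OpenAI-style chat completions API and return the answer."""
        response = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={
                "model": "gpt-3.5-turbo",    # assumed model name; use whatever your provider offers
                "temperature": temperature,  # 0 = most predictable answer, higher = more variation
                "messages": [{"role": "user", "content": question}],
            },
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]

    print(ask_llm("Summarise in one sentence what a large language model is."))

The same small building block can then sit behind a chat widget, a voice assistant or an internal tool.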


"LLM is not only about chat but also about voice, API and interfaces such as apps or virtual assistants"

Raymond van Muilwijk, Technology Officer Belgium & The Netherlands

The original dataset, fine-tuning and linked systems

An LLM derives its knowledge from an original dataset. This gives the LLM all the information it needs to communicate.

You can then modify and expand that original dataset, and fine-tune the model on top of it.
 
You can also provide an LLM with specific data while you’re using it, for example from a linked system. This provides access to the data for a specific task or session, but the LLM does not remember it in the long term. An example of this is a chat AI on the site of an energy supplier that provides current energy prices on request.   
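
That energy-supplier example could look roughly like the sketch below. It assumes a hypothetical get_current_prices() lookup in the supplier’s own pricing system and reuses the ask_llm() helper sketched earlier, imported here from a hypothetical llm_helpers module; the point is that the fresh data only travels along in the prompt for that one request.

    from llm_helpers import ask_llm  # hypothetical module holding the ask_llm() sketch shown earlier

    def get_current_prices() -> dict:
        # Hypothetical lookup in the supplier's own pricing system (the 'linked system').
        return {"electricity_eur_per_kwh": 0.32, "gas_eur_per_m3": 1.05}

    def answer_price_question(question: str) -> str:
        prices = get_current_prices()
        # The current data is placed in the prompt for this one request; the model does not retain it.
        prompt = (
            "You are a customer-service assistant for an energy supplier.\n"
            f"Today's prices: {prices}\n"
            f"Customer question: {question}"
        )
        return ask_llm(prompt)

    print(answer_price_question("What do I pay per kWh today?"))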
 
Thus, the dietary sources of an LLM consist of:  

  • The original dataset 

  • Fine-tuning 

  • Connected systems 

This allows an LLM to function adaptively and flexibly within your business operations, in harmony with your other processes.

Want to know more about Large Language Models?

In this white paper, we look at the past, present, and future of AI language models. You can also pick up tips and tricks to help you get started with AI.


The 6 essential ingredients in the AI kitchen

Just like the recipes and techniques we use in the kitchen when preparing dishes, the application of AI has best practices that help achieve the best results. 

Keep the following rules in mind when experimenting with AI: 

  1. AI is an ingredient, not a dish in itself.   
    You can't replace employees with AI because they do much more than just process data and language.   
     
    Ogilvy vice chairman Rory Sutherland calls this the Doorman Fallacy in his book Alchemy: if you replace your hotel’s doorman with an automatic door to save costs, you also lose all the other added value the doorman brings, such as arranging taxis, providing security, and greeting and recognising guests. You may save costs, but your hotel will also lose a lot of status.

  2. AI has many flavours.   
    AI is a technology that works in all kinds of different ways in practice. You don’t need one application that can do everything; you can use a specific AI for a specific task. For example: extract the keywords from a chat or phone call, use them to call a system that retrieves the relevant data, and then pass that data to an LLM to generate a summary answer (see the sketch after this list).
     
    This is how Godmode AutoGPT works, for instance: it combines the AutoGPT and BabyAGI models and works with three separate agents for task creation, task prioritisation and task execution. The result is an advanced AI tool that can work on autopilot.

  3. Start with the dish, then choose the ingredients.   
    Your dish is what makes your customers happy. If you take a certain ingredient as a starting point, it’s easy to lose sight of the customer. Now there is a proliferation of AI tools that are often fun and interesting, but many are just trying to take advantage of the hype.   
     
    Start by analysing how AI works and what it can do, and then think about how this can add value to your organisation, process, or application. Focus on increasing employee efficiency, reducing costs through automation, increasing revenue, or providing a better customer experience for users.

  4. Know your ingredients.   
    Found the ideal AI tools? Invest in workshops and training for your employees so that they learn to use them effectively. Not every ChatGPT user is immediately an effective prompt engineer who knows which prompts will deliver the best results.
     
    Don’t be afraid to take it seriously and use the correct terminology, such as LLM, vectorisation and custom models. As long as you clearly explain the impact AI will have on their work and the added value it brings, your employees will be enthusiastic adopters.

  5. Don't let AI be the dominant flavour in the dish.   
    In many cases, releasing AI interfaces directly to users is irresponsible; a human connection is still necessary in the short term.   
     
    AI can take on a wide range of supporting tasks, such as creating frequently asked questions and explanations, summarising and simplifying information, or maintaining an internal knowledge base for employees. This allows you to keep control over customer contact and prevents users from being exposed to a hallucinating AI that invents incorrect information or misinterprets user intent.

  6. Use the best ingredients.   
    The data used to feed an LLM must be of high quality: complete, accurate, up-to-date, understandable, and consistent. The same general rule of thumb applies here: garbage in, garbage out. This concerns not only the LLM’s training dataset, but ultimately also the prompts that the user enters. Asking the right questions is the difference between nonsense and real value. Does the user’s input match the intent of the task and the context of the company?
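
As promised under ingredient 2, here is a rough sketch of that task-specific flow: extract keywords from a conversation, call one of your own systems with them, and let an LLM summarise the result. find_orders() is a hypothetical stand-in for whatever back-end you would really query, and ask_llm() is again imported from the hypothetical llm_helpers module introduced earlier.

    from llm_helpers import ask_llm  # hypothetical module holding the ask_llm() sketch shown earlier

    def extract_keywords(transcript: str) -> list[str]:
        # Task 1: a narrow LLM job - pull the key search terms out of the conversation.
        answer = ask_llm("List the key search terms in this conversation, comma-separated:\n" + transcript)
        return [term.strip() for term in answer.split(",") if term.strip()]

    def find_orders(keywords: list[str]) -> list[dict]:
        # Task 2: hypothetical call to your own back-end system using those keywords.
        return [{"order": "12345", "status": "shipped", "matched": keywords}]

    def summarise_for_agent(transcript: str) -> str:
        # Task 3: a second narrow LLM job - summarise the retrieved records for a colleague.
        records = find_orders(extract_keywords(transcript))
        return ask_llm("Summarise these records for a support agent:\n" + str(records))

    print(summarise_for_agent("Customer called about order 12345 that has not arrived yet."))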


Generative AI has limits

For AI to come into its own as an ingredient, it is essential to keep in mind that AI has many limitations. For example, an LLM does not understand complexity and nuance the way humans do; it mimics language, as John Searle’s Chinese Room argument illustrates.
 
An LLM like ChatGPT may also find it more important to give an answer than to give a correct answer: so-called AI hallucination. A New York law firm recently experienced this when an LLM included six fictitious case law references in a legal argument.
 
In addition, the dataset on a very specific topic may be too limited, which increases bias: the output becomes narrow and skewed, leading to a distorted view of reality.

LLMs are non-deterministic: when asked the same question, an LLM can give different answers, just as different people answer the same question differently. That works well in a lot of situations, but with sensitive topics or major issues it can also confuse or frustrate users. Sometimes you just want to hear one demonstrably correct answer.
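
If that variability is unwanted for a particular task, most LLM APIs let you turn it down: in OpenAI-style APIs this is the temperature setting, where 0 makes answers as repeatable as the provider allows and higher values give more varied wording. A tiny usage example, again leaning on the hypothetical llm_helpers module from the earlier sketches:

    from llm_helpers import ask_llm  # hypothetical module holding the ask_llm() sketch shown earlier

    # With the default temperature the two answers below may be worded differently on every run;
    # temperature=0 makes them as repeatable as the provider allows (still not a hard guarantee).
    print(ask_llm("What is your refund period?", temperature=0))
    print(ask_llm("What is your refund period?", temperature=0))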

'Taste' the concept

An LLM is a generic tool that understands human language and data. Its application initially appears broad and vague, but it’s worth remembering that computers and the internet were regarded in much the same way in the beginning. Interesting, but what can you do with it? Today, the smartphone and its apps are the remote control of modern, everyday life. AI will have an even greater impact, but for now it’s only in its infancy.

"As a technology, LLM is a good place to start because it’s low-threshold and you can create measurable impact quickly"

Raymond van Muilwijk, Technology Officer Belgium & The Netherlands

Experiment with AI. Start with a proof of concept to learn how AI can give your business that little bit extra; experimenting will help you identify where the business value lies for you. As a technology, LLM is a good place to start because it’s low-threshold and you can create measurable impact quickly. Learn to use an LLM and don’t make assumptions: approach it with an open mind and allow yourself to be surprised. And remember, keep refining and improving the dish so that your customers think: mmm, that tastes great.

About the speaker

Raymond van Muilwijk

Technology Officer Belgium & The Netherlands

With a distinct vision on, and experience in, strategy, enterprise & solution architecture, product management and software delivery, Raymond is the right man in the right place at the Center of Excellence. He is responsible for iO’s technology vision and acts as an accelerator of knowledge and innovation: knowledge worth sharing, as the beating heart of any organisation.
