
Large Language Models and generative AI 

Large Language Models (LLMs) are a type of generative AI that can understand and create text. They can be used to answer questions, write creatively (such as headlines and blog posts), translate text, or create summaries.

How exactly does an LLM work?

An LLM like ChatGPT makes predictions based on the examples it was trained on and the context it is given. Take, for example, the flower Viola Tropio: it does not actually exist, yet as humans we can still say something about it.

It is probably brightly coloured, with small petals like a pansy and a green stem. It will need sunlight and cannot survive fire. Without ever having seen the flower, you can still predict its properties. That is called inference. An LLM works in the same way: it can generate new information and draw conclusions or make predictions based on existing knowledge and data.
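To make that idea of prediction concrete, the sketch below uses the Hugging Face transformers library with GPT-2 as a small stand-in model; the model choice, prompt and settings are illustrative assumptions, not part of the original example.

```python
# Minimal sketch: an LLM predicts the most likely continuation of a prompt.
# Assumes the Hugging Face "transformers" library; GPT-2 is a small stand-in model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "The Viola Tropio is a flower that"
# The model has never seen this fictional flower, but it predicts a plausible
# continuation from patterns it learned about flowers during training.
result = generator(prompt, max_new_tokens=30, num_return_sequences=1)
print(result[0]["generated_text"])
```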

As with people, an LLM combines this inference with other skills such as reasoning, problem solving, language comprehension and memory. This enables it to answer questions, generate text and draw conclusions. But unlike humans, an LLM lacks awareness, intuition, emotions and genuine creativity, which keeps it far removed from human intelligence.

What can you do with an LLM?

LLMs are good at understanding language. You can use them to do the following:

  • Summarise a text  

  • Answer questions on a given or existing topic  

  • Create lists of ideas, perspectives and possibilities for inspiration  

  • Convert a text into understandable language, such as language level B1

  • Translate a text or get answers in another language

  • Process programming language code, detect errors and suggest corrections

These are interesting new capabilities to add to your operations, provided, of course, that they actually lead to better results.
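As an illustration of the summarising and simplifying tasks in the list above, here is a minimal sketch using the OpenAI Python client; the model name, prompts and B1 instruction are assumptions made for the example, not a prescribed setup.

```python
# Minimal sketch: asking an LLM to summarise a text and rewrite it at language level B1.
# Assumes the OpenAI Python client and an API key in the OPENAI_API_KEY environment
# variable; the model name below is an illustrative choice.
from openai import OpenAI

client = OpenAI()

text = "..."  # the document you want to summarise

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You summarise texts and rewrite them at language level B1."},
        {"role": "user", "content": f"Summarise the following text in three sentences:\n\n{text}"},
    ],
)
print(response.choices[0].message.content)
```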


How correct should the answer be?

LLMs can unlock large amounts of data and present it in understandable language. It is important that users understand how likely it is that the answers an LLM gives are actually correct. As Matt Ginsberg of X, The Moonshot Factory, explained on Neil deGrasse Tyson's podcast StarTalk: LLMs are very good at the 49/51% principle, not the 100% principle. In other words, LLMs give correct answers more often than not, but by no means always. That makes a human in the loop essential to select the correct answer, and that selection can in turn be used to train the LLM to give better and better answers.

One application works as follows: an LLM interprets a customer's question from the chat interaction or phone call and presents a helpdesk employee with several candidate answers and relevant passages. The employee spends less time searching through systems and helps the customer faster.
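A hedged sketch of what such a flow could look like: the LLM proposes several candidate answers to an incoming customer question, and the helpdesk employee (the human in the loop) picks the one that is actually correct. The function, model name and prompts below are hypothetical and only meant to show the shape of this step.

```python
# Hypothetical sketch of a human-in-the-loop helpdesk flow: the LLM proposes
# candidate answers, and an employee selects the one that is actually correct.
# Assumes the OpenAI Python client; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

def suggest_answers(customer_question: str, n: int = 3) -> list[str]:
    """Ask the LLM for several candidate answers to show to a helpdesk employee."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You draft short, factual helpdesk answers."},
            {"role": "user", "content": customer_question},
        ],
        n=n,  # request several alternative completions to choose from
    )
    return [choice.message.content for choice in response.choices]

question = "How do I reset my password?"
candidates = suggest_answers(question)

# The human in the loop: the employee reviews the candidates and picks the correct one.
for i, answer in enumerate(candidates, start=1):
    print(f"Option {i}: {answer}\n")
chosen = candidates[int(input("Which option is correct? ")) - 1]
# The chosen answer can be logged and later used to further train or fine-tune the model.
```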

Download our white paper "Everything you need to know about Large Language Models".
