Länsförsäkringar Göteborg

How can you teach children about the dangers of social media?

One in five teenagers spends all their free time on social media, according to a Swedish report. And while these platforms may provide endless entertainment, little is known about the effects of high screen time on children's mental health. Länsförsäkringar wanted to change that.

  • Custom AI tool 

  • Based on OpenAI’s large language model GPT-4

  • Will help children learn about mental health 

  • The first model of its kind 

A long-standing commitment to mental health issues

As an insurer, Länsförsäkringar has been actively supporting young people in their fight against mental health struggles, bullying, and the effects of social media — as shown by past campaigns like #ImPerfect and The Lamp That Measures Time Spent on Social Media.  

This time, Länsförsäkringar wanted to dive into screen time and its impact on the mental health of today’s kids. Since smartphones and social media are relatively recent phenomena, there is still plenty of room for new insights and experimentation.

And so, Länsförsäkringar and iO investigated the possibility of extending an existing AI language model — or grounding it in a specific dataset — in collaboration with leading psychologists and children’s specialists.

Challenge

Show the impact of high screen time on young people’s mental health

Solution

A custom AI tool based on in-depth research

About Länsförsäkringar

Länsförsäkringar is a Swedish group of 23 independent customer-owned insurance companies — one for each of the counties of Sweden. Länsförsäkringar Bank is a subsidiary of Länsförsäkringar and is one of the largest retail banks in the country. Länsförsäkringar has almost 3.8 million customers and employs over 6,400 people.


Introducing Laika

What happens to a child’s self-image and world view when they get 100% screen time? 

That was the main question when we came up with Laika, an AI experiment that simulates extreme exposure to social media.  

This is how it works: Laika was exposed to transcripts of TikTok content using OpenAI’s GPT-4 — the most capable language model on the market at the time. In the early stages, Laika relied solely on GPT-4’s base training. But to help children discover the impact of screen time, we needed more specific data about social media and the behavioural effects of extensive use.

Our teams created a large dataset based on existing, extensive behavioural research. This dataset was then used to further prime the GPT-4 base model and, in doing so, extend the language model into something entirely new — Laika. That’s where Lisa Thorell, one of Sweden’s most renowned psychologists, came in. She helped evaluate, validate, and explore Laika’s output.

Most of our work on this project consisted of refining the way Laika responds to certain prompts and tweaking the dataset.
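
The case study does not disclose Laika’s actual implementation, but the general pattern of grounding a GPT-4 chat model in a curated dataset can be sketched as follows. Everything in this example is an assumption for illustration: the dataset file, the persona instructions, and the prompts are hypothetical, and the real project may have used fine-tuning or a different retrieval setup.

    import json
    from openai import OpenAI  # openai Python SDK (v1+)

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical: curated behavioural-research excerpts and TikTok
    # transcript samples used to "prime" the persona.
    with open("laika_dataset.json") as f:
        excerpts = json.load(f)

    # Hypothetical persona instructions plus grounding material packed
    # into the system prompt.
    laika_persona = (
        "You are Laika, a 13-year-old whose entire world view comes from "
        "social media feeds. Answer in character, informed by the research "
        "excerpts below.\n\n" + "\n".join(excerpts[:50])
    )

    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": laika_persona},
            {"role": "user", "content": "How do you feel after a day of scrolling?"},
        ],
    )
    print(response.choices[0].message.content)

In a setup like this, most of the iteration happens in the system prompt and the selection of excerpts rather than in the code itself, which matches the kind of prompt and dataset refinement described above.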

The target group for Laika is twofold: 

  • Journalists will get access to the AI experiment. They will be able to book a time slot with Laika and interview her, to investigate what the tool can do for the public at large; 

  • Schools and teachers will also get access to Laika — to use the experiment in class discussions about the importance of mental health and the impact of screen time. 


What’s next for Laika?

The results of Laika don’t lie. The AI experiment was featured in the national press, and we are currently targeting international media. Moreover, around 100 school classes have signed up for an educational session about Laika and the risks of social media. 

If GPT-4’s multimodal support grows in the future — i.e., if the model can interpret photo and video content as well as plain text — the input used for Laika would improve drastically. In other words, the persona could draw on actual social media videos rather than transcripts of videos, bringing Laika’s conversational skills to another level.

Good to know: while building Laika, our experts did a great deal of research into compliance. We’re using an Azure OpenAI service hosted in Europe, which gives Laika a favourable compliance position under Europe’s stringent data protection laws.
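
For illustration only: the case study does not share configuration details, but connecting to an Azure OpenAI resource in a European region with the openai Python SDK typically looks like the sketch below. The resource name, deployment name, and API version are placeholders.

    import os
    from openai import AzureOpenAI  # openai Python SDK (v1+)

    # Placeholder values: the real resource would live in a European Azure
    # region (e.g. Sweden Central or West Europe) chosen for data residency.
    client = AzureOpenAI(
        azure_endpoint="https://<your-eu-resource>.openai.azure.com/",
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-02-01",
    )

    response = client.chat.completions.create(
        model="<your-gpt4-deployment>",  # Azure expects the deployment name here
        messages=[{"role": "user", "content": "Hi Laika, how was your day?"}],
    )
    print(response.choices[0].message.content)

Apart from the endpoint and deployment name, the call is the same as against OpenAI’s own API, which is what makes the European hosting choice largely a configuration decision rather than a rewrite.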


Looking for a reliable partner for your AI project? 

Whatever goals you want to reach, our experts are happy to help.