
Ethics & compliance

These emerging trends will inevitably bring with them a wave of new regulation on the proper and ethical use of AI. The EU is first out of the blocks here with the European AI Act, a law whose final form was strongly shaped by the rise of popular generative AI systems such as ChatGPT.


EU AI Act

The EU AI Act includes rules for the proper and ethical use of artificial intelligence within the European Union.

It entered into force on 1 August 2024, and its provisions apply in stages over the following 6 to 36 months.

A law for AI applications

The law applies to almost all AI applications across sectors, with exceptions for AI used solely for military purposes, national security, research, or personal projects. Rather than granting rights to individuals, the EU AI Act regulates the companies and organisations that develop or deploy AI professionally.

AI systems are classified according to the risk they pose: 

  • Unacceptable risk: these applications are strictly prohibited.

  • High risk: these must meet strict requirements on safety, transparency, and quality.

  • Limited risk: these only need to comply with transparency requirements.

  • Minimal risk: these fall outside the regulation.

General-purpose AI is also subject to transparency requirements, with special rules for open-source and particularly powerful models. A European Artificial Intelligence Board will also be set up to ensure that member states cooperate and comply with the law.
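
To make the tiered approach concrete, here is a minimal Python sketch of how the four risk levels could map to obligations. The example systems and their classifications are simplified illustrations for this sketch, not legal determinations under the Act.

from enum import Enum

class RiskTier(Enum):
    # The four risk tiers of the EU AI Act, each paired with a
    # one-line summary of the obligations at that level.
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict safety, transparency, and quality requirements"
    LIMITED = "transparency requirements only"
    MINIMAL = "outside the scope of the regulation"

# Illustrative mappings only; classifying a real system requires
# checking the use cases listed in the Act itself.
EXAMPLE_SYSTEMS = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV-screening tool for recruitment": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLE_SYSTEMS.items():
    print(f"{system}: {tier.name} risk, {tier.value}")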

Impact on internal policies 

These rules will not apply only at national and international level; the shift is already noticeable inside companies. Employees are increasingly being retrained to use AI effectively in business applications, and with Google's AI Principles and Microsoft's Guidelines for Human-AI Interaction, some large companies have already taken the lead.

For example, Microsoft's human-AI interaction guidelines create a clear framework to ensure that interactions between humans and AI systems are ethical, effective, and user-friendly.  

Microsoft's Guidelines for human-AI interaction

Here are the key insights: 

Transparency 
AI systems should make clear when a user is interacting with AI and indicate what data the system uses. This transparency helps users better understand the AI's capabilities and limitations.

Accountability 
There should be clear lines of accountability for actions and decisions made by AI. Users should be able to see who is responsible for the AI system and the results it generates, so that there is an easy and obvious way to address problems. 

Fairness 
AI systems should be designed to avoid bias in their algorithms and data and to treat all users equitably.

Privacy 
Protecting user privacy is crucial. AI systems must handle personal data responsibly, use it only for its intended purpose, and ensure compliance with data protection rules.

Safety and security 
AI systems must be built robustly and securely to prevent misuse or damage.  

Empowering users 
The AI system should provide users with clear options and sufficient control to direct or modify its behaviour. 

Inclusiveness and ethics 
AI interactions should be accessible and usable by people with different needs, taking into account different languages, cognitive abilities, and physical capabilities. AI should be guided by ethical principles, including respect for human dignity and the social impact of technology. 

Continuous improvement 
AI systems should be regularly updated and improved based on user feedback and new best practices. This keeps the system effective and tailored to user needs. 

Human-AI interaction: in-, on- and out-of-the-loop

Although large language models and AI are already delivering impressive performance gains, human engagement remains a crucial success factor. As a user, you need to cooperate with AI to generate accurate and relevant results, and that cooperation happens at several stages: at input, at output, and everywhere in between.

The three main types of interaction between humans and AI  

Human-in-the-loop

You work together with AI to achieve something. In a design tool, for example, AI can help you brainstorm ideas or concepts, which you then refine yourself. You and the AI each contribute to the end result, combining your strengths to produce something better.

Human-on-the-loop

AI makes suggestions or proposes options, but the final decision rests with the user. This is common in situations where human judgement is crucial: in medical diagnosis, for example, a doctor uses AI to analyse data but ultimately decides on the treatment.

Human-out-of-the-loop

Here, AI acts independently, with little or no human involvement in real-time decision-making: it performs tasks automatically based on instructions or preferences you have set. A smart thermostat, for example, adjusts the temperature in your home without you having to do so manually.

In all these interactions, the goal is to make your tasks easier, faster, or more efficient by using AI's capabilities, while you remain in control.
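
The difference between the three patterns comes down to where the human decision sits in the workflow. The Python sketch below illustrates that difference; ai_suggest is a hypothetical stand-in for a call to an AI model, and the input() prompts stand in for a real review interface.

def ai_suggest(task: str) -> str:
    # Hypothetical stand-in for a call to an AI model.
    return f"AI draft for: {task}"

def human_in_the_loop(task: str) -> str:
    # Human and AI build the result together: the AI drafts, the person refines.
    draft = ai_suggest(task)
    refinement = input(f"Refine the draft ({draft!r}), or press Enter to keep it: ")
    return refinement or draft

def human_on_the_loop(task: str) -> str:
    # The AI proposes, but nothing is applied until the human approves it.
    proposal = ai_suggest(task)
    approved = input(f"Apply {proposal!r}? [y/n] ").strip().lower() == "y"
    return proposal if approved else "proposal rejected by reviewer"

def human_out_of_the_loop(task: str) -> str:
    # The AI acts automatically on preset preferences, like a smart thermostat.
    return ai_suggest(task)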

Need for an AI ethics committee 

Artificial intelligence carries significant ethical risks for companies. It can reinforce prejudice, lead to invasions of privacy and, in the case of self-driving cars, even cause fatal accidents.

Because AI is built to operate at scale, the impact of any problem that arises is equally large. You therefore need a review body, consisting of ethicists, lawyers, technologists, and business strategists, to monitor the AI your company develops or buys, identify the ethical risks, and work out how to mitigate them.

The function of such an AI ethics committee within an organisation is to establish authoritative guidelines on the use and operation of AI technology and the data associated with it.
