Importance of trust in AI

The round table highlighted that trust in AI is essential to its implementation and adoption. That trust spans several dimensions: legal regulation, the technical reliability of the models, and users’ subjective confidence. Without it, AI adoption cannot take hold.

AI regulation

AI regulation is advancing in many countries, and the historical distinction between a pro-regulation Europe and a pro-innovation US is fading. Regulations such as the European AI Act aim to ensure AI reliability and typically follow a risk-based approach: the higher the risk an application poses, the stricter the compliance obligations it must meet.
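
To make the risk-based principle concrete, the sketch below maps risk tiers to example obligations. It is a simplification for discussion, not legal guidance: the tier names follow the categories commonly cited for the AI Act, while the obligation lists are illustrative assumptions.

```python
# Illustrative sketch of a risk-based compliance mapping, loosely modeled
# on the tiers commonly cited for the European AI Act. A simplification
# for discussion purposes, not legal guidance.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited uses
    HIGH = "high"                  # strict conformity obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no specific obligations


# Example obligations per tier; higher risk means stricter compliance.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: ["risk management system", "human oversight",
                    "technical documentation", "conformity assessment"],
    RiskTier.LIMITED: ["disclose AI use to end users"],
    RiskTier.MINIMAL: [],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the example compliance obligations for a given risk tier."""
    return OBLIGATIONS[tier]


if __name__ == "__main__":
    print(obligations_for(RiskTier.HIGH))
```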

L’Oréal’s practices

L’Oréal continues to use AI despite growing public mistrust of the technology. The company has established key principles such as human oversight and transparency, and it invests in talent and training to build trust in its AI products. L’Oréal uses AI for scientific dossiers and virtual conversational agents, with measures in place to ensure model transparency and to track the models’ environmental footprint.

“We have defined some key principles that we will follow, and each of the applications that we develop will be monitored around these principles. Simple things, like human oversight, transparency, and measuring the environmental footprint of the models.”
Stéphane Lannuzel, Beauty Tech Program Director at L’Oréal

Sodexo’s strategy

To improve trust and AI adoption, Sodexo engaged its executive committee, trained its employees, and established rigorous governance. The company focuses on user experience and interface design so that AI recommendations, and how reliable they are, are clearly visible to users (a minimal sketch of this idea follows the quote below). Training employees and building communities to share best practices are essential to successful adoption.

“AI is a key enabler to get some operational efficiency and growth impact. To instill a new culture and create trust with end users, we have engaged our executive committee, trained our employees, and established a rigorous governance structure. This approach ensures that AI is deployed responsibly and effectively across our operations.”
Maxime Marembaud, Group Chief Digital & AI Officer at Sodexo
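
As a concrete illustration of the interface point above, the following minimal sketch pairs an AI recommendation with a user-facing reliability label derived from the model’s confidence score. All names and thresholds are hypothetical assumptions, not Sodexo’s actual system.

```python
# Hypothetical sketch: turning a raw model confidence score into a
# user-facing reliability label shown next to an AI recommendation.
# Thresholds and names are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Recommendation:
    text: str          # the AI-generated suggestion shown to the user
    confidence: float  # model confidence score in [0, 1]

    def reliability_label(self) -> str:
        """Map the confidence score to a simple label users can read."""
        if self.confidence >= 0.9:
            return "High reliability"
        if self.confidence >= 0.7:
            return "Medium reliability"
        return "Low reliability - please review"


rec = Recommendation(text="Reorder 40 kg of rice for site A", confidence=0.82)
print(f"{rec.text}  [{rec.reliability_label()}]")
```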

Enablers of reliable AI

Emphasis is placed on investing both in technological tools to track AI models throughout their lifecycle and in talent able to understand and manage those models. Close collaboration with data protection and legal teams is crucial so that compliance is integrated from the product design stage onward. Special attention goes to user interface design that explains AI recommendations to users.

Investment in talent

Companies must invest in talent, particularly data scientists and AI/ML engineers, to understand and refine AI models, even when the underlying models come from giants such as OpenAI, Google, or Microsoft. A deep understanding of the models is necessary to ensure their responsible use.

Systematic governance approach

Strong governance is essential to protect company assets and anticipate business challenges. This means governance bodies at local, regional, and global levels, equipped to track and assess AI models throughout their lifecycle and to ensure continuous compliance and reliability.
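
As a rough illustration of what lifecycle tracking could look like in practice, the minimal model-registry sketch below records a model’s stage and its compliance checks over time, and blocks promotion while any check is unresolved. Field names, stages, and check names are hypothetical assumptions, not any participant’s actual tooling; the example checks echo the principles mentioned above.

```python
# Hypothetical sketch of a minimal model registry for lifecycle governance.
# Stages, check names, and fields are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Stage(Enum):
    DEVELOPMENT = "development"
    VALIDATION = "validation"
    PRODUCTION = "production"
    RETIRED = "retired"


@dataclass
class ModelRecord:
    name: str
    version: str
    owner: str  # accountable team or person
    stage: Stage = Stage.DEVELOPMENT
    checks: dict[str, bool] = field(default_factory=dict)
    history: list[str] = field(default_factory=list)

    def record_check(self, check: str, passed: bool) -> None:
        """Log a compliance or reliability check (e.g. a bias audit)."""
        self.checks[check] = passed
        self.history.append(
            f"{date.today()}: {check} -> {'pass' if passed else 'fail'}"
        )

    def promote(self, stage: Stage) -> None:
        """Advance the model only if every recorded check has passed."""
        if not all(self.checks.values()):
            raise ValueError("Unresolved checks block promotion")
        self.stage = stage
        self.history.append(f"{date.today()}: promoted to {stage.value}")


model = ModelRecord(name="virtual-advisor", version="1.2.0",
                    owner="AI governance board")
model.record_check("human oversight defined", True)
model.record_check("environmental footprint measured", True)
model.promote(Stage.VALIDATION)
print(model.stage, model.history)
```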