AI for Industry Summit by Artefact - September 17th, 2024 - Paris

Key learnings from the panel discussion with Fabien Mangeant, Chief Data & AI Officer at Air Liquide, Rodolphe Gelin, Expert AI and Robotics at Renault Group, and Daniel Duclos, Scientific Director Digital Sciences and Technologies Department at Safran.
Moderated by Alexandre Thion de la Chaume, Managing Partner and Global BtoB Lead at Artefact.

Air Liquide’s approach to AI regulation and trust

Fabien Mangeant explained that trust in AI systems is crucial, especially in safety-critical environments like gas production. Air Liquide uses AI in a wide range of applications, from operational control to planning. The company is adopting the AI Act's risk-based approach to categorize AI systems according to their criticality. This framework helps manage the safety and reliability of AI systems deployed to customers and workers. Air Liquide had been working on trust in AI before formal regulations existed, and the new laws help clarify and formalize its practices.

Renault’s integration of trustworthy AI in automotive

Rodolphe Gelin discussed how AI optimizes both vehicle safety features and production processes. AI-driven systems, such as autonomous emergency braking, were implemented before the AI Act. Upon review, Renault found that its processes already met many regulatory requirements. Renault now uses the AI Act to further validate the safety of its AI systems, ensuring that the AI functions reliably when it takes control of a vehicle. Trust in AI is vital for both users and engineers, especially as AI plays an increasing role in vehicle safety.

Safran’s long-term strategy for trusted AI

Daniel Duclos from Safran highlighted the long development cycles in the aeronautics industry, which can span decades for engines and materials. Safran participates in large European programs, such as the Aviation initiative, to incorporate AI into aeronautics. These programs facilitate collaboration with startups and technology providers. Safran has a strong regulatory culture and works closely with the European Union Aviation Safety Agency to adapt existing safety standards to AI. Safran’s goal is to ensure AI systems are safe for both product development and operational processes.

Developing tools for AI governance at Air Liquide

Fabien Mangeant explained how Air Liquide developed a transversal methodology to ensure transparency, robustness, and explainability in AI. The company has integrated AI-specific tools into its machine learning operations (MLOps) pipeline to assess and test the reliability of AI models before deployment. AI governance is part of a broader strategy to scale trustworthy AI across the organization, ensuring compliance with regulations and enhancing internal processes for AI development.
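The idea of gating deployment on reliability checks inside an MLOps pipeline can be illustrated with a minimal sketch. This is not Air Liquide's actual tooling; the metric names, thresholds, and function signatures below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ValidationReport:
    accuracy: float
    robustness_score: float  # e.g. accuracy under small input perturbations
    passed: bool

def validate_before_deployment(accuracy: float,
                               robustness_score: float,
                               min_accuracy: float = 0.95,
                               min_robustness: float = 0.90) -> ValidationReport:
    """Gate a model release on reliability metrics computed upstream
    in the pipeline; deployment proceeds only if all thresholds are met."""
    passed = accuracy >= min_accuracy and robustness_score >= min_robustness
    return ValidationReport(accuracy, robustness_score, passed)

# A model meeting both thresholds is cleared for deployment.
report = validate_before_deployment(accuracy=0.97, robustness_score=0.93)
print(report.passed)  # True
```

In a real pipeline, such a gate would typically run as an automated step after training, blocking promotion of any model version that fails its checks.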

Safran’s promotion of good practices for trustworthy AI

Daniel Duclos discussed the importance of promoting good practices for AI development at Safran. The company ensures that its engineers follow best practices for AI architecture, database management, and model training in order to design better AI systems. These practices help ensure that AI products meet performance standards and are reliable. Safran is also incorporating AI-related topics into technical audits so that AI systems are thoroughly evaluated during development.

Renault’s AI engineering process and validation

Rodolphe Gelin explained how the company has integrated AI engineering into its development processes. Renault’s centralized Center of Excellence for AI develops industrial applications using AI components. The team uses tools like those from Confiance.ai to validate the robustness and accuracy of AI systems. Renault aims to reduce the need for real-world testing by using tools to assess data quality, eliminate bias, and validate AI models more efficiently. This approach allows Renault to develop AI systems faster and at a lower cost while maintaining safety standards.
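A data-quality check of the kind described above can be sketched very simply. This is not Renault's or Confiance.ai's actual tooling; the function, labels, and ratio threshold are illustrative assumptions for one common check, class balance in training labels.

```python
from collections import Counter

def check_label_balance(labels, max_ratio=3.0):
    """Flag datasets where the majority class outweighs the minority
    class by more than max_ratio — a simple proxy for sampling bias."""
    counts = Counter(labels)
    if len(counts) < 2:
        return False  # a single-class dataset cannot be balanced
    most = max(counts.values())
    least = min(counts.values())
    return most / least <= max_ratio

# A mildly imbalanced set (2:1) passes; a heavily skewed set (6:1) does not.
print(check_label_balance(["car"] * 40 + ["pedestrian"] * 20))  # True
print(check_label_balance(["car"] * 60 + ["pedestrian"] * 10))  # False
```

Checks like this, run automatically over training data, reduce the amount of real-world testing needed to catch model failures caused by biased or low-quality data.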