The Bridge - Data Leader series

Nathalie Beslay, CEO and co-founder of Naaia, met with Caroline Goulard, journalist and CEO of data visualization companies Dataveyes and Modality, to discuss the new AI Act, adopted by the European Parliament on April 22nd and set to enter into force following its publication in the European Union’s Official Journal. Their 45-minute interview, held in The Bridge by Artefact studio, explores every aspect of this Act, designed to ensure safety and respect for fundamental rights while boosting innovation.

Nathalie Beslay is the CEO and co-founder of Naaia, a provider of compliance and risk management solutions for the artificial intelligence industry. A lawyer since 2004, Nathalie’s practice is dedicated to the health and beauty industry. She is eminently qualified to discuss regulatory issues, having led a project for the Institute for Prospective Technological Studies in Seville on the challenges and risks of introducing new technologies in the health sector. She’s also been a technical advisor to the French Secretary of State for Health on bioethical issues and General Counsel for an e-health solutions operator.

How has your background helped you understand the current issues surrounding artificial intelligence?

Whether in healthcare or AI, people struggle with regulatory constraints. The key is to convert constraints into opportunities. Understanding the regulatory framework for healthcare products helped me grasp the stakes and risks of the new AI regulatory framework, because it’s built in exactly the same way as the frameworks for health products: the underlying objective is, at its core, the safety of users and product beneficiaries.

So AI is regulated in the same way as a drug is?

The AI Act is designed as a horizontal product regulation. That means that it applies to any type of company, whatever its size, private or public. There are exceptions for certain sectors, such as the military, but it’s a product regulation, similar in concept and type of obligation to other product regulations. 

Tell us more about the AI Act. How can companies derive added value from this regulation?

Behind its 400 pages of complexity, the AI Act is essentially a risk management system and a quality assurance system, backed by documentation: technical files and instructions for use. Much has been said about the regulatory threshold – the point at which a regulation becomes so restrictive that it imposes a burden instead of advancing its objectives. But studies have shown that for highly digitalized companies, regulation actually helps them develop.

For that to happen, companies must embrace the AI Act, take it on board and integrate its obligations into their process by design. The best way is to say, “I’m going to do value-based compliance, I’m going to look at what compliance can bring to my product, and therefore to my customers.”

AI components aren’t always built in-house. Will European players need to buy AI from suppliers that meet European standards?

Because the AI Act is a product regulation, it will impose different obligations along the artificial intelligence supply chain, depending on whether you’re a supplier, distributor or deployer. Obligations will stack at different stages, depending on whether you build in-house, buy off-the-shelf, or take a model and turn it into an AI system. Suppliers will need to demonstrate conformity and provide technical documentation. Players who customize will have to demonstrate that they respect what their suppliers have delivered, but also that any added layer is compliant, unbiased, respects copyright, meets quality standards, and uses representative data.

What do you say to those who fear this will put Europe behind the United States or China?

The original text of the AI Act has evolved, partly in response to vocally anti-regulation voices. Today, the R&D phase enjoys a respite from controls, and open source models are exempt except when they present a so-called systemic risk, i.e., when the level of risk is too high for them to benefit from the exemption. I’d like to tell the doubters that European AI means taking these issues into account and integrating these rules into their go-to-market processes. Above all, I want to tell them that when you have as much capacity, power, intelligence and vision as our French and European players have, building in compliance is not complicated at all.

It’s important to remember that American and Chinese AI legislation is also in development and that everyone has the same goal: to preserve and refocus the human element in using these new technologies – or at least try to do so, as well as possible.

What about compliance for companies in Europe that have AI systems that they’ve already started to test or use?

The best response is to review the timeline. The AI Act will be officially voted on April 22 by the European Parliament. In May, the law will be published in the Official Journal, and the clock will start. Six months later, prohibited AI must be removed from platforms or brought into compliance. After one year, publishers of general-purpose artificial intelligence models will have to comply with the regulation. After two years, all other AI systems will have to meet all the obligations, with slightly longer deadlines where compliance is more complex. In other words, whenever an AI is part of a product or piece of equipment that is itself regulated – medical devices, gas transport, certain machine tools, for example – the legislator allows 36 months to comply, because these layered regulations need to be reconciled and sequenced.

So, the first priority is to map, identify and screen for prohibited AIs, to get them out or bring them into compliance. The goal is to achieve compliance through value by asking ourselves what compliance is for, making quality products that are safe, understandable and explainable, that serve a purpose, and that try to consume as little energy as possible.

Can you give us examples of AIs that will soon be prohibited?

The AI Act takes a risk-based approach. Certain AI systems were deemed to present an “unacceptable” risk, so the legislator listed them as prohibited. There are eight of them, one being artificial intelligence systems that enable subliminal manipulation. The list also includes generalized social scoring systems, i.e., systems that qualify and categorize individuals according to economic level or social position on a massive scale. Emotion recognition systems based on biometrics are also prohibited in the workplace and in educational settings, except for safety reasons: in production facilities, for example, we may need to measure people’s emotional states to keep them safe on a production line or to guard against possible intruders. Another category of prohibited AIS is facial recognition used to combat crime. This type of video surveillance is reserved for certain authorities and for certain offenses. When it is allowed, it’s for offenses deemed very serious, such as terrorism, crime, assault and rape – and the AI Act is as specific as that.

As an example, if OpenAI doesn’t document its models and “play the game,” will OpenAI be banned in Europe in a year?

In that case, yes, people would have to stop using it, and OpenAI would be liable to a substantial penalty. But I don’t believe in this scenario. I think players will comply with the regulations. It won’t be easy – it’s going to be costly and take a lot of planning. Compliance means setting a budget to meet the requirements, getting up to speed, mobilizing experts, training teams and putting compliance into the company’s roadmap. It’s not easy, but it’s feasible, and given the stakes involved, it’s got to be done.

How do we go beyond compliance and turn these regulations into a competitive advantage?

When we apply the rules, we look at the extent to which each rule can serve the product. For example, when you have an obligation of traceability and explainability throughout the chain and you use subcontractors, demanding quality from those subcontractors can mean improving costs or establishing standards, which obviously benefits the product. So how do we transform the constraint into an opportunity? We look at how far the rule elicits the quality and safety the product needs.

Certain industries have understood this, such as food. The Nutri-Score is a rule that helps products that play the game. It builds consumer trust. Of course, the Nutri-Score has been a constraint in the value chain, but who can complain about it today? Everyone agrees with this type of rule. Aeronautics is another industry that knows compliance and rules are good for their products. In industries for which safety is critical, they’ve turned constraints into an asset of trust, an opportunity to build loyalty, understanding, and a bond with their customers. It’s the same for artificial intelligence: it’s a good thing to have these regulations… and perhaps even an opportunity to question whether we really need a given AI, if the effort is too high relative to the benefit.

What are the risks for a company that delays implementation of the AI Act?

General MacArthur famously said, “Lost battles can be summed up in two words: too late.” It could be too late because the European legislator has set up a coercive system with authorities and sanctions. Beyond the fear of legal repercussions, it can be too late in the market access chain. In terms of AI, there are few players today who don’t say “trust” or “responsible AI” or “aligned AI”, as if it were intrinsic. You can’t make AI without trust, so it’s bound to make a difference in the marketplace. If you don’t have a compliance process built in from the start, you’re surely going to lose out on the entire value chain. 

How does your new project, Naaia, support companies in these challenges?

Naaia helps companies focus on their core business, on innovation and on their teams. Our tool provides the keys to achieving compliance as quickly and effectively as possible. It includes a qualification algorithm that builds a repository within the company to identify and qualify all of its artificial intelligence systems, then generates an action plan. This action plan is drawn up within the perimeter of each AIS, but it also pools effort across all applicable regulations to help companies achieve compliance quickly and effectively, with lists of concrete actions, documentation templates and training tools to get teams moving. We do this while limiting constraints and costs, limiting reliance on outside experts who aren’t close to the teams and the products, and helping companies internalize this function so they can digest and metabolize it. The right word for compliance is “metabolize”: when the product goes to market, we want to be as unconcerned and unhindered as possible.

What kind of customers are you working with today?

Today, it’s mainly early and massive adopters of AI: CAC 40 groups with a large pipeline of projects and a lot of AI already deployed, and more specialized players whose core business is AI, who are absolutely obliged to adopt compliance practices right away because it’s a matter of life and death – they won’t be able to go to market if they’re not compliant. Later, we’ll gradually include smaller companies. They have fewer resources to deal with compliance issues, but they’ll have fewer options in the future. Just as there is a CRM to manage customer relations, an ERP to manage production and an HRIS to manage HR, we believe companies need a tool to manage AI centrally, to help everyone make proper use of AI and manage responsibilities. In this respect, Naaia is an AIMS – an Artificial Intelligence Management System – that will become a commodity. That’s our vision for tomorrow.

Visit thebridge.artefact.com, the media platform that democratizes data & AI knowledge, in videos & podcasts.