29 January 2021
Many businesses struggle to formulate a coherent AI strategy and to deploy value-adding, efficient use cases. Alexandre Thion de la Chaume, Partner, Data Consulting at Artefact, explains how the AI Factory model can streamline both.
Artificial Intelligence (AI) is seen as the major lever of competitive advantage. The data doesn’t lie: there’s been an almost 25% year-on-year increase in business use of AI, with 63% of executives agreeing it has led to revenue increases. The global pandemic has only put this into sharper focus. The businesses that thrive and survive will be those able to adopt the right AI solutions and deploy and scale them quickly and efficiently.
Yet, as with all game-changers, AI initiatives raise new challenges. Implementation comes with many questions – chief among them, how can you adopt the right data approach to deploy AI initiatives rapidly, efficiently and sustainably over the long term, without costly failures? The ‘AI Factory’ approach has been developed for precisely this reason.
The AI Factory is an organisational operating model – combining different talents, capabilities and processes in a systematised way – to deliver success in AI deployment and scalability. It has been effectively used by industry leaders like Carrefour and ENGIE to deliver transformative AI projects across their businesses. Yet setting up an effective AI Factory from scratch can be daunting. You need expert teams and a clear vision to make the process work.
Planning makes perfect
The vital first step is to define a vision and use cases for your AI Factory: together, these form your data strategy. The use cases with the greatest potential to transform the business must be identified. Whether it’s supply chain optimisation or compliance management, opportunities exist at all levels.
The company’s AI vision should also be considered. It’s important to imagine how the company could develop, to plan for that evolution and to form a clear-sighted idea of the future. From this preliminary overall view, derive a refined version specific to data and AI.
Next, concrete business opportunities must be evaluated by identifying and prioritising use cases according to their business impact and implementation complexity. A focus on mindsets is important throughout, to manage change on a large scale and involve everyone, from company leadership to front-line team members.
The four pillars of the AI Factory
Once the company’s data strategy and AI vision are defined, you should have a prioritised list of use cases to implement. But how can you start working on them? An effective AI Factory implementation is founded on four distinct pillars:
A single governance structure
To be efficient, governance must be high-level, dedicated and tailored. At the highest level, an AI Factory Board – comprising key C-suite data leaders – provides overall sponsorship and direction: it communicates the AI vision and aligns teams and the roadmap with it. At the programme management level, an AI Factory Director role should be established, supported by business, operations, legal, security and IT data experts, whose role is to review, arbitrate and validate progress.
Finally, at the operational level, there need to be agile teams. Feature Teams are responsible for delivering use cases as AI products. They’re close-knit units working collaboratively to ensure a constant flow of information and transparency. Most importantly, they should be multidisciplinary, combining skills and expertise from across the business. They are achievement-oriented: each team is created with a single objective, to deliver one use case measured by a single, clear goal.
Organised, diverse and expert teams
To drive efficiencies, structured organisations should gather business, data, software and digital tech skills in hybrid teams based on agile methods. Agility ensures a flexible and adaptive way of working and avoids issues linked to a silo approach, such as isolated departments within the same structure or overly rigid procedures. This requires a good blend of business and technical profiles, to ensure that what is developed on the technical side always has a useful purpose that addresses business needs.
Scalability is an important overall characteristic of a team’s makeup. The idea is that its structure can be easily duplicated, like Lego bricks. With a fully scalable model, more teams can be added to address additional use cases.
Advanced AI technologies
Of course, effective AI deployment needs a foundation of AI-enabling technologies. An AI Factory uses a combination of open-source, proprietary and cloud solutions. These should be standardised across the whole data pipeline, from ingestion to visualisation, according to best practices.
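To make the idea of standardisation concrete, here is a minimal sketch, assuming Python and pandas; the stage names, the file path and the PipelineStage contract are purely illustrative, not part of any specific AI Factory toolset. The point is simply that every brick in the pipeline exposes the same interface, so teams can add or swap stages without reworking the whole chain.

```python
# Illustrative sketch of a standardised pipeline contract.
# Stage names and the data source are hypothetical; any open-source,
# proprietary or cloud tool could play each role.
from abc import ABC, abstractmethod

import pandas as pd


class PipelineStage(ABC):
    """Common contract every brick follows, from ingestion onwards."""

    @abstractmethod
    def run(self, data: pd.DataFrame) -> pd.DataFrame:
        ...


class IngestStage(PipelineStage):
    def __init__(self, path: str):
        self.path = path  # hypothetical data source

    def run(self, data: pd.DataFrame) -> pd.DataFrame:
        # Ignores the incoming frame and loads fresh data.
        return pd.read_csv(self.path)


class CleanStage(PipelineStage):
    def run(self, data: pd.DataFrame) -> pd.DataFrame:
        # Example of a reusable transformation brick.
        return data.dropna()


def run_pipeline(stages: list[PipelineStage]) -> pd.DataFrame:
    """Chain standardised stages in order and return the final output."""
    data = pd.DataFrame()
    for stage in stages:
        data = stage.run(data)
    return data
```

Because every stage respects the same contract, a new use case can reuse existing bricks and only add the stages it is missing.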
Systematic and proven methodologies
Systematisation is needed to make sure a series of steps are always taken in a specific order, each with its own defined objective. The benefits are twofold. First, this gives an overall structure of common references throughout, creating a backbone that guarantees consistency. Second, this makes methodologies replicable and scalable, considerably accelerating the industrialisation phase.
MLOps: Keeping the factory running
Alongside a set use case methodology, MLOps (Machine Learning Operations) practices must be deployed to close the gap between the concept phase and production. Inspired by DevOps, this approach combines software development and IT operations practices to shorten the development life cycle.
The purpose of MLOps is to tackle challenges that traditional software systems do not face. The first challenge is collaboration between teams: different units are often siloed and own different parts of the process, which stifles the unity needed to move into production.
The second is pipeline management, as ML pipelines are more complex than traditional ones. They are built from specific bricks – data, features and models – that must be tested and monitored throughout production, as the sketch after these challenges illustrates.
The final obstacle is that ML models usually need several iterations – when put into production in a manual, ad-hoc way, they become rigid and difficult to update.
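To make the monitoring challenge concrete, here is a minimal sketch of one such pipeline brick, assuming Python with NumPy and SciPy: a drift check that compares live feature values against the training baseline. The feature data, the significance threshold and the retraining trigger are purely hypothetical.

```python
# Illustrative monitoring brick: flag drift between the training baseline
# and live data so the Feature Team can trigger a new model iteration.
import numpy as np
from scipy.stats import ks_2samp


def check_feature_drift(baseline: np.ndarray, live: np.ndarray,
                        p_threshold: float = 0.01) -> bool:
    """Return True if the live distribution has drifted from the baseline."""
    statistic, p_value = ks_2samp(baseline, live)
    return p_value < p_threshold


# Hypothetical usage with synthetic data: the live feature has shifted.
baseline = np.random.normal(loc=0.0, scale=1.0, size=5_000)
live = np.random.normal(loc=0.4, scale=1.0, size=5_000)

if check_feature_drift(baseline, live):
    print("Drift detected: schedule a model retraining iteration")
```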
To overcome these challenges, an MLOps approach should embed all ML assets in a Continuous Integration and Continuous Delivery (CI/CD) pipeline to secure fast and seamless rollouts. All data, features and models should be tested before every new release to prevent quality or performance drift, as in the sketch below. All stakeholders should work on the same canvas and apply software engineering best practices – versioning, deployment environments, testing – to data science projects.
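As an illustration of what testing before every new release can mean in practice, here is a minimal pytest-style sketch, assuming Python with pandas and scikit-learn; the dataset path, target column and accuracy threshold are hypothetical, and a real CI/CD pipeline would run checks like these automatically on every merge request.

```python
# Illustrative pre-release checks on data and model quality.
# File name, column names and the release gate are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

MIN_ACCURACY = 0.80  # release gate agreed with the business (assumed value)


def test_training_data_has_no_missing_values():
    data = pd.read_csv("training_data.csv")  # hypothetical dataset
    assert not data.isna().any().any(), "Missing values would break the pipeline"


def test_candidate_model_meets_release_gate():
    data = pd.read_csv("training_data.csv")
    X, y = data.drop(columns="target"), data["target"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    assert accuracy >= MIN_ACCURACY, f"Accuracy {accuracy:.2f} is below the release gate"
```

If either check fails, the release is blocked and the team iterates before anything reaches production.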
Ultimately, MLOps is the discipline of consistently managing ML projects in a way that’s unified with all other production elements. It secures efficient technical delivery from a use case’s early stages (first models) through to industrialisation.
A framework for success
AI holds tremendous promise, but also great risk for organisations unable to deploy it properly. The real benefit of the AI Factory model is that it establishes a core framework for swift and successful implementation. Processes, teams and tools are transferable and repeatable by nature, meaning a company can remain agile in pursuing its AI vision. Once the process is established and supported by MLOps, a business has what it needs to become an AI powerhouse.