AI for Finance Summit by Artefact - September 17th, 2024 - Paris
Key learnings from the panel discussion with Hugues Even, Group Chief Data Officer at BNP Paribas, Matteo Dora, CTO at Giskard, Guillaume Bour, Head of Enterprise EMEA at Mistral AI, Anne-Laure Giret, Head of Google Cloud AI GTM, EMEA South at Google Cloud, and Hanan Ouazan, Partner & Generative AI Lead at Artefact.
AI Language Models enhancing customer service
This discussion highlights the transformative role of AI language models in customer service, focusing on their real-world applications and challenges in various industries, particularly banking. The conversation emphasizes the deployment of AI to automate and enhance customer interactions, such as through email and voice channels, with tools that understand context, prioritize tasks, and suggest responses, thereby improving efficiency and customer satisfaction.
AI assisting in email and voice processing
Hugues shares how AI assists in processing emails by categorizing, prioritizing, and routing them, while also suggesting responses for human review. For voice interactions, AI generates post-call summaries, action items, and insights that can be used in future meetings. These tools help banks improve service quality by analyzing the content and tone of interactions, moving beyond traditional customer feedback methods.
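The triage pipeline described above (categorize, prioritize, route, draft a reply for review) can be sketched as follows. This is a minimal illustration: the categories, routing table, and keyword rules are invented for the example, and a production system would classify with a language model call rather than keyword matching.

```python
from dataclasses import dataclass

# Hypothetical categories and routing table (team, priority: 1 = highest).
ROUTES = {
    "complaint": ("support", 1),
    "loan": ("credit", 2),
    "general": ("front-office", 3),
}

# Stand-in for an LLM classifier: simple keyword rules.
KEYWORDS = {
    "complaint": ["complaint", "unhappy", "error"],
    "loan": ["loan", "mortgage", "credit"],
}

@dataclass
class TriageResult:
    category: str
    team: str
    priority: int
    suggested_reply: str  # draft for a human agent to review, not auto-sent

def triage_email(body: str) -> TriageResult:
    text = body.lower()
    category = "general"
    for cat, words in KEYWORDS.items():
        if any(w in text for w in words):
            category = cat
            break
    team, priority = ROUTES[category]
    reply = f"Thank you for contacting us about your {category} request."
    return TriageResult(category, team, priority, reply)
```

The key design point from the panel is that the model only *suggests* a response; a human reviews it before it reaches the customer.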
AI streamlining key banking processes
The conversation also addresses AI’s role in key banking processes, such as loan applications and client onboarding. AI, using language models and computer vision, helps verify documents and cross-check information to streamline these processes. The participants highlight how generative AI’s role is evolving from simple, scripted chatbots to advanced conversational assistants that handle a wide range of inquiries and offer more personalized service.
Transition to conversational assistants powered by LLMs
A key takeaway from the discussion is the transition to conversational assistants powered by large language models (LLMs), which offer more dynamic responses than previous chatbots. However, these models need to be fine-tuned to specific contexts and industries, requiring adjustments like low-rank adaptation. The discussion underscores the importance of maintaining control over these AI systems, especially with sensitive data, and highlights collaborations with AI providers like Mistral to ensure data security and performance.
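The low-rank adaptation (LoRA) technique mentioned above can be sketched in a few lines. Instead of fine-tuning a full weight matrix, two small factors are trained and added to the frozen base weights. The dimensions and scaling below are illustrative, not taken from any production model.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank, alpha = 64, 64, 4, 8.0

W_base = rng.standard_normal((d_out, d_in))   # frozen pretrained weights
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable low-rank factor
B = np.zeros((d_out, rank))                   # zero-initialized: no change at start

def forward(x: np.ndarray) -> np.ndarray:
    # Effective weight is W_base + (alpha / rank) * B @ A;
    # only A and B are updated during fine-tuning.
    return (W_base + (alpha / rank) * B @ A) @ x

full_params = d_out * d_in            # 4096 parameters if fine-tuned directly
lora_params = rank * (d_out + d_in)   # 512 trainable parameters with LoRA
```

Because only the low-rank factors are trained, the adapter is small enough to store per customer context while the base model stays shared, which is part of why this approach suits industry-specific tuning.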
Challenges of scaling AI in production
The scaling of AI in production poses challenges, such as data governance, performance, and security. Maintaining AI models’ efficiency and explainability while ensuring cybersecurity are essential for large-scale deployment. The discussion notes the need for continuous model updates to meet evolving customer and regulatory demands, and that user education on AI tools is critical for success.
Optimizing model size for cost-efficiency
To address AI performance challenges, Guillaume Bour explains that reducing model size while maintaining accuracy is key to cost-efficient scaling. He also notes that combining small models with orchestrated workflows can optimize performance, reducing costs without sacrificing quality. Observability and monitoring systems are crucial to maintaining these models in real-time production environments.
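One common form of the orchestration pattern described here is a cascade: route each query to a cheap small model first, and escalate to a larger model only when confidence is low. The sketch below stubs both model calls with deterministic functions; the names, confidence heuristic, and threshold are assumptions for illustration.

```python
def small_model(query: str) -> tuple[str, float]:
    # Stand-in for a small, cheap model: confident only on short queries.
    confidence = 0.95 if len(query.split()) <= 6 else 0.40
    return f"small-answer:{query}", confidence

def large_model(query: str) -> str:
    # Stand-in for a larger, more expensive model.
    return f"large-answer:{query}"

def answer(query: str, threshold: float = 0.8) -> tuple[str, str]:
    reply, confidence = small_model(query)
    if confidence >= threshold:
        return reply, "small"            # cheap path, most traffic
    return large_model(query), "large"   # escalate the hard cases
```

Logging which path each query took is exactly the kind of observability signal the panel highlights: it shows whether the small model is absorbing enough traffic to justify the cascade.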
Mitigating AI hallucination through rigorous testing
Lastly, the issue of AI “hallucination”—providing incorrect information—is discussed. Rigorous testing is essential to prevent such errors in customer interactions. Collaborations between companies like BNP Paribas and Giskard ensure AI systems are thoroughly tested before deployment to mitigate risks. Post-deployment monitoring is also critical to identify and address issues over time, as changes in customer demands, regulations, and the external environment affect AI performance.
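A minimal version of such pre-deployment testing is to run the assistant over a reference set of questions with known facts and fail the release if any answer omits the expected information. The evaluation cases and the assistant stub below are invented for illustration; tools such as Giskard automate much richer variants of this kind of check.

```python
# Hypothetical reference set: each case pairs a question with a fact
# the answer must contain.
EVAL_SET = [
    {"question": "What is the monthly overdraft fee?", "must_contain": "8 EUR"},
    {"question": "Which documents are needed to open an account?", "must_contain": "ID"},
]

def assistant(question: str) -> str:
    # Stand-in for the deployed model; a real test would call the live system.
    canned = {
        "What is the monthly overdraft fee?": "The overdraft fee is 8 EUR per month.",
        "Which documents are needed to open an account?": "A valid ID and proof of address.",
    }
    return canned.get(question, "I don't know.")

def run_eval(model) -> list[str]:
    """Return the questions whose answers failed the factuality check."""
    failures = []
    for case in EVAL_SET:
        if case["must_contain"] not in model(case["question"]):
            failures.append(case["question"])
    return failures
```

Re-running the same evaluation set after deployment, as the panel suggests, turns this one-off gate into ongoing monitoring: a newly failing case signals drift in the model or its environment.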