Key learnings from the keynote of Stephan Hadinger, CTO at AWS, at the Adopt AI Summit by Artefact - June 5, 2024

About Stephan Hadinger: He is an alumnus of the prestigious École Polytechnique. Before joining AWS, he served as Chief Architect for cloud computing and led various initiatives in APIs, security solutions, and IT infrastructure for both B2B and B2C organizations. He is known for his strategic vision and practical approach to solving complex technological challenges, making AWS a trusted partner for many enterprises.

AI and security

Hadinger discussed AWS’s extensive experience with AI, the specific challenges posed by generative AI, and the measures AWS has implemented to ensure security and ethical use of AI.

AWS’s experience with AI

He highlighted that Amazon has been leveraging AI for over 20 years, with more than 100,000 customers using AWS AI services worldwide. This extensive use has provided AWS with valuable insights and the ability to continually improve its AI offerings. He underscored the probabilistic nature of generative AI, which can produce different outputs from the same inputs, posing unique challenges for ensuring model accuracy and security.

Addressing key concerns: accuracy, security, and cost

He explained that the probabilistic behavior of generative AI can lead to hallucinations, which are particularly problematic in fields such as HR and legal work. AWS addresses this by offering a range of AI models, so that customers can choose the one best suited to their specific use case.

Data protection and confidential computing

He assured customers that their data remains private and encrypted, with AWS employing advanced security measures to prevent unauthorized access. These measures not only enhance security but also protect customers from extraterritorial laws, since AWS cannot provide meaningful data to external parties without customer consent.

Ethical AI and reducing hallucinations

One approach to mitigating hallucinations is Retrieval-Augmented Generation (RAG), which grounds generative AI in actual source data to produce reliable outputs. By anchoring AI responses in specific documents, RAG reduces the risk of hallucination and yields more accurate, verifiable results.
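The core RAG idea can be illustrated in a few lines: retrieve the most relevant document, then instruct the model to answer only from it. The sketch below is illustrative, not AWS's implementation; a toy word-overlap score stands in for the embedding model and vector search a real system would use.

```python
# Minimal RAG sketch: pick the best-matching document, then build a
# prompt grounded in it. The relevance() scorer is a toy stand-in
# (an assumption) for real embeddings and vector search.

def relevance(query: str, doc: str) -> float:
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def build_grounded_prompt(query: str, documents: list[str]) -> str:
    best = max(documents, key=lambda doc: relevance(query, doc))
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, say so.\n"
        f"Context: {best}\n"
        f"Question: {query}"
    )

docs = [
    "Employees accrue 25 vacation days per year.",
    "The cafeteria opens at 8 a.m. on weekdays.",
]
prompt = build_grounded_prompt("How many vacation days do employees get?", docs)
```

Because the model is told to refuse when the context lacks an answer, hallucinated responses become easier to catch and verify against the retrieved document.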

Ensuring non-toxic and ethical outputs

To prevent AI models from producing toxic or offensive content, AWS has implemented safeguards at both the input and output stages. These safeguards ensure that AI models do not generate inappropriate or harmful responses. Hadinger provided examples of how these measures work, such as filtering questions that might lead to unethical or offensive answers while allowing contextually appropriate inquiries in specific business scenarios.
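The two-stage safeguard pattern described above can be sketched as a pair of checks wrapping the model call. This is a simplified illustration under stated assumptions: the blocklist, the `violates_policy` helper, and the stub model are hypothetical, not AWS's actual filtering logic.

```python
# Sketch of input/output guardrails: screen the prompt before the model
# runs and the response after it. BLOCKED_TOPICS and the model callable
# are placeholders (assumptions), not a real moderation system.

BLOCKED_TOPICS = {"violence", "self-harm", "illegal weapons"}

def violates_policy(text: str) -> bool:
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def guarded_generate(prompt: str, model) -> str:
    if violates_policy(prompt):                  # input-stage safeguard
        return "Request declined: disallowed topic."
    response = model(prompt)
    if violates_policy(response):                # output-stage safeguard
        return "Response withheld: policy violation."
    return response

# Example with a stub model that echoes its input:
reply = guarded_generate("Tell me about violence", model=lambda p: p)
```

Checking both stages matters: even a benign question can elicit a harmful completion, so filtering inputs alone is not enough.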

Intellectual property and AI infringements

AWS’s models are designed to avoid intellectual property infringements, and the company offers indemnification to customers in cases where IP issues arise. This commitment to protecting customers underscores AWS’s focus on providing trustworthy and reliable AI services.

“Your data is private. It’s encrypted every time and we never use it. We have built a system that prevents any AWS employee from accessing your data or connecting to the server. This technical impossibility ensures your data’s security and protects against extraterritorial laws.”
Stephan Hadinger, CTO at Amazon Web Services (AWS)

Generative AI for security applications

Hadinger shared a compelling example of using generative AI for security through Amazon One, an authentication service that recognizes users by the palm of their hand. To achieve high accuracy, AWS used generative AI to create over three million artificial hand images, which were then used to train the security model. This innovative approach highlights the complementary roles of generative AI and traditional AI in enhancing security systems.
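The general pattern behind this approach is synthetic-data augmentation: generate artificial samples from real ones to enlarge the training set. The sketch below is only a schematic analogy, assuming feature vectors and Gaussian jitter; Amazon One's actual pipeline used a generative model to produce realistic palm images, which this does not reproduce.

```python
# Schematic synthetic-data augmentation: perturb real samples to grow a
# training pool, in the spirit of generating artificial palm images.
# Gaussian noise on feature vectors is a stand-in assumption for a
# real generative model.
import random

def synthesize(samples: list[list[float]], n_new: int,
               noise: float = 0.05) -> list[list[float]]:
    out = []
    for _ in range(n_new):
        base = random.choice(samples)           # pick a real sample
        out.append([x + random.gauss(0.0, noise) for x in base])
    return out

real = [[0.2, 0.7, 0.1], [0.4, 0.6, 0.3]]       # toy "real" samples
synthetic = synthesize(real, n_new=1000)        # enlarged training pool
```

Scaling this idea from two toy vectors to three million generated images is what allowed the authentication model to reach high accuracy without collecting that many real palms.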
