AWS re:Inforce: Take these generative AI and cloud security measures

At AWS re:Inforce 2024, Chris Betz, CISO of AWS, and Steve Schmidt, CSO of Amazon, explained their approaches to cloud security and secure generative AI use.

Jun 11, 2024 • 6 Minute Read

  • AWS
  • Cloud
  • IT Ops
  • Software Development
  • Data
  • Security
  • News
  • AI & Machine Learning

The theme of AWS re:Inforce 2024 is security in the age of generative AI. (Shocking, right?) And while there was talk of AI, the bulk of the keynote focused on the people behind the technology and the importance of trust and culture in security and innovation. 

Chris Betz, CISO of AWS; Steve Schmidt, CSO of Amazon; and Ash Edmondson, Eli Lilly AVP of Security Architecture and Engineering, kicked off the conference, explaining how AWS secures the cloud, why organizations need a culture of security, and how to secure generative AI at scale. 

They also shared AWS product announcements, including new capabilities to advance Zero Trust and generative AI security. Here are the key takeaways from their talk.

AWS relies on culture to shape their cloud security strategy

Chris began by talking about how AWS secures the cloud. He identified three pieces that contribute to their robust security culture. 

First, they have dedicated time each week to security. “Every Friday, AWS leadership meet with individual service teams and discuss important security issues,” he said. 

These weekly meetings shape the roadmap, help hold leaders accountable for security, and have a deep cultural impact. “Where we spend our time says a lot about what matters,” said Chris. 

They also nurture a culture of escalation that views escalations not as failures or shortcomings but as necessary steps to prevent issues that could impact security for customers. “When there’s a security issue, we’re empowered and encouraged to escalate,” explained Chris. “We’re expected to act fast and decisively. Escalations allow us to do both.”

Another part of their culture? Unified teams. When an issue arises, it’s not uncommon for teams to play ping pong with tickets, passing issues from team to team and increasing the time to resolution. At AWS, all tickets are escalated to the security team, who does whatever is necessary to get the issue resolved, lowering the mean time to resolution across the organization.

While these practices play a heavy role in AWS’s overall security culture, implementing them in your organization won’t instantaneously drive results. “Culture doesn’t develop overnight. Building a security culture requires constant investment and focus,” said Chris. 

Rust on the rise: The programming language for security

Chris also noted the rise of the Rust programming language for security. “Rust is the fastest-growing development language at AWS,” said Chris. “We’re actually rewriting critical code in Rust.”

As a memory-safe language, Rust eliminates entire classes of memory-related bugs that can lead to security vulnerabilities. Its minimal runtime also reduces the overall attack surface, and its compile-time checks help teams catch potential issues earlier, making development faster.

Chris also mentioned AWS Libcrypto for Rust, their open-source cryptographic library, which helps teams meet government cryptographic requirements.

“We’re excited about the future of Rust,” he said.
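To make the memory-safety point concrete, here's a minimal sketch (not from the keynote) of the kind of bug class Rust's compiler rules out. In C, reading through a pointer after the underlying buffer is freed is a classic use-after-free vulnerability; in Rust, the borrow checker rejects the equivalent code before it ever runs:

```rust
fn main() {
    let data = vec![1u8, 2, 3];
    let view = &data; // shared borrow of `data`

    // In C, freeing the buffer here and then reading through `view`
    // would be a use-after-free. The Rust equivalent does not compile:
    // drop(data); // error[E0505]: cannot move out of `data` while borrowed

    // The borrow is still valid, so this access is guaranteed safe.
    println!("{}", view.len()); // prints 3
}
```

Because these checks happen at compile time, they add no runtime cost, which is part of why a memory-safe language can also mean a smaller attack surface rather than a heavier one.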

Using automated reasoning to verify correctness

AWS’s investment in automated reasoning was another key topic. Chris explained the importance of testing to ensure systems behave as expected.

Positive testing confirms the system behaves as expected, while negative testing probes unexpected conditions to identify potential vulnerabilities. But there are only so many inputs you can test.

That’s where automated reasoning comes into play. “Automated reasoning enables us to see what behaviors a system is capable of and then identify unwanted behaviors to fix them,” said Chris.

In other words, it allows you to reason about effectively infinite inputs, using logic to analyze and verify correctness. Automated reasoning helps verify cryptographic protocols, authorization logic, and other security mechanisms, and supports the analysis of policies and network controls.
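The gap between testing a few inputs and verifying all of them can be sketched in a few lines. Real automated reasoning tools prove properties symbolically over unbounded domains; the hedged example below only approximates that idea by exhaustively checking every input in a deliberately tiny domain (the function and property are illustrative, not from AWS):

```rust
// Illustrative function: addition that clamps at the type's maximum
// instead of wrapping around (overflow is a classic source of bugs).
fn clamped_add(a: u8, b: u8) -> u8 {
    a.saturating_add(b)
}

fn main() {
    // Property: the result is never smaller than either operand.
    // A handful of unit tests would spot-check this; here we check
    // every one of the 65,536 possible input pairs, which is only
    // feasible because u8 is tiny. Automated reasoning establishes
    // the same kind of guarantee symbolically, without enumeration.
    for a in 0..=u8::MAX {
        for b in 0..=u8::MAX {
            let r = clamped_add(a, b);
            assert!(r >= a && r >= b, "property violated at {a}, {b}");
        }
    }
    println!("property holds for all 65536 input pairs");
}
```

For a 64-bit input space, enumeration like this is impossible, which is exactly why logic-based verification matters for things like authorization policies and cryptographic code.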

Insights from Eli Lilly: Cloud security relies on trust

Ash took the stage to explain the principles behind pharmaceutical company Eli Lilly’s cloud security. The main theme? Trust.

She compared trust in the cloud to leaving her dog, Clyde, at home while she travels. She explained that she has an excellent dog sitter, and she can check in on Clyde anytime with her home cameras. But that still doesn’t alleviate all of her worries. 

Cloud security is the same. “Trusting a cloud provider . . . means finding the balance between control and delegation,” she said.

At Eli Lilly, this includes robust security measures, a culture of transparency and collaboration, and seeking out expert advice. This trust is key to delivering for their customers. “By prioritizing trust at every stage, we empower teams to innovate.”

How to balance generative AI security with innovation

Of course, the keynote wouldn’t have been complete without some focus on generative AI. How can teams use machine learning and generative AI securely? At the end of the day, it circles back to the culture of security. 

“Security fundamentals are evergreen,” said Steve. When teams across your organization possess foundational security skills, they can adopt emerging technologies and innovate faster without sacrificing governance, compliance, resilience, and risk control.

He then shared some of his tips from Amazon on creating a culture of security around generative AI:

  • Create generative AI security standards: Set guidelines on how to handle confidential data, models, and prompts. This ensures security is part of the process from the start.

  • Create a threat modeling guide for genAI applications: Help teams work through the risks of generative AI applications and how to mitigate them systematically.

  • Share internal testing and tools: Share test knowledge in one place so everyone can learn from each other. 

  • Conduct security reviews: Create specific security review tasks for AI-enabled applications. Continually verify their security.

Generative AI security requires an iterative review and audit process. “In AI, the model is the code. Responses change over time as users interact with them. You’re never done with AppSec reviews,” said Steve. 

The challenges of Zero Trust and how to solve them

“We know that organizations with robust Zero Trust architectures are better protected against cybersecurity incidents,” said Chris.

But he also acknowledged the challenges of maintaining a Zero Trust approach:

  • The need for strong identity and access management

  • Complex network segmentation

  • Hybrid environments

  • The expanding application landscape and workforce mobility

The constantly evolving threat landscape, the need for seamless integration across tools, and the complexity of keeping configurations accurate and up to date as environments change create additional roadblocks. The new AWS features announced at the event aim to address these challenges.

New AWS product announcements

An AWS keynote wouldn’t be complete without new product announcements. Chris announced several new security-focused features.

Build a culture of security

“At the end of the day, culture is at the root of it all. It’s culture that drives us to design systems that are secure by design, not bolted on as an afterthought,” said Chris.

Pluralsight is an AWS Partner—learn more about how we help teams develop critical cloud, AI, and security skills.

Explore our AWS re:Inforce 2024 coverage.