Today, Amazon announced the launch of a private AI bug bounty program for select Amazon AI models and applications, including the Amazon Nova foundation models. The goal is to engage with security researchers and experts at leading universities to identify and remediate potential security issues. This invite-only track complements Amazon’s existing public bug bounty program, which remains open to all researchers and has already surfaced more than 30 validated findings in Amazon AI applications that we have addressed, resulting in over $55,000 in rewards.
"We believe the best way to make our models stronger and more secure is to partner with the broader community," said Rohit Prasad, SVP of Artificial General Intelligence at Amazon. "By opening up Nova to external testing, we're reinforcing our commitment to safety, transparency, and continuous improvement."
This program focuses on bridging the gap between academic research and professional security practices. It kicks off today with a live event at Amazon’s Austin office, bringing together top university teams from the Amazon Nova AI Challenge and professional security researchers to collaborate on real-world AI security challenges. The aim is to strengthen the security of Amazon AI models and applications, including the Nova models, while inspiring and developing the next generation of AI-era security researchers.
Security researchers act as critical external testers who probe and challenge AI systems, identifying potential vulnerabilities, biases, and unexpected behaviors that might not be apparent during initial development and testing. Through this program, researchers will test the Nova models across critical areas, including cybersecurity issues and Chemical, Biological, Radiological, and Nuclear (CBRN) threat detection. Qualified participants can earn monetary rewards ranging from $200 to $25,000.
“Security researchers are the ultimate real-world validators that our AI models and applications are holding up under creative scrutiny,” said Hudson Thrift, CISO of Amazon Stores. “I’m very excited about this new program and look forward to partnering with the security and academic research communities to make our AI systems even more secure.”
Program focus areas
Participants are encouraged to explore:
- Prompt injection and jailbreaks with security impact
- Model vulnerabilities with real-world exploitation potential
- Ways the model might unintentionally assist with harmful activities, including security issues and CBRN events
Eligibility and participation
The live event, hosted by Amazon Bug Bounty this November in Austin, marks the start of the program. Broader participation in the private, continuous live program will be available by invitation in early 2026 to security researchers and select academic teams. Researchers and Amazon customers outside the private program can still report potential security issues in Amazon AI applications through Amazon’s public bug bounty program by selecting “Gen AI Apps” under .amazon.
Why this matters
As Nova models power a growing ecosystem across Alexa, AWS customers through Amazon Bedrock, and other Amazon products, ensuring their security remains an essential focus. This new bug bounty program reflects Amazon’s belief that AI safety advances fastest when academia and the professional security community collaborate. By creating opportunities for hands-on learning and discovery, Amazon is helping raise a new generation of researchers equipped to secure the systems that will define the next era of AI.
How to apply
Researchers interested in participating should visit Amazon Science for the latest updates.