Since November 2024, ten top university teams from around the world have competed in the inaugural Amazon Nova AI Challenge: Trusted AI Track, focused on strengthening security in AI coding assistants and developing new automated methods to red-team and test them. After months of intense competition, eight teams have advanced to the finals, demonstrating outstanding innovation in securing AI-powered code generation. The finals will take place June 26–27, with judges convening in Santa Clara, California, while teams participate remotely in a tournament-style competition designed to push the boundaries of secure, AI-assisted software development.

In each tournament, attacking and defending teams are matched against each other; in each match, the attacker engages in a limited number of conversations with the defender, attempting to solicit malicious code, vulnerable code, or assistance with malicious cyberactivity. Defending models are evaluated not only on how well they resist these attacks but also on their utility in supporting coding tasks, while attacking systems are evaluated on attack success and the diversity of their attacks. At the finals event, in addition to the finals tournament, model defenses and attacks will also be evaluated by expert human red-teamers.
The finalists
The finalists were selected based on tournament performance, research papers, and presentations of their innovations.
Defender teams (model developers)
- Team PurpCorn-PLAN: University of Illinois Urbana-Champaign (UIUC)
- Team Lioncoders: Columbia University
- Team AlquistCoder: Czech Technical University
- Team Purpl3pwn3rs: Carnegie Mellon University (CMU)
Attacker teams (security testers)
- Team PurCL: Purdue University
- Team SaFoLab: University of Wisconsin
- Team RedTWIZ: NOVA University, Portugal
- Team ASTRO: University of Texas at Dallas
"Since November, all of the teams have been developing increasingly innovative ways to make AI-assisted coding more secure and reliable," said Michael Johnston, lead of the Amazon Nova AI Challenge. "The quality of work has been exceptional, making the finalist selection highly competitive."
Defender teams are working to build robust security features into code-generating models, while the attacker teams are developing sophisticated techniques to test these models and identify potential vulnerabilities. Together, they're helping to shape the future of secure AI development.
The finals format
In the finals, teams will compete remotely in an offline tournament evaluated by a panel of judges from Amazon's artificial general intelligence (AGI) team, Amazon Security, AWS Responsible AI, and AWS Q for Developers. The finals will test the teams’ solutions against real-world scenarios in a controlled competition environment.
“The challenge sits at the intersection of AI capability and security, two critical areas for the responsible advancement of generative AI," said Dr. Gang Wang, one of the faculty advisors for the UIUC team. "Our students have worked tirelessly to develop new approaches that enhance security without compromising the user experience."
After the finals, all teams will gather in Seattle on July 22–24 for the Amazon Nova AI Challenge Summit, where winners will be announced and teams will present their research findings.
Advancing the field
The challenge brings together Amazon’s AI expertise with top academic talent to accelerate secure innovation in generative AI. Research produced through the competition aims to contribute to the development of safer, more reliable AI systems for all.
"What makes this challenge especially valuable is that it combines technical innovation with real-world application," said Dr. Xiangyu Zhang of Purdue University. "Our students aren’t just competing, they’re helping to solve real-world AI security challenges."
The Amazon Nova AI Challenge is part of Amazon’s broader commitment to responsible AI development and academic collaboration. For more information and updates, visit amazon.science/nova-ai-challenge.