Generative AI is transforming every industry, with software development at the forefront of this revolution, redefining how code is conceived, written, deployed, and optimized. As AI-powered tools move beyond simple code generation to agents capable of complex application development, a critical industry challenge emerges: how do we ensure these powerful AI systems are not only highly capable but also inherently trustworthy and reliable?
Building AI agents that can plan, execute, and validate changes across entire codebases and user-facing applications requires an unwavering focus on security, accuracy, and safety. This is especially true in application development, where AI-driven decisions can have a direct impact on product quality, customer experience, and system integrity.
Today, Amazon is launching the next chapter of the Amazon Nova AI Challenge, an annual university competition created to recognize and advance students from around the globe who are shaping the future of artificial intelligence. The challenge gives student teams the opportunity to work on the latest problems in the field of AI and build innovative solutions.
Year 2 of the Amazon Nova AI Challenge focuses specifically on advancing trusted, useful agentic AI for software engineering. This year’s challenge centers on multi-step, agentic application development, in which teams build agents to tackle development tasks that mirror real engineering workflows. The new challenge requires teams to navigate the delicate balance between increasing utility (improving model performance to robustly handle more complex tasks) and maintaining safety. It calls for simultaneous progress on both fronts, recognizing that as models take on increasingly complex tasks, new safety risks can emerge. True advancement lies in improving a model’s effectiveness and its safeguards in tandem.
“Generative AI for software development has rapidly moved from code generation to systems that plan, build, and test changes across entire codebases and user-facing applications,” said Rohit Prasad, senior vice president and head scientist, Amazon AGI. “The focus of this year’s Nova Challenge reflects that shift. We’re inviting students to raise the bar on what these systems can do, while making sure they do it responsibly.”
How it works
Ten university teams will be selected from the 2026 Amazon Nova AI Challenge application pool to participate as either developer teams (building defenses and reliability into agentic coding systems) or red teams (probing systems to reveal failures and security weaknesses). Evaluations will measure task utility and safety together, rewarding systems that complete complex changes and feature development while upholding clear guardrails.
What’s new this year
- Agentic development: Multi-step, agent-based development that goes beyond single-prompt code generation.
- Utility and safety both matter: Teams advance by improving their systems on task performance and safety alike.
- Real-world evaluations: Benchmarks and tests are designed to reflect day-to-day engineering work.
Timeline and eligibility
Applications open November 10, 2025. Selected teams will compete across the academic year, with program resources, evaluations, and live tournaments shared after teams are selected.
Stay tuned: Detailed rules, eligibility requirements, application procedures, and deadlines will be posted on amazon.science/nova-ai-challenge in the coming days.