Overview
Join leading researchers from academia and industry for an intensive, one-day symposium on the frontier of Trusted AI. Hosted by the Amazon AGI team behind the Nova models, this invitation-only gathering brings together distinguished scholars to address a central challenge: building powerful AI systems that are inherently safe and reliable.
Key dates
Abstract submission deadline: December 19, 2025
Notification of acceptance: January 7, 2026
RSVP deadline: Due to high demand, invitations are issued selectively, and only confirmed attendees will receive detailed event logistics via email by January 16, 2026
Event date: January 21, 2026
Call for posters: Closed
The submission deadline is December 19, 2025; posters must be submitted here.
All submissions are non-archival. Double submission is acceptable.
What to submit
- Format your submission following the NeurIPS 2025 LaTeX template; submit as a PDF.
- Extended abstract (1-2 pages) describing your research, methodology, and key findings
- Abstracts should follow a standard academic format (introduction, methods, results, conclusions)
- Include author names, affiliations, and email addresses
- Indicate if the work uses or evaluates Amazon Nova models
Topics of interest
We welcome submissions across all aspects of Trusted AI, including but not limited to:
- AI governance & policy: Frameworks, compliance, and regulatory approaches
- Evaluation & benchmarking in RAI: Metrics, testing methodologies, and assessment frameworks
- Risk mitigation: Techniques for reducing AI harms and unintended behaviors
- Security & safety: Adversarial robustness, secure deployment, and threat modeling
- Agent safety & reliability: Safety in agentic, autonomous, and multi-agent systems
- Explainability & interpretability: Making AI decisions transparent and understandable
- Reasoning & verification: Formal methods and logical reasoning in AI systems
- Observability & monitoring: Runtime monitoring and system transparency
- Red-teaming & stress testing: Adversarial testing and failure mode discovery
- Uncertainty quantification: Calibration, confidence estimation, and risk assessment
- Multi-modal safety: Trust across vision, language, and other modalities
- Multi-lingual considerations: Safety and fairness across languages and cultures
- Content moderation: Detection and filtering of harmful content
- Watermarking & provenance: Digital fingerprinting and attribution
- Deception: Detection of deceptive behaviors by models
Keynote speakers
Aaron Roth, Amazon Scholar and Professor at the University of Pennsylvania
Elias Bareinboim, Director of the Causal AI Lab, Columbia University
Participant benefits
Selected presenters will receive:
Networking opportunities: Connect with Amazon scientists and leading academics
Research collaboration: Potential pathways to ongoing partnerships with Amazon
Job opportunities: Engage with Amazon recruiters and hiring managers
Visibility: Showcase your work to key decision-makers in Trusted AI
Amazon Nova credits: Complimentary credits to explore and experiment with Nova models
Contact us
You can reach us at trusted-ai-symposium@amazon.com.