Amazon Research Awards (ARA) provides unrestricted funds and AWS Promotional Credits to academic researchers investigating various research topics in multiple disciplines. This cycle, ARA received many excellent research proposals from across the world and today is publicly announcing 63 award recipients who represent 41 universities in 8 countries.
This announcement includes awards funded under five calls for proposals during the spring 2025 cycle: AI for Information Security, Amazon Ads, AWS AI: Agentic AI, Build on Trainium, and Think Big. Proposals were reviewed for the quality of their scientific content and their potential to impact both the research community and society. Additionally, Amazon encourages the publication of research results, presentations of research at Amazon offices worldwide, and the release of related code under open-source licenses.
Recipients have access to more than 700 Amazon public datasets and can utilize AWS AI/ML services and tools through their AWS Promotional Credits. Recipients are also assigned an Amazon research contact who offers consultation and advice, along with opportunities to participate in Amazon events and training sessions.
"Amazon Research Awards are enabling incredibly impactful work to improve human health—from revolutionizing and democratizing structural biology tools, which can accelerate discovery of candidate molecules for new drugs to help patients, to predicting the etiology of a stroke in order to start the appropriate therapies, to interpreting digital phenotyping data to help with mental health services," said Christine Silvers, AWS Principal Healthcare Advisor. "These are just three examples of projects funded by Amazon Research Awards. The potential for improving healthcare among the spring 2025, past, and future awardees is staggering and inspiring."
"Academic AI researchers face a fundamental challenge: advancing machine learning research and educating the next generation requires access to cutting-edge infrastructure that's both powerful and affordable," said Yida Wang, AWS AI Principal Applied Scientist. "The Build on Trainium program directly addresses this barrier. We are working with leading AI research universities such as UC Berkeley, Stanford, CMU, MIT, UIUC, UCLA, and many others. At CMU, researchers achieved significant improvements over state-of-the-art FlashAttention in just one week. At MIT, researchers trained 3D medical imaging models with 50% higher throughput and lower cost, reducing training time from months to weeks. Build on Trainium represents AWS's commitment to democratizing AI research through collaborative partnership with academia—fostering an environment where researchers experiment freely, students learn on production-scale infrastructure, and academic innovations shape the future of machine learning for everyone."
The tables below list, in alphabetical order by last name, the spring 2025 cycle call-for-proposal recipients, sorted by research area.
AI for Information Security
| Recipient | University | Research title |
| --- | --- | --- |
|  | University of California, Berkeley | Design and Verification of High-Assurance Key Management Services for Stateful Confidential Computing |
|  | University of California, Irvine | Precise and Analyst-friendly Attack Provenance on Audit Logs with LLM |
|  | University of Virginia | Weakly-Supervised RLHF: Modeling Ambiguity and Uncertainty in Human Preferences |
|  | University of Southern California | Safe and Secure API Discovery for Agentic AI |
|  | Northeastern University | Understanding How LLMs Hack: Interpretable Vulnerability Detection and Remediation |
|  | University of California, Berkeley | Design and Verification of High-Assurance Key Management Services for Stateful Confidential Computing |
|  | University of Southern California | Safe and Secure API Discovery for Agentic AI |
|  | Northeastern University | Understanding How LLMs Hack: Interpretable Vulnerability Detection and Remediation |
Amazon Ads
| Recipient | University | Research title |
| --- | --- | --- |
|  | University of Illinois at Urbana–Champaign | Adversarial Misuse of Large Language Models in Digital Advertising: Benchmarking and Mitigation |
|  | University of Virginia | Adversarial Misuse of Large Language Models in Digital Advertising: Benchmarking and Mitigation |
AWS Agentic AI
| Recipient | University | Research title |
| --- | --- | --- |
|  | Massachusetts Institute of Technology | AutoDA-Sim: A Multi-Agent Framework for Safe, Aesthetic, and Aerodynamic Vehicle Design |
|  | University of Maryland, Baltimore County | Physics Co-Pilot: An LLM-Orchestrated Scientific Assistant for Physics Research |
|  | Carnegie Mellon University | Fine Grained Planning Evaluation for VLM Web Agents |
|  | Stony Brook University | Efficient and Effective Long-Horizon Reasoning for Interactive LLM Agents |
|  | Massachusetts Institute of Technology | Contextual Harm Mitigation and Automated Backtracking in Computer Use Agents |
|  | Purdue University, West Lafayette | Open-World Probabilistic Theory of Mind |
|  | Dartmouth College | Empowering Power Systems and Market Operations with Behavioral Generative Agents |
|  | Technical University of Munich | Functional Bug-Aware Software Testing via Intelligent Computer Use Agents |
|  | University of Edinburgh | Diffusion-inspired chain-of-thought self-revision |
|  | Carnegie Mellon University | Fine Grained Planning Evaluation for VLM Web Agents |
|  | Monash University | Functional Bug-Aware Software Testing via Intelligent Computer Use Agents |
|  | University of Washington, Seattle | Leveraging Multiple Representations in Multi-Agent Mobile App Interface Understanding and Task Execution |
|  | University of Pennsylvania | Efficient and Safe Protocols for Collaborative Agentic AI |
|  | University of California, Berkeley | Multi-Agent AI Alignment |
|  | The Chinese University of Hong Kong | WebAGI: VLM-Driven Framework for Robust Web Automation and Planning in Agentic AI |
|  | Boston University | Formidable yet Solvable: Scientific Computing Tasks for Agentic AI |
|  | University of Montreal | Foundation Agents and Protocol for Collaborative Agentic AI |
|  | University of Southern California | Improving the Efficiency of Web Agents |
|  | Cornell University | Artificial Collective Intelligence: The Structure and Dynamics of LLM Communities |
|  | University of Texas at Austin | Collaborative Continual Learning in Multimodal Multi-Agent Systems |
|  | University of California, San Diego | ReaL-Agent: A Retrieval-and-Reasoning Agent for Deep, Cross-Modality Retrieval |
|  | Delft University of Technology | Heterogeneous Multi-Agent Collaboration For Built-in Resilience |
|  | Carnegie Mellon University | OpenAgentSafety: Measuring and Mitigating Safety Harms of LLM-based AI Agent Interactions |
|  | Cornell University | Contextual Security for Multi-Agent Systems |
|  | Harvard University | Lifelong learning in agentic AI through gated memory modules |
|  | Harvard University | Interpreting Digital Phenotyping Data with LLM-Based Agentic Assistants for Mental Health Services |
|  | College of William & Mary | Structure Matters: Task-Optimized Topologies for LLM Agents |
|  | University of California, San Diego | Agentic World Representation |
|  | University of Minnesota, Twin Cities | NetGenius: Agentic AI for Next-Generation Wireless Network Autonomous Configurations and Intelligent Operations |
Build on Trainium
| Recipient | University | Research title |
| --- | --- | --- |
|  | Cornell University | VERA: Automated Testing for Improving the Reliability of Neuron Compiler Toolchain |
|  | Cornell University | Fast Adaptation of Multi-Modal Foundation Models for Robotic Perception and Control |
|  | Lieber Institute for Brain Development | Optimizing and scaling pretraining and preference-based fine-tuning of Large Chemical Models |
|  | University of California, Irvine | Automatic Kernel Synthesis and Tuning for AWS Trainium via Profile-Guided Graph Topology Optimization |
|  | Waseda University | Accelerating Vision-Language Autonomous Driving with AWS Trainium |
|  | University of California, Merced | Efficient Sparse Training with Adaptive Expert Parallelism on AWS Trainium |
|  | University of British Columbia | Efficient MoE LLMs via Pruning and Matryoshka Quantization on AWS Trainium |
|  | Waseda University | Accelerating Vision-Language Autonomous Driving with AWS Trainium |
|  | University of California, Merced | Accelerating Large Language and Reasoning Model Workloads with AWS Trainium |
|  | Tokyo City University | LLM for Software Modeling Brain in Multi Language |
|  | University of Massachusetts, Amherst | Overcoming Fundamental Reasoning Limitations of LLMs by Always Thinking before Writing |
|  | Purdue University, West Lafayette | Towards Communication-Efficient Distributed Training of Large Foundation Models by Dataflow-aware Optimizations |
|  | Lieber Institute for Brain Development | Optimizing and scaling pretraining and preference-based fine-tuning of Large Chemical Models |
|  | Kingston University London | Efficient Architectures for Genomic Variant Interpretation: Language Models for Non-Coding DNA Variant Analysis |
|  | Kingston University London | Efficient Architectures for Genomic Variant Interpretation: Language Models for Non-Coding DNA Variant Analysis |
|  | University of California, Berkeley | Learning Host–Microbial Genetic Element Interactions with Genomic Language Models |
|  | University of California, Irvine | Automatic Kernel Synthesis and Tuning for AWS Trainium via Profile-Guided Graph Topology Optimization |
|  | University of California, Berkeley | Learning Host–Microbial Genetic Element Interactions with Genomic Language Models |
|  | Indiana University Bloomington | AI-Powered Travel Pattern Detection in VR for Occupant Behavior Analysis Using AWS Trainium |
|  | University of Illinois Urbana-Champaign | Trainium-native MoE: Developing kernel and system optimizations for efficient and scalable MoE training |
Think Big
| Recipient | University | Research title |
| --- | --- | --- |
|  | University of North Carolina at Chapel Hill | Leveraging Molecular Dynamics to Empower Protein AI Models |
|  | Yale School of Medicine | AI-powered prediction of ischemic stroke etiologies using multi-modal data |
|  | Harvard Medical School | SBCloud – A Transformative Model for Scalable Structural Biology Research |