Vancouver, Canada

3 important themes from Amazon's 2019 NeurIPS papers

Time series forecasting, bandit problems, and optimization are integral to Amazon's efforts to deliver better value for its customers.

Last year, the roughly 2,000 publicly released tickets to the Conference on Neural Information Processing Systems, or NeurIPS, sold out in 12 minutes.

This year, the conference organizers moved to a lottery system, allowing aspiring attendees to register in advance and randomly selecting invitees from the pool of registrants. But they also bumped the number of public-release tickets up from around 2,000 to 3,500, testifying to the conference’s continued popularity.

At NeurIPS this year, there are 26 papers with Amazon coauthors. They cover a wide range of topics, but surveying their titles, Alex Smola, a vice president and distinguished scientist in the Amazon Web Services organization, discerns three prominent themes, all tied to Amazon’s efforts to deliver better value for its customers.

Those three themes are time series forecasting (and causality), bandit problems, and optimization.

1. Time series forecasting

Time series forecasting involves measuring some quantity over time — such as the number of deliveries in a particular region in the past six months, or the number of cloud servers required to support a particular site over the past two years — and attempting to project that quantity into the future.
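One of the oldest classical techniques for projecting such a quantity forward is exponential smoothing. The sketch below (with made-up delivery numbers, not Amazon data) shows the idea: the forecast is a running average that geometrically discounts older observations.

```python
# Minimal forecasting sketch: simple exponential smoothing, one of the
# century-old classical methods, projecting a demand series one step ahead.
# The delivery counts are invented for illustration.

def exponential_smoothing_forecast(series, alpha=0.5):
    """Return the one-step-ahead forecast: a weighted average that
    discounts older observations geometrically."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

weekly_deliveries = [120, 130, 125, 140, 150, 145]  # hypothetical data
print(exponential_smoothing_forecast(weekly_deliveries))  # → 143.125
```

The smoothing constant `alpha` trades responsiveness to recent data against stability; modern methods replace this fixed recurrence with learned models, but the one-step-ahead framing is the same.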

“That’s something that is very dear to Amazon’s heart,” Smola says. “For anything that Amazon does, it’s really beneficial to have a good estimate of what our customers will expect from us ahead of time. Only by being able to do that will we be able to satisfy customers’ demands, be it for products or services.”

A sequence of basis time series, forecast into the near future and summed together to approximate a new time series.
The paper “Think Globally, Act Locally” examines data sets with many correlated time series, such as the demand curves for millions of products sold online. The researchers describe a method for constructing a much smaller set of “basis time series”; the time series for any given product can be approximated by a weighted sum of the bases.
Courtesy of the researchers
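The weighted-sum structure that "Think Globally, Act Locally" exploits can be illustrated with a least-squares fit. This is only a sketch of the decomposition idea on synthetic data; the paper itself learns the basis series with a deep network.

```python
import numpy as np

# Hedged sketch: approximating one product's demand series as a weighted
# sum of a small set of "basis" time series, fit here by least squares.
# (The paper learns the bases with a deep network; this only illustrates
# the weighted-sum structure, on synthetic data.)

rng = np.random.default_rng(0)
T, k = 50, 3                        # 50 time steps, 3 basis series
bases = rng.normal(size=(k, T))     # stand-ins for learned basis series

true_weights = np.array([2.0, -1.0, 0.5])
product_series = true_weights @ bases   # a series that IS a weighted sum

# Recover the weights by least squares: minimize ||bases.T @ w - series||
w, *_ = np.linalg.lstsq(bases.T, product_series, rcond=None)
print(np.allclose(w, true_weights))     # → True: the sum reconstructs the series
```

Because every product shares the same small set of bases, forecasting millions of correlated series reduces to forecasting a handful of basis series plus per-product weights.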

The basic mathematical framework for time series forecasting is a century old, but the scale of modern forecasting problems calls for new analytic techniques, Smola says.

“Problems are nowadays highly multivariate,” Smola says. “If you look at the many millions of products that we offer, you want to be able to predict fairly well what will sell, where and to whom.

“You need to make reasonable assumptions on how this very large problem can be decomposed into smaller, more tractable pieces. You make structural approximations, and sometimes those structural approximations are what leads to very different algorithms.

“So you might, for instance, have a global model, and then you have local models that address the specific items or address the specific sales. If you look at ‘Think Globally, Act Locally’” — a NeurIPS paper whose first author is Rajat Sen, an applied scientist in the Amazon Search group — “it’s already in the title. Or look at ‘High-Dimensional Multivariate Forecasting with Low-Rank Gaussian Copula Processes’. In this case, you have a global structure, but it’s only in a small subspace where interesting things happen.”

Side-by-side images depict correlations between taxi traffic at different points in Manhattan at different times of day
The paper "High-Dimensional Multivariate Forecasting with Low-Rank Gaussian Copula Processes" describes a method for predicting correlations among many parallel time series. In one example, the researchers forecast correlations between the taxi traffic at different points in New York City at different times of day. Red lines indicate strong correlations; blue lines indicate strong negative correlations. Weekend midday traffic patterns (left) show negative correlations between locations near the Empire State Building, suggesting that taxis tend to prefer different routes depending on traffic conditions. Weekend evening traffic patterns show positive correlations between the vicinity of the Empire State Building and areas with high concentrations of hotels.
Courtesy of the researchers
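The "small subspace where interesting things happen" can be made concrete with a quick eigenvalue check. The sketch below builds synthetic series driven by two shared factors and confirms that their correlation matrix is essentially low-rank; the paper's actual method wraps a low-rank covariance in a Gaussian copula, which this does not attempt to reproduce.

```python
import numpy as np

# Hedged sketch of the low-rank intuition behind the copula paper: many
# parallel series driven by a few shared factors have a correlation
# matrix dominated by a few eigenvalues. Synthetic data only.

rng = np.random.default_rng(1)
n_series, T, r = 20, 500, 2
factors = rng.normal(size=(r, T))             # 2 shared latent factors
loadings = rng.normal(size=(n_series, r))
series = loadings @ factors + 0.1 * rng.normal(size=(n_series, T))

corr = np.corrcoef(series)                    # 20 x 20 correlation matrix
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]
print(eigvals[:r].sum() / eigvals.sum() > 0.8)   # top-2 eigenvalues dominate
```

In the taxi-traffic example, the shared factors play the role of citywide conditions (time of day, weather) that move many locations' traffic at once.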

An aspect of forecasting that has recently been drawing more attention, Smola says, is causality. Where traditional machine learning models merely infer statistical correlations between data points, “it is ultimately the causal relationship that matters,” Smola says.

“I think that causality is one of the most interesting conceptual developments affecting modern machine learning,” says Bernhard Schölkopf, like Smola a vice president and distinguished scientist in Amazon Web Services. “This is the main topic that I have been interested in for the last decade.”

Two of Schölkopf’s NeurIPS papers — “Perceiving the Arrow of Time in Autoregressive Motion” and “Selecting Causal Brain Features with a Single Conditional Independence Test per Feature” — address questions of causality, as does “Causal Regularization”, a paper by Dominik Janzing, a senior research scientist in Smola’s group.

“Normal machine learning builds on correlations or other statistical dependences,” Schölkopf explains. “This is fine as long as the source of the data doesn't change. For example, if in the training set of an image recognition system, all cows are standing on green pasture, then it is fine for an ML system to use the green as a useful feature in recognizing cows, as long as the test set looks the same. If in the test set, the cows are standing on the beach, then such a purely statistical system can fail.

“More generally: causal learning and inference attempts to understand how systems respond to interventions and other changes, and not just how to predict data that looks more or less the same as the training data.”
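A toy version of Schölkopf's cow example makes the failure mode concrete. The "classifier" below is a deliberately crude statistical shortcut that predicts the label purely from the co-occurring background; all data are invented.

```python
# Hedged toy version of the cow-on-pasture example: a purely statistical
# learner latches onto a spuriously correlated feature (the background)
# and fails when the data source changes. All data are invented.

train = [("cow", "pasture"), ("cow", "pasture"), ("car", "road"), ("car", "road")]

# "Training": record which label each background co-occurred with.
background_to_label = {bg: label for label, bg in train}

def predict(background):
    # Statistical shortcut: classify purely from the background feature.
    return background_to_label.get(background, "unknown")

print(predict("pasture"))   # → "cow": works while the test set looks like training
print(predict("beach"))     # → "unknown": cows on the beach break the shortcut
```

A causal model would instead rely on features of the cow itself, which keep predicting correctly under the intervention "move the cow to the beach".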

2. Bandit problems

The second major theme that Smola discerns in Amazon scientists’ NeurIPS papers is a concern with bandit problems, a phrase that shows up in the titles of Amazon papers such as “MaxGap Bandit: Adaptive Algorithms for Approximate Ranking” and “Low-Rank Bandit Methods for High-Dimensional Dynamic Pricing”. Bandit problems take their name from one-armed bandits, or slot machines.

“It used to be that those bandits were all mechanical, so there would be slight variations between them, and some would maybe have a slightly higher return than others,” Smola explains. “I walk into a den of iniquity, and I want to find the one-armed bandit where I will lose the least money or maybe make some money. And the only feedback I have is that I pull arms, and I get money or lose money. These are very unreliable, noisy events.”

Bandit problems present what’s known as an explore-exploit trade-off. The gambler must simultaneously explore the environment — determine which machines pay out the most — and exploit the resulting knowledge — concentrate as much money as possible on the high-return machines. Early work on bandit problems concerned identifying the high-return machines with minimal outlays.
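The classic trade-off can be sketched with the simplest standard strategy, epsilon-greedy: mostly pull the arm that looks best so far (exploit), but with small probability try a random arm (explore). The payout probabilities below are invented.

```python
import random

# Hedged sketch of the explore-exploit trade-off: an epsilon-greedy
# gambler facing three slot machines with unknown payout probabilities.
# The payout numbers are invented for illustration.

def epsilon_greedy(payout_probs, pulls=5000, eps=0.1, seed=0):
    rng = random.Random(seed)
    counts = [0] * len(payout_probs)
    values = [0.0] * len(payout_probs)   # running mean reward per arm
    for _ in range(pulls):
        if rng.random() < eps:                        # explore: random arm
            arm = rng.randrange(len(payout_probs))
        else:                                         # exploit: current best arm
            arm = max(range(len(payout_probs)), key=lambda a: values[a])
        reward = 1.0 if rng.random() < payout_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts

counts = epsilon_greedy([0.2, 0.5, 0.8])
print(counts.index(max(counts)))   # the high-payout arm gets the most pulls
```

Early work on bandits refined exactly this kind of strategy, bounding how much money must be spent on exploration before the best arm is reliably identified.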

“That problem was solved about 20 years ago,” Smola says. “What hasn’t been solved — and this is where things get a lot more interesting — is once you start adding context. Imagine that I get to show you various results as you’re searching for your next ugly Christmas sweater. The unfortunate thing is that the creativity of sweater designers is larger than what you can fit on a page. Now the context is essentially, what time, where from, which user, all those things. We want to find and recommend the ugly Christmas sweater that works specifically for you. This is an example where context is immediately relevant.”
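The contextual version Smola describes can be sketched by running one learner per context, so that the "best sweater" is learned separately for each user segment. The segments and click probabilities below are invented; real contextual-bandit algorithms (LinUCB and relatives) share structure across contexts rather than treating them independently.

```python
import random

# Hedged sketch of a *contextual* bandit: the best arm (sweater to show)
# depends on the context (here, a coarse user segment). One epsilon-greedy
# learner per context; all click probabilities are invented.

# True click probability of each of two sweaters, per user segment.
payoffs = {
    "night_owl":  [0.1, 0.7],
    "early_bird": [0.6, 0.2],
}

def contextual_bandit(payoffs, rounds=4000, eps=0.1, seed=0):
    rng = random.Random(seed)
    stats = {ctx: [[0, 0.0] for _ in arms] for ctx, arms in payoffs.items()}
    for _ in range(rounds):
        ctx = rng.choice(list(payoffs))           # a user arrives with a context
        arms = stats[ctx]
        if rng.random() < eps:
            arm = rng.randrange(len(arms))        # explore
        else:
            arm = max(range(len(arms)), key=lambda a: arms[a][1])  # exploit
        reward = 1.0 if rng.random() < payoffs[ctx][arm] else 0.0
        arms[arm][0] += 1
        arms[arm][1] += (reward - arms[arm][1]) / arms[arm][0]
    # Report the best learned arm for each context.
    return {ctx: max(range(len(stats[ctx])), key=lambda a: stats[ctx][a][1])
            for ctx in payoffs}

print(contextual_bandit(payoffs))   # a different best arm per context
```

The point of the example: without context, no single arm is best; conditioning on who is asking, and when, changes the answer.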

It’s really beneficial to have a good estimate of what our customers will expect from us ahead of time. Only by being able to do that will we be able to satisfy customers’ demands.
Alex Smola, VP and distinguished scientist, Amazon

In the bandit-problem framework, in other words, the high-payout machines change with every new interaction. But there may be external signals that indicate how they’re changing.

Distributed computing, which is inescapable for today’s large websites, changes the structure of the bandit problem, too.

“Say you go to a restaurant, and the cook wants to improve the menu,” Smola says. “You can try out lots of new menu items, and that’s a good way to improve the menu overall. But if you start offering a lot of undercooked dishes because you’re experimenting, then at some point your loyal customers will stay away.

“Now imagine you have 100 restaurants, and they all do the same thing at the same time. They can’t necessarily communicate at the per-second level; maybe every day or every week they chat with each other. Now this entire exploration problem becomes a little more challenging, because if two restaurants try out the same undercooked dish, you make the customer less happy than you could have.

“So how does this map back into Amazon land? Well, if you have many servers doing this recommendation, the explore-exploit trade-off might be too aggressive if every one of them works on their own.”
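The cost of uncoordinated exploration in Smola's restaurant analogy can be made concrete with a small simulation: independent explorers that cannot sync instantly end up repeating each other's risky experiments. All numbers are invented.

```python
import random

# Hedged sketch of the distributed-exploration problem: many servers
# explore independently between syncs, so the same risky experiment
# (an "undercooked dish") gets inflicted on customers more than once.
# All numbers are invented.

def duplicated_explorations(n_servers=100, n_dishes=500, tries_each=5, seed=0):
    rng = random.Random(seed)
    tried = set()
    duplicates = 0
    for _ in range(n_servers):
        for _ in range(tries_each):
            dish = rng.randrange(n_dishes)   # each server explores on its own
            if dish in tried:
                duplicates += 1              # a repeat of an experiment already run
            tried.add(dish)
    return duplicates

print(duplicated_explorations() > 0)   # independent exploration wastes trials
```

With instant coordination the duplicates would be zero; the research question is how close one can get when servers sync only periodically.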

3. Optimization

Finally, Smola says, “There is a third category of results that has to do with making algorithms faster. If you look at ‘Primal-Dual Block Frank-Wolfe’, ‘Communication-Efficient Distributed SGD with Sketching’, ‘Qsparse-Local-SGD’ — those are the workhorses that run underneath all of this. Making them more efficient is obviously something that we care about, so we can respond to customer requests faster, train algorithms faster.”
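One communication-efficiency trick that papers in this family build on is gradient sparsification: before a worker ships its gradient to the others, it keeps only the few largest-magnitude entries. The sketch below shows top-k sparsification in isolation; the actual papers combine it with quantization, error feedback, and local steps, which this omits.

```python
import numpy as np

# Hedged sketch of the communication-efficiency idea behind papers like
# "Qsparse-local-SGD": keep only the k largest-magnitude entries of a
# gradient before communicating it, shrinking the message while roughly
# preserving the update. The gradient values are invented.

def top_k_sparsify(grad, k):
    """Zero out all but the k largest-magnitude gradient entries."""
    sparse = np.zeros_like(grad)
    idx = np.argsort(np.abs(grad))[-k:]   # indices of the k biggest entries
    sparse[idx] = grad[idx]
    return sparse

grad = np.array([0.1, -3.0, 0.05, 2.0, -0.2])
print(top_k_sparsify(grad, 2))   # only the two largest-magnitude entries survive
```

Sending 2 values plus indices instead of the full vector cuts communication sharply; in practice the discarded mass is accumulated locally and added back in later rounds so the updates stay unbiased over time.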

Bird’s-eye view

NeurIPS is a huge conference, with more than 1,400 accepted papers that cover a bewildering variety of topics. Beyond the Amazon papers, Caltech professor and Amazon fellow Pietro Perona identifies three research areas as growing in popularity.

“One is understanding how deep networks work, so that we can better design architectures and optimization algorithms to train models,” Perona says. “Another is low-shot learning. Machines are still much less efficient than humans at learning, in that they need more training examples to achieve the same performance. And finally, AI and society — identifying opportunities for social good, sustainable development, and the like.”

NeurIPS is being held this year at the Vancouver Convention Center, and the main conference runs from Dec. 8 to Dec. 12. The Women in Machine Learning Workshop, for which Amazon is a gold-level sponsor, takes place on Dec. 9; the Third Conversational AI workshop, whose organizers include Alexa AI principal scientist Dilek Hakkani-Tür, will be held on Dec. 14.

Amazon's involvement at NeurIPS

Paper and presentation schedule

Tuesday, 12/10 | 10:45-12:45pm | East Exhibition Hall B&C

A Meta-MDP Approach to Exploration for Lifelong Reinforcement Learning | #192
Francisco Garcia (UMass Amherst/Amazon) · Philip Thomas (UMass Amherst)

Blocking Bandits | #17
Soumya Basu (UT Austin) · Rajat Sen (UT Austin/Amazon) · Sujay Sanghavi (UT Austin/Amazon) · Sanjay Shakkottai (UT Austin)

Causal Regularization | #180
Dominik Janzing (Amazon)

Communication-Efficient Distributed SGD with Sketching | #81
Nikita Ivkin (Amazon) · Daniel Rothchild (University of California, Berkeley) · Md Enayat Ullah (Johns Hopkins University) · Vladimir Braverman (Johns Hopkins University) · Ion Stoica (UC Berkeley) · Raman Arora (Johns Hopkins University)

Learning Distributions Generated by One-Layer ReLU Networks | #49
Shanshan Wu (UT Austin) · Alexandros G. Dimakis (UT Austin) · Sujay Sanghavi (UT Austin/Amazon)

Tuesday, 12/10 | 5:30-7:30pm | East Exhibition Hall B&C

Efficient Communication in Multi-Agent Reinforcement Learning via Variance Based Control | #195
Sai Qian Zhang (Harvard University) · Qi Zhang (Amazon) · Jieyu Lin (University of Toronto)

Extreme Classification in Log Memory using Count-Min Sketch: A Case Study of Amazon Search with 50M Products | #37
Tharun Kumar Reddy Medini (Rice University) · Qixuan Huang (Rice University) · Yiqiu Wang (Massachusetts Institute of Technology) · Vijai Mohan (Amazon) · Anshumali Shrivastava (Rice University/Amazon)

Iterative Least Trimmed Squares for Mixed Linear Regression | #50
Yanyao Shen (UT Austin) · Sujay Sanghavi (UT Austin/Amazon)

Meta-Surrogate Benchmarking for Hyperparameter Optimization | #6
Aaron Klein (Amazon) · Zhenwen Dai (Spotify) · Frank Hutter (University of Freiburg) · Neil Lawrence (University of Cambridge) · Javier Gonzalez (Amazon)

Qsparse-local-SGD: Distributed SGD with Quantization, Sparsification and Local Computations | #32
Debraj Basu (Adobe) · Deepesh Data (UCLA) · Can Karakus (Amazon) · Suhas Diggavi (UCLA)

Selecting Causal Brain Features with a Single Conditional Independence Test per Feature | #139
Atalanti Mastakouri (Max Planck Institute for Intelligent Systems) · Bernhard Schölkopf (MPI for Intelligent Systems/Amazon) · Dominik Janzing (Amazon)

Wednesday, 12/11 | 10:45-12:45pm | East Exhibition Hall B&C

On Single Source Robustness in Deep Fusion Models | #93
Taewan Kim (Amazon) · Joydeep Ghosh (UT Austin)

Perceiving the Arrow of Time in Autoregressive Motion | #155
Kristof Meding (University Tübingen) · Dominik Janzing (Amazon) · Bernhard Schölkopf (MPI for Intelligent Systems/Amazon) · Felix A. Wichmann (University of Tübingen)

Wednesday, 12/11 | 5:00-7:00pm | East Exhibition Hall B&C

Compositional De-Attention Networks | #127
Yi Tay (Nanyang Technological University) · Anh Tuan Luu (MIT) · Aston Zhang (Amazon) · Shuohang Wang (Singapore Management University) · Siu Cheung Hui (Nanyang Technological University)

Low-Rank Bandit Methods for High-Dimensional Dynamic Pricing | #3
Jonas Mueller (Amazon) · Vasilis Syrgkanis (Microsoft Research) · Matt Taddy (Amazon)

MaxGap Bandit: Adaptive Algorithms for Approximate Ranking | #4
Sumeet Katariya (Amazon/University of Wisconsin-Madison) · Ardhendu Tripathy (UW Madison) · Robert Nowak (UW Madison)

Primal-Dual Block Generalized Frank-Wolfe | #165
Qi Lei (UT Austin) · Jiacheng Zhuo (UT Austin) · Constantine Caramanis (UT Austin) · Inderjit S Dhillon (Amazon/UT Austin) · Alexandros Dimakis (UT Austin)

Towards Optimal Off-Policy Evaluation for Reinforcement Learning with Marginalized Importance Sampling | #208
Tengyang Xie (University of Illinois at Urbana-Champaign) · Yifei Ma (Amazon) · Yu-Xiang Wang (UC Santa Barbara)

Thursday, 12/12 | 10:45-12:45pm | East Exhibition Hall B&C

AutoAssist: A Framework to Accelerate Training of Deep Neural Networks | #155
Jiong Zhang (UT Austin) · Hsiang-Fu Yu (Amazon) · Inderjit S Dhillon (UT Austin/Amazon)

Exponentially Convergent Stochastic k-PCA without Variance Reduction | #200 (oral, 10:05-10:20 W Ballroom C)
Cheng Tang (Amazon)

Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift | #54
Stephan Rabanser (Technical University of Munich/Amazon) · Stephan Günnemann (Technical University of Munich) · Zachary Lipton (Carnegie Mellon University/Amazon)

High-Dimensional Multivariate Forecasting with Low-Rank Gaussian Copula Processes | #107
David Salinas (Naverlabs) · Michael Bohlke-Schneider (Amazon) · Laurent Callot (Amazon) · Jan Gasthaus (Amazon) · Roberto Medico (Ghent University)

Learning Search Spaces for Bayesian Optimization: Another View of Hyperparameter Transfer Learning | #30
Valerio Perrone (Amazon) · Huibin Shen (Amazon) · Matthias Seeger (Amazon) · Cedric Archambeau (Amazon) · Rodolphe Jenatton (Amazon)

Mo’States Mo’Problems: Emergency Stop Mechanisms from Observation | #227
Samuel Ainsworth (University of Washington) · Matt Barnes (University of Washington) · Siddhartha Srinivasa (University of Washington/Amazon)

Think Globally, Act Locally: A Deep Neural Network Approach to High-Dimensional Time Series Forecasting | #113
Rajat Sen (Amazon) · Hsiang-Fu Yu (Amazon) · Inderjit S Dhillon (UT Austin/Amazon)

Thursday, 12/12 | 5:00-7:00pm | East Exhibition Hall B&C

Dynamic Local Regret for Non-Convex Online Forecasting | #20
Sergul Aydore (Stevens Institute of Technology) · Tianhao Zhu (Stevens Institute of Technology) · Dean Foster (Amazon)

Interaction Hard Thresholding: Consistent Sparse Quadratic Regression in Sub-quadratic Time and Space | #47
Suo Yang (UT Austin) · Yanyao Shen (UT Austin) · Sujay Sanghavi (UT Austin/Amazon)

Inverting Deep Generative Models, One Layer at a Time | #48
Qi Lei (University of Texas at Austin) · Ajil Jalal (UT Austin) · Inderjit S Dhillon (UT Austin/Amazon) · Alexandros Dimakis (UT Austin)

Provable Non-linear Inductive Matrix Completion | #215
Kai Zhong (Amazon) · Zhao Song (UT Austin) · Prateek Jain (Microsoft Research) · Inderjit S Dhillon (UT Austin/Amazon)

Amazon researchers on NeurIPS committees and boards

  • Bernhard Schölkopf – Advisory Board
  • Michael I. Jordan – Advisory Board
  • Thorsten Joachims – senior area chair
  • Anshumali Shrivastava – area chair
  • Cedric Archambeau – area chair
  • Peter Gehler – area chair
  • Sujay Sanghavi – committee member

Workshops

Learning with Rich Experience: Integration of Learning Paradigms

Paper: "Meta-Q-Learning" | Rasool Fakoor, Pratik Chaudhari, Stefano Soatto, Alexander J. Smola

Human-Centric Machine Learning

Paper: "Learning Fair and Transferable Representations" | Luca Oneto, Michele Donini, Andreas Maurer, Massimiliano Pontil

Bayesian Deep Learning

Paper: "Online Bayesian Learning for E-Commerce Query Reformulation" | Gaurush Hiranandani, Sumeet Katariya, Nikhil Rao, Karthik Subbian

Meta-Learning

Paper: "Constrained Bayesian Optimization with Max-Value Entropy Search" | Valerio Perrone, Iaroslav Shcherbatyi, Rodolphe Jenatton, Cedric Archambeau, Matthias Seeger

Paper: "A Quantile-Based Approach to Hyperparameter Transfer Learning" | David Salinas, Huibin Shen, Valerio Perrone

Paper: "A Baseline for Few-Shot Image Classification" | Guneet Singh Dhillon, Pratik Chaudhari, Avinash Ravichandran, Stefano Soatto

Conversational AI

Organizer: Dilek Hakkani-Tür

Paper: "The Eighth Dialog System Technology Challenge" | Seokhwan Kim, Michel Galley, Chulaka Gunasekara, Sungjin Lee, Adam Atkinson, Baolin Peng, Hannes Schulz, Jianfeng Gao, Jinchao Li, Mahmoud Adada, Minlie Huang, Luis Lastras, Jonathan K. Kummerfeld, Walter S. Lasecki, Chiori Hori, Anoop Cherian, Tim K. Marks, Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta

Paper: “Just Ask: An Interactive Learning Framework for Vision and Language Navigation” | Ta-Chung Chi, Minmin Shen, Mihail Eric, Seokhwan Kim, Dilek Hakkani-Tur

Paper: “MA-DST: Multi-Attention-Based Scalable Dialog State Tracking” | Adarsh Kumar, Peter Ku, Anuj Kumar Goyal, Angeliki Metallinou, Dilek Hakkani-Tür

Paper: “Investigation of Error Simulation Techniques for Learning Dialog Policies for Conversational Error Recovery” | Maryam Fazel-Zarandi, Longshaokan Wang, Aditya Tiwari, Spyros Matsoukas

Paper: “Towards Personalized Dialog Policies for Conversational Skill Discovery” | Maryam Fazel-Zarandi, Sampat Biswas, Ryan Summers, Ahmed Elmalt, Andy McCraw, Michael McPhillips, John Peach

Paper: “Conversation Quality Evaluation via User Satisfaction Estimation” | Praveen Kumar Bodigutla, Spyros Matsoukas, Lazaros Polymenakos

Paper: “Multi-domain Dialogue State Tracking as Dynamic Knowledge Graph Enhanced Question Answering” | Li Zhou, Kevin Small

Science Meets Engineering of Deep Learning

Paper: "X-BERT: eXtreme Multi-label Text Classification using Bidirectional Encoder from Transformers" | Wei-Cheng Chang, Hsiang-Fu Yu, Kai Zhong, Yiming Yang, Inderjit S. Dhillon

Machine Learning with Guarantees

Organizers: Ben London, Thorsten Joachims
Program Committee: Kevin Small, Shiva Kasiviswanathan, Ted Sandler

MLSys: Workshop on Systems for ML

Paper: "Block-Distributed Gradient Boosted Trees" | Theodore Vasiloudis, Hyunsu Cho, Henrik Boström

Women in Machine Learning

Gold sponsor: Amazon

Research areas

Related content

US, MA, Boston
AI is the most transformational technology of our time, capable of tackling some of humanity’s most challenging problems. That is why Amazon is investing in generative AI (GenAI) and the responsible development and deployment of large language models (LLMs) across all of our businesses. Come build the future of human-technology interaction with us. We are looking for an Applied Scientist with strong technical skills which includes coding and natural language processing experience in dataset construction, training and evaluating models, and automatic processing of large datasets. You will play a critical role in driving innovation and advancing the state-of-the-art in natural language processing and machine learning. You will work closely with cross-functional teams, including product managers, language engineers, and other scientists. Key job responsibilities Specifically, the Applied Scientist will: • Ensure quality of speech/language/other data throughout all stages of acquisition and processing, including data sourcing/collection, ground truth generation, normalization, transformation, cross-lingual alignment/mapping, etc. • Clean, analyze and select speech/language/other data to achieve goals • Build and test models that elevate the customer experience • Collaborate with colleagues from science, engineering and business backgrounds • Present proposals and results in a clear manner backed by data and coupled with actionable conclusions • Work with engineers to develop efficient data querying infrastructure for both offline and online use cases
US, CA, San Francisco
The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Member of Technical Staff with a strong deep learning background, to build industry-leading Generative Artificial Intelligence (GenAI) technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As a Member of Technical Staff with the AGI team, you will lead the development of algorithms and modeling techniques, to advance the state of the art with LLMs. You will lead the foundational model development in an applied research role, including model training, dataset design, and pre- and post-training optimization. Your work will directly impact our customers in the form of products and services that make use of GenAI technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate advances in LLMs. About the team The AGI team has a mission to push the envelope in GenAI with LLMs and multimodal systems, in order to provide the best-possible experience for our customers.
US, CA, Sunnyvale
As a Principal Scientist in the Artificial General Intelligence (AGI) organization, you are a trusted part of the technical leadership. You bring business and industry context to science and technology decisions. You set the standard for scientific excellence and make decisions that affect the way we build and integrate algorithms. You solicit differing views across the organization and are willing to change your mind as you learn more. Your artifacts are exemplary and often used as reference across organization. You are a hands-on scientific leader. Your solutions are exemplary in terms of algorithm design, clarity, model structure, efficiency, and extensibility. You tackle intrinsically hard problems, acquiring expertise as needed. You decompose complex problems into straightforward solutions. You amplify your impact by leading scientific reviews within your organization or at your location. You scrutinize and review experimental design, modeling, verification and other research procedures. You probe assumptions, illuminate pitfalls, and foster shared understanding. You align teams toward coherent strategies. You educate, keeping the scientific community up to date on advanced techniques, state of the art approaches, the latest technologies, and trends. You help managers guide the career growth of other scientists by mentoring and play a significant role in hiring and developing scientists and leads. You will play a critical role in driving the development of Generative AI (GenAI) technologies that can handle Amazon-scale use cases and have a significant impact on our customers' experiences. Key job responsibilities You will be responsible for defining key research directions, adopting or inventing new machine learning techniques, conducting rigorous experiments, publishing results, and ensuring that research is translated into practice. You will develop long-term strategies, persuade teams to adopt those strategies, propose goals and deliver on them. 
You will also participate in organizational planning, hiring, mentorship and leadership development. You will be technically exceptional with a passion for building scalable science and engineering solutions. You will serve as a key scientific resource in full-cycle development (conception, design, implementation, testing to documentation, delivery, and maintenance).
US, NY, New York
Do you want to leverage your expertise in translating innovative science into impactful products to improve the lives and work of over a million people worldwide? If so, People eXperience Technology Central Science (PXTCS) would love to discuss how you can make that a reality. PXTCS is an interdisciplinary team that uses economics, behavioral science, statistics, and machine learning to identify products, mechanisms, and process improvements that enhance Amazonians' well-being and their ability to deliver value for Amazon's customers. We collaborate with HR teams across Amazon to make Amazon PXT the most scientific human resources organization in the world. In this role, you will spearhead science design and technical implementation innovations across our predictive modeling and forecasting work-streams. You'll enhance existing models and create new ones, empowering leaders throughout Amazon to make data-driven business decisions. You'll collaborate with scientists and engineers to deliver solutions while working closely with business stakeholders to address their specific needs. Your work will span various business domains (corporate, operations, safety) and analysis levels (individual, group, organizational), utilizing a range of modeling approaches (linear, tree-based, deep neural networks, and LLM-based). You'll develop end-to-end ML solutions from problem formulation to deployment, maintaining high scientific standards and technical excellence throughout the process. As a Sr. Applied Scientist, you'll also contribute to the team's science strategy, keeping pace with emerging AI/ML trends. You'll mentor junior scientists, fostering their growth by identifying high-impact opportunities. Your guidance will span different analysis levels and modeling approaches, enabling stakeholders to make informed, strategic decisions. 
If you excel at building advanced scientific solutions and are passionate about developing technologies that drive organizational change in the AI era, join us as we work hard, have fun, and make history.
US, CA, Sunnyvale
Prime Video is a first-stop entertainment destination offering customers a vast collection of premium programming in one app available across thousands of devices. Prime members can customize their viewing experience and find their favorite movies, series, documentaries, and live sports – including Amazon MGM Studios-produced series and movies; licensed fan favorites; and programming from Prime Video subscriptions such as Apple TV+, HBO Max, Peacock, Crunchyroll and MGM+. All customers, regardless of whether they have a Prime membership or not, can rent or buy titles via the Prime Video Store, and can enjoy even more content for free with ads. Are you interested in shaping the future of entertainment? Prime Video's technology teams are creating best-in-class digital video experience. As a Prime Video team member, you’ll have end-to-end ownership of the product, user experience, design, and technology required to deliver state-of-the-art experiences for our customers. You’ll get to work on projects that are fast-paced, challenging, and varied. You’ll also be able to experiment with new possibilities, take risks, and collaborate with remarkable people. We’ll look for you to bring your diverse perspectives, ideas, and skill-sets to make Prime Video even better for our customers. With global opportunities for talented technologists, you can decide where a career Prime Video Tech takes you! Key job responsibilities As an Applied Scientist at Prime Video, you will have end-to-end ownership of the product, related research and experimentation, applying advanced machine learning techniques in computer vision (CV), Generative AI, multimedia understanding and so on. You’ll work on diverse projects that enhance Prime Video’s content localization, image/video understanding, and content personalization, driving impactful innovations for our global audience. 
Other responsibilities include: - Research and develop generative models for controllable synthesis across images, video, vector graphics, and multimedia - Innovate in advanced diffusion and flow-based methods (e.g., inverse flow matching, parameter efficient training, guided sampling, test-time adaptation) to improve efficiency, controllability, and scalability. - Advance visual grounding, depth and 3D estimation, segmentation, and matting for integration into pre-visualization, compositing, VFX, and post-production pipelines. - Design multimodal GenAI workflows including visual-language model tooling, structured prompt orchestration, agentic pipelines. A day in the life Prime Video is pioneering the use of Generative AI to empower the next generation of creatives. Our mission is to make world-class media creation accessible, scalable, and efficient. We are seeking an Applied Scientist to advance the state of the art in Generative AI and to deliver these innovations as production-ready systems at Amazon scale. Your work will give creators unprecedented freedom and control while driving new efficiencies across Prime Video’s global content and marketing pipelines. This is a newly formed team within Prime Video Science!
US, WA, Seattle
Are you fascinated by the power of Large Language Models (LLM) and applying Generative AI to solve complex challenges within one of Amazon's most significant businesses? Amazon Selection and Catalog Systems (ASCS) builds the systems that host and run the world's largest e-Commerce products catalog, it powers the online buying experience for customers worldwide so they can find, discover and buy anything they want. Amazon's customers rely on the completeness, consistency and correctness of Amazon's product data to make well-informed purchase decisions. We develop LLM applications that make Catalog the best-in-class source of product information for all products worldwide. This problem is challenging due to sheer scale (billions of products in the catalog), diversity (products ranging from electronics to groceries) and multitude of input sources (millions of sellers contributing product data with different quality). We are seeking a passionate, talented, and inventive individual to join the Catalog AI team and help build industry-leading technologies that customers will love. You will apply machine learning and large language model techniques, such as fine-tuning, reinforcement learning, and prompt optimization, to solve real customer problems. You will work closely with scientists and engineers to experiment with new methods, run large-scale evaluations, and bring research ideas into production. Key job responsibilities * Design and implement LLM-based solutions to improve catalog data quality and completeness * Conduct experiments and A/B tests to validate model improvements and measure business impact * Optimize large language models for quality and cost on catalog-specific tasks * Collaborate with engineering teams to deploy models at scale serving billions of products
US, CA, San Francisco
Join the next revolution in robotics at Amazon's Frontier AI & Robotics team, where you'll work alongside world-renowned AI pioneers to push the boundaries of what's possible in robotic intelligence. As an Applied Scientist, you'll be at the forefront of developing breakthrough foundation models that enable robots to perceive, understand, and interact with the world in unprecedented ways. You'll drive independent research initiatives in areas such as perception, manipulation, scene understanding, sim2real transfer, multi-modal foundation models, and multi-task learning, designing novel algorithms that bridge the gap between state-of-the-art research and real-world deployment at Amazon scale. In this role, you'll balance innovative technical exploration with practical implementation, collaborating with platform teams to ensure your models and algorithms perform robustly in dynamic real-world environments. You'll have access to Amazon's vast computational resources, enabling you to tackle ambitious problems in areas like very large multi-modal robotic foundation models and efficient, promptable model architectures that can scale across diverse robotic applications. 
Key job responsibilities
- Design and implement novel deep learning architectures that push the boundaries of what robots can understand and accomplish
- Drive independent research initiatives in robotics foundation models, focusing on breakthrough approaches in perception and manipulation, for example open-vocabulary panoptic scene understanding, scaling up multi-modal LLMs, sim2real/real2sim techniques, end-to-end vision-language-action models, efficient model inference, and video tokenization
- Lead technical projects from conceptualization through deployment, ensuring robust performance in production environments
- Collaborate with platform teams to optimize and scale models for real-world applications
- Contribute to the team's technical strategy and help shape our approach to next-generation robotics challenges

A day in the life
- Design and implement novel foundation model architectures, leveraging our extensive compute infrastructure to train and evaluate at scale
- Collaborate with our world-class research team to solve complex technical challenges
- Lead technical initiatives from conception to deployment, working closely with robotics engineers to integrate your solutions into production systems
- Participate in technical discussions and brainstorming sessions with team leaders and fellow scientists
- Leverage our massive compute cluster and extensive robotics infrastructure to rapidly prototype and validate new ideas
- Transform theoretical insights into practical solutions that can handle the complexities of real-world robotics applications

Amazon offers a full range of benefits that support you and eligible family members, including domestic partners and their children. Benefits can vary by location, the number of regularly scheduled hours you work, length of employment, and job status such as seasonal or temporary employment. The benefits that generally apply to regular, full-time employees include:
1. Medical, Dental, and Vision Coverage
2. Maternity and Parental Leave Options
3. Paid Time Off (PTO)
4. 401(k) Plan

If you are not sure that every qualification on the list above describes you exactly, we'd still love to hear from you! At Amazon, we value people with unique backgrounds, experiences, and skill sets. If you’re passionate about this role and want to make an impact on a global scale, please apply!

About the team
At Frontier AI & Robotics, we're not just advancing robotics – we're reimagining it from the ground up. Our team is building the future of intelligent robotics through groundbreaking foundation models and end-to-end learned systems. We tackle some of the most challenging problems in AI and robotics, from developing sophisticated perception systems to creating adaptive manipulation strategies that work in complex, real-world scenarios. What sets us apart is our unique combination of ambitious research vision and practical impact. We leverage Amazon's massive computational infrastructure and rich real-world datasets to train and deploy state-of-the-art foundation models. Our work spans the full spectrum of robotics intelligence – from multimodal perception using images, videos, and sensor data, to sophisticated manipulation strategies that can handle diverse real-world scenarios. We're building systems that don't just work in the lab, but scale to meet the demands of Amazon's global operations. Join us if you're excited about pushing the boundaries of what's possible in robotics, working with world-class researchers, and seeing your innovations deployed at unprecedented scale.
ES, Barcelona
Are you an MS or PhD student interested in a 2026 internship in the field of machine learning, deep learning, generative AI, large language models, speech technology, robotics, computer vision, optimization, operations research, quantum computing, automated reasoning, or formal methods? If so, we want to hear from you! We are looking for students interested in using a variety of domain expertise to invent, design, and implement state-of-the-art solutions for never-before-solved problems. You can find more information about the Amazon Science community, as well as our interview process, via the links below:
https://www.amazon.science/
https://amazon.jobs/content/en/career-programs/university/science
https://amazon.jobs/content/en/how-we-hire/university-roles/applied-science

Key job responsibilities
As an Applied Science Intern, you will own the design and development of end-to-end systems. You’ll have the opportunity to write technical white papers, create roadmaps, and drive production-level projects that will support Amazon Science. You will work closely with Amazon scientists and other science interns to develop solutions and deploy them into production. You will have the opportunity to design new algorithms, models, or other technical solutions whilst experiencing Amazon’s customer-focused culture. The ideal intern must have the ability to work with diverse groups of people and cross-functional teams to solve complex business problems.

A day in the life
At Amazon, you will grow into the high-impact person you know you’re ready to be. Every day will be filled with developing new skills and achieving personal growth. How often can you say that your work changes the world? At Amazon, you’ll say it often. Join us and define tomorrow.
Some more benefits of an Amazon Science internship include:
• All of our internships offer a competitive stipend/salary
• Interns are paired with an experienced manager and mentor(s)
• Interns receive invitations to different events such as intern program initiatives or site events
• Interns can build their professional and personal network with other Amazon scientists
• Interns can potentially publish work at top-tier conferences each year

About the team
Applicants will be reviewed on a rolling basis and are assigned to teams aligned with their research interests and experience prior to interviews. Start dates are available throughout the year, and full-time internships can vary in length from 3 to 6 months. This role may be available across multiple locations in the EMEA region (Austria, Estonia, France, Germany, Ireland, Israel, Italy, Jordan, Luxembourg, Netherlands, Poland, Romania, Spain, South Africa, UAE, and UK). Please note these are not remote internships.
US, CA, San Francisco
The Amazon AGI SF Lab is focused on developing new foundational capabilities for enabling useful AI agents that can take actions in the digital and physical worlds. In other words, we’re enabling practical AI that can actually do things for us and make our customers more productive, empowered, and fulfilled. The lab is designed to empower AI researchers and engineers to make major breakthroughs with speed and focus toward this goal. Our philosophy combines the agility of a startup with the resources of Amazon. By keeping the team lean, we’re able to maximize the amount of compute per person. Each team in the lab has the autonomy to move fast and the long-term commitment to pursue high-risk, high-payoff research.

Key job responsibilities
- Develop multimodal Large Language Models (LLMs) to observe, model, and derive insights from manual workflows for automation
- Work in a joint scrum with engineers for rapid invention, develop automation agent systems, and take them to launch for millions of customers
- Collaborate with cross-functional teams of engineers, product managers, and scientists to identify and solve complex problems in GenAI
- Design and execute experiments to evaluate the performance of different algorithms and models, and iterate quickly to improve results
- Think big about the arc of development of GenAI over a multi-year horizon, and identify new opportunities to apply these technologies to solve real-world problems
- Communicate results and insights to both technical and non-technical audiences, including through presentations and written reports
- Mentor and guide junior scientists and engineers, and contribute to the overall growth and development of the team