Rohit Prasad, vice president and head scientist for Alexa AI, demonstrates interactive teaching by customers, a new Alexa capability announced last fall.

Alexa: The science must go on

Throughout the pandemic, the Alexa team has continued to invent on behalf of our customers.

COVID-19 has cost us precious lives and served as a harsh reminder that so much more needs to be done to prepare for unforeseen events. In these difficult times, we have also seen heroic efforts — from frontline health workers working night and day to take care of patients, to the rapid development of vaccines, to the delivery of groceries and essential items in the safest possible way given the circumstances.

Alexa’s communications capabilities are helping families connect with their loved ones during lockdown.

Alexa has also tried to help where it can. We rapidly added skills that provide information about resources for dealing with COVID-19. We donated Echo Shows and Echo Dots to healthcare providers, patients, and assisted-living facilities around the country, and Alexa’s communications capabilities — including new calling features (e.g., group calling), and the new Care Hub — are helping providers coordinate care and families connect with their loved ones during lockdown.

It has been just over a year since our schools closed down and we started working remotely. With our homes turned into offices and classrooms, one of the challenges has been keeping our kids motivated and on-task for remote learning. Skills such as the School Schedule Blueprint are helping parents like me manage their children’s remote learning and keep them excited about the future.

Despite the challenges of the pandemic, the Alexa team has shown incredible adaptability and grit, delivering scientific results that are already making a difference for our customers and will have long-lasting effects. Over the past 12 months, we have made advances in four thematic areas, making Alexa more

  1. natural and conversational: interactions with Alexa should be as free-flowing as interacting with another person, without requiring customers to use strict linguistic constructs to communicate with Alexa’s ever-growing set of skills. 
  2. self-learning and data efficient: Alexa’s intelligence should improve without requiring manually labeled data, and it should strive to learn directly from customers. 
  3. insightful and proactive: Alexa should assist and/or provide useful information to customers by anticipating their needs.
  4. trustworthy: Alexa should have attributes like those we cherish in trustworthy people, such as discretion, fairness, and ethical behavior.

Natural and conversational 

Accurate far-field automatic speech recognition (ASR) is critical for natural interactions with Alexa. We have continued to make advances in this area, and at Interspeech 2020, we presented 12 papers, including improvements in end-to-end ASR using the recurrent-neural-network-transducer (RNN-T) architecture. ASR advances, coupled with improvements in natural-language understanding (NLU), have reduced the worldwide error rate for Alexa by more than 24% in the past 12 months.

One of Alexa Speech’s Interspeech 2020 papers, “Rescore in a flash: compact, cache efficient hashing data structures for n-gram language models”, proposes a new data structure, DashHashLM, for encoding the probabilities of word sequences in language models with a minimal memory footprint.
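
The full data-structure design is in the paper; purely as a rough sketch of the general idea (hash each n-gram to a compact fingerprint and store quantized log-probabilities in a flat, cache-friendly table), a toy Python version might look like the following. The class and parameters below are hypothetical, not the paper's actual implementation.

```python
import hashlib

class CompactNGramTable:
    """Toy open-addressing table mapping n-grams to quantized log-probabilities."""
    def __init__(self, num_slots=1 << 16, quant_step=0.05):
        self.num_slots = num_slots
        self.quant_step = quant_step              # coarse quantization trades accuracy for memory
        self.fingerprints = [None] * num_slots    # 64-bit n-gram fingerprints
        self.quantized = [0] * num_slots          # quantized log-probabilities

    def _fingerprint(self, ngram):
        digest = hashlib.blake2b(" ".join(ngram).encode(), digest_size=8).digest()
        return int.from_bytes(digest, "little")

    def insert(self, ngram, logprob):
        fp = self._fingerprint(ngram)
        slot = fp % self.num_slots
        while self.fingerprints[slot] not in (None, fp):   # linear probing on collisions
            slot = (slot + 1) % self.num_slots
        self.fingerprints[slot] = fp
        self.quantized[slot] = round(logprob / self.quant_step)

    def lookup(self, ngram):
        fp = self._fingerprint(ngram)
        slot = fp % self.num_slots
        while self.fingerprints[slot] is not None:
            if self.fingerprints[slot] == fp:
                return self.quantized[slot] * self.quant_step
            slot = (slot + 1) % self.num_slots
        return None                                # unseen n-gram: back off to a lower order

table = CompactNGramTable()
table.insert(("play", "some", "music"), -1.2)
print(table.lookup(("play", "some", "music")))     # approximately -1.2
```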

Customers depend on Alexa’s ability to answer single-shot requests, but to continue to provide new, delightful experiences, we are teaching Alexa to accomplish complex goals that require multiturn dialogues. In February, we announced the general release of Alexa Conversations, a capability that makes it easy for developers to build skills that engage customers in dialogues. The developer simply provides APIs (application programming interfaces), a list of entity types invoked in the skill, and a small set of sample dialogues that illustrate interactions with the skill’s capabilities.

Alexa Conversations’ deep-learning-based dialogue manager takes care of the rest by predicting numerous alternate ways in which a customer might engage with the skill. Nearly 150 skills — such as iRobot Home and Art Museum — have now been built with Alexa Conversations, with another 100 under way, and our internal teams have launched capabilities such as Alexa Greetings (where Alexa answers the Ring doorbell on behalf of customers) and “what to read” with the same underlying capability.  
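
For illustration only, the three kinds of developer input might be sketched as plain Python data. This is a hypothetical pseudo-definition, not the actual Alexa Conversations authoring format, and the API and entity names are invented.

```python
# Hypothetical sketch of developer inputs: APIs, entity types, and sample dialogues.
skill_definition = {
    "apis": {
        # name -> (argument types, return type)
        "FindTickets": ({"museum": "MuseumName", "date": "Date"}, "TicketList"),
        "BookTickets": ({"tickets": "TicketList", "count": "Number"}, "Confirmation"),
    },
    "entity_types": ["MuseumName", "Date", "Number"],
    "sample_dialogues": [
        [
            ("User",  "Are there tickets for the art museum this Saturday?"),
            ("Alexa", "Yes, tickets are available. How many would you like?"),
            ("User",  "Two, please."),
            ("Alexa", "Done! Two tickets for Saturday are booked."),
        ],
    ],
}
```

From a handful of such samples, the dialogue manager generalizes to variations the developer never wrote down, such as out-of-order answers, corrections, and under- or over-specified requests.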

Further, to ensure that existing skills built without Alexa Conversations understand customer requests more accurately, we migrated hundreds of skills to deep neural networks (as opposed to conditional random fields). Migrated skills are seeing increases in understanding accuracy of 15% to 23% across locales. 

Alexa’s skills are ever expanding, with over 100,000 skills built worldwide by external developers. As that number has grown, discovering new skills has become a challenge. Even when customers know of a skill, they can have trouble remembering its name or how to interact with it. 

To make skills more discoverable and eliminate the need to say “Alexa, ask <skill X> to do <Y>,” we launched a deep-learning-based capability for routing utterances that do not have explicit mention of a skill’s name to relevant skills. Thousands of skills are now being discovered naturally, and in preview, they received an average of 15% more traffic. At last year’s International Conference on Acoustics, Speech, and Signal Processing (ICASSP), we presented a novel method for automatically labeling training data for Alexa’s skill selection model, which is crucial to improving utterance routing accuracy as the number of skills continues to grow.  

A constituency tree featuring syntactic-distance measures.
To make the prosody of Alexa's speech more natural, the Amazon Text-to-Speech team uses constituency trees to measure the syntactic distance (orange circles) between words of an utterance, a good indicator of where phrasing breaks or prosodic resets should occur.
Credit: Glynis Condon

As we’ve been improving Alexa’s understanding capabilities, our Text-to-Speech (TTS) synthesis team has been working to increase the naturalness of Alexa’s speech. We have developed prosodic models that enable Alexa to vary patterns of intonation and inflection to fit different conversational contexts. 
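
To make the idea in the figure above concrete, here is a small illustrative sketch, not the production TTS front end, that computes one common notion of syntactic distance: for each pair of adjacent words, how far above them their lowest common ancestor sits in the constituency tree. Larger values mark stronger constituent boundaries, where prosodic resets are more likely.

```python
def leaves_with_paths(tree, path=()):
    """Yield (word, path-of-child-indices) for each leaf of a nested (label, children) tree."""
    label, children = tree
    if isinstance(children, str):                  # leaf node: (POS tag, word)
        yield children, path
        return
    for i, child in enumerate(children):
        yield from leaves_with_paths(child, path + (i,))

def syntactic_distances(tree):
    leaves = list(leaves_with_paths(tree))
    dists = []
    for (_, p1), (_, p2) in zip(leaves, leaves[1:]):
        common = 0
        for a, b in zip(p1, p2):
            if a != b:
                break
            common += 1
        # distance = how far above the two words their lowest common ancestor sits
        dists.append(max(len(p1), len(p2)) - common)
    return dists

# (S (NP the dog) (VP chased (NP the cat)))
tree = ("S", [("NP", [("DT", "the"), ("NN", "dog")]),
              ("VP", [("VB", "chased"),
                      ("NP", [("DT", "the"), ("NN", "cat")])])])
words = [w for w, _ in leaves_with_paths(tree)]
print(list(zip(zip(words, words[1:]), syntactic_distances(tree))))
# largest distance falls between "dog" and "chased", the NP/VP boundary
```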

This is a first milestone on the path to contextual language generation and speech synthesis. Depending on the conversational context and the speaking attributes of the customer, Alexa will vary its response — both the words chosen and the speaking style, including prosody, stress, and intonation. We also made progress in detecting tone of voice, which can be an additional signal for adapting Alexa’s responses.

Humor is a critical element of human-like conversational abilities. However, recognizing humor and generating humorous responses is one of the most challenging tasks in conversational AI. University teams participating in the Alexa Prize socialbot challenge have made significant progress in this area by identifying opportunities to use humor in conversation and selecting humorous phrases and jokes that are contextually appropriate.

One of our teams is identifying humor in product reviews by detecting incongruity between product titles and questions asked by customers. For instance, the question “Does this make espresso?” might be reasonable when applied to a high-end coffee machine, but applied to a Swiss Army knife, it’s probably a joke. 
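
A toy sketch of that incongruity signal follows; the real system relies on learned semantic representations, but simple bag-of-words cosine similarity is enough to show the shape of the idea, with the threshold value purely illustrative.

```python
import math
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

def cosine(a, b):
    va, vb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def looks_like_joke(title, question, threshold=0.1):
    """Low title/question similarity hints at incongruity, and possibly a joke."""
    return cosine(title, question) < threshold

print(looks_like_joke("Espresso machine with milk frother", "Does this make espresso?"))   # False
print(looks_like_joke("Swiss Army pocket knife with 12 tools", "Does this make espresso?"))  # True
```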

We live in a multilingual and multicultural world, and this pandemic has made it even more important for us to connect across language barriers. In 2019, we launched a bilingual version of Alexa — i.e., customers could address the same device in US English or Spanish without asking Alexa to switch languages on every request. However, Alexa’s Spanish responses were in a different voice than its English responses.

By leveraging advances in neural text-to-speech (much the way we had used multilingual learning techniques to improve language understanding), we taught the original Alexa voice — which was based on English-only recordings — to speak perfectly accented U.S. Spanish. 

To further break down language barriers, in December we launched two-way language translation, which enables Alexa to act as an interpreter for customers speaking different languages. Alexa can now translate on the fly between English and six other languages on the same device.

In September 2020, I had the privilege of demonstrating natural turn-taking (NTT), a new capability that has the potential to make Alexa even more useful and delightful for our customers. With NTT, Alexa uses visual cues, in combination with acoustic and linguistic information, to determine whether a customer is addressing Alexa or other people in the household — even when there is no wake word. Our teams are working hard on bringing NTT to our customers later this year so that Alexa can participate in conversations just like a family member or a friend.  

Self-learning and data-efficient 

In AI, one definition of generalization is the ability to robustly handle novel situations and learn from them with minimal human supervision. Two years ago, we introduced the ability for Alexa to automatically correct errors in its understanding without requiring any manual labeling. This self-learning system uses implicit feedback (e.g., when a customer interrupts a response to rephrase a request) to automatically revise Alexa’s handling of requests that fail. This learning method automatically addresses 15% of defects, as quickly as a few hours after detection; with supervised learning, these defects would have taken weeks to address.
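
A highly simplified sketch of that implicit-feedback signal, not the production self-learning system, might look like the following: a failed request that is quickly followed by a similar, successful rephrase yields a candidate rewrite. The time window and similarity threshold are chosen here only for illustration.

```python
from difflib import SequenceMatcher

rewrites = {}  # failed utterance -> successful rephrase learned from implicit feedback

def observe(turn, next_turn, max_gap_seconds=20, min_similarity=0.6):
    """If a failed request is quickly followed by a similar, successful rephrase,
    remember the rephrase as a candidate rewrite for the failed request."""
    gap = next_turn["time"] - turn["time"]
    similar = SequenceMatcher(None, turn["text"], next_turn["text"]).ratio() >= min_similarity
    if turn["failed"] and not next_turn["failed"] and gap <= max_gap_seconds and similar:
        rewrites[turn["text"]] = next_turn["text"]

observe({"text": "what's the weather in sea tack", "failed": True,  "time": 0},
        {"text": "what's the weather in seatac",   "failed": False, "time": 7})
print(rewrites)
```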

Diagram depicting an example of paraphrase alignment
We won a best-paper award at last year's International Conference on Computational Linguistics for a self-learning system that finds the best mapping from a successful request to an unsuccessful one, then transfers the training labels automatically.
Credit: Glynis Condon

At December 2020’s International Conference on Computational Linguistics, our scientists won a best-paper award for a complementary approach to self-learning. Where the earlier system overwrites the outputs of Alexa’s NLU models, the newer system uses implicit feedback to create automatically labeled training examples for those models. This approach is particularly promising for the long tail of unusually phrased requests, and it can be used in conjunction with the existing self-learning system.

In parallel, we have been inventing methods that enable Alexa to add new capabilities, intents, and concepts with as little manually labeled data as possible — often by generalizing from one task to another. For example, in a paper at last year’s ACL Workshop on NLP for Conversational AI, we demonstrated the value of transfer learning from reading comprehension to other natural-language-processing tasks, resulting in the best published results on few-shot learning for dialogue state tracking in low-data regimes.

Similarly, at this year’s Spoken Language Technology conference, we showed how to combine two existing approaches to few-shot learning — prototypical networks and data augmentation — to quickly and accurately learn new intents.
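
The core of the prototypical-network idea is easy to sketch: embed the few labeled examples for each new intent, average them into per-intent prototypes, and assign a new utterance to the nearest prototype. In the toy version below, a hashed bag-of-words embedding stands in for the learned encoder, and the data-augmentation component is omitted.

```python
import hashlib
import numpy as np

def embed(utterance, dim=64):
    """Toy hashed bag-of-words embedding standing in for a learned encoder."""
    v = np.zeros(dim)
    for tok in utterance.lower().split():
        idx = int(hashlib.md5(tok.encode()).hexdigest(), 16) % dim
        v[idx] += 1.0
    return v / (np.linalg.norm(v) + 1e-8)

def prototypes(support):
    """Mean embedding per intent over its few labeled examples."""
    return {intent: np.mean([embed(u) for u in examples], axis=0)
            for intent, examples in support.items()}

def classify(utterance, protos):
    q = embed(utterance)
    return min(protos, key=lambda intent: np.linalg.norm(q - protos[intent]))

support = {
    "SetTimer":  ["set a timer for ten minutes", "start a five minute timer"],
    "PlayMusic": ["play some jazz", "put on relaxing music"],
}
print(classify("start a timer for twenty minutes", prototypes(support)))  # nearest prototype: SetTimer
```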

Human-like conversational abilities require common sense — something that is still elusive for conversational-AI services, despite the massive progress due to deep learning. We received the best-paper award at the Empirical Methods in Natural Language Processing (EMNLP) 2020 Workshop on Deep Learning Inside Out (DeeLIO) for our work on infusing commonsense knowledge graphs explicitly and implicitly into large pre-trained language models to give machines greater social intelligence. We will continue to build on such techniques to make interactions with Alexa more intuitive for our customers, without requiring a large quantity of annotated data. 

In December 2020, we launched a new feature that allows customers to teach Alexa new concepts. For instance, if a customer says, “Alexa, set the living room light to study mode”, Alexa might now respond, “I don't know what study mode is. Can you teach me?” Alexa extracts a definition from the customer’s answer, and when the customer later makes the same request — or a similar request — Alexa responds with the learned action. 

Alexa uses multiple deep-learning-based parsers to enable such explicit teaching. First, Alexa detects spans in requests that it has trouble understanding. Next, it engages in a clarification dialogue to learn the new concept. Thanks to this novel capability, customers are able to customize Alexa for their needs, and Alexa is learning thousands of new concepts in the smart-home domain every day, without any manual labeling. We will continue to build on this success and develop more self-learning techniques to make Alexa more useful and personal for our customers.
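
Purely as an illustration of the interaction pattern, not of the neural parsers themselves, the sketch below detects an unknown setting span, asks for a definition, stores it, and applies it to later requests; all names and the span-extraction rule are hypothetical.

```python
# Illustrative-only sketch of explicit concept teaching in a smart-home setting.
KNOWN_SETTINGS = {"on", "off", "dim", "bright"}
learned_concepts = {}  # e.g. "study mode" -> {"brightness": 60, "color": "white"}

def handle(request):
    # crude span extraction standing in for the neural span detector:
    # "set the <device> to <setting>"
    setting = request.rsplit(" to ", 1)[-1]
    if setting in KNOWN_SETTINGS:
        return f"OK, setting to {setting}."
    if setting in learned_concepts:
        return f"OK, applying {setting}: {learned_concepts[setting]}"
    return f"I don't know what {setting} is. Can you teach me?"

def teach(concept, definition):
    learned_concepts[concept] = definition

print(handle("set the living room light to study mode"))   # asks to be taught
teach("study mode", {"brightness": 60, "color": "white"})
print(handle("set the living room light to study mode"))   # applies the learned concept
```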

Insightful and proactive

Alexa-enabled ambient devices have revolutionized daily convenience, enabling us to get what we need simply by asking for it. However, the utility of these devices and endpoints does not need to be limited to customer-initiated requests. Instead, Alexa should anticipate customer needs and seamlessly assist in meeting those needs. Smart hunches, location-based reminders, and discovery of routines are a few ways in which Alexa is already helping customers.

Illustration of Alexa inferring that a customer asking about the weather at the beach may be planning a beach trip.
In this interaction, Alexa infers that a customer who asks about the weather at the beach may be interested in other information that could be useful for planning a beach trip.
Credit: Glynis Condon

Another way for Alexa to be more useful to our customers is to predict customers’ goals that span multiple disparate skills. For instance, if a customer asks, “How long does it take to steep tea?”, Alexa might answer, “Five minutes is a good place to start,” then follow up by asking, “Would you like me to set a timer for five minutes?” In 2020, we launched an initial version of Alexa’s ability to anticipate and complete multi-skill goals without any explicit preprogramming.

While this ability makes the complex seem simple, underneath it depends on multiple deep-learning models. A “trigger model” decides whether to predict the customer’s goal at all; if it decides it should, a second model identifies the skills that could handle the predicted goal, relying on information-theoretic analyses of input utterances together with subsidiary models that assess features such as whether the customer was trying to rephrase a prior command or whether the direct goal and the latent goal share entities or values.
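
Schematically, and with all names and scores invented for illustration, the composition might look like this: a trigger model gates whether to make a suggestion at all, and a ranking model then picks a follow-up skill, here preferring skills that share entities with the current request.

```python
# Schematic sketch, not Amazon's models, of composing latent-goal prediction.
def trigger_score(utterance, context):
    """Stand-in for the learned trigger model: should we suggest anything at all?"""
    # e.g., suppress suggestions if the customer is rephrasing a failed request
    return 0.0 if context.get("is_rephrase") else 0.8

def rank_skills(entities, candidates):
    """Stand-in for the suggestion model: prefer skills sharing entities with the request."""
    return max(candidates, key=lambda skill: len(entities & skill["accepts"]))

request = {"utterance": "how long does it take to steep tea",
           "entities": {"duration": "five minutes"}}
candidates = [{"name": "Timer",    "accepts": {"duration"}},
              {"name": "Shopping", "accepts": {"product"}}]

if trigger_score(request["utterance"], {"is_rephrase": False}) > 0.5:
    best = rank_skills(set(request["entities"]), candidates)
    print(f"Would you like me to open {best['name']}?")   # -> Timer
```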

Trustworthy

We have made significant advances in areas that are key to making Alexa more trusted by customers. In the field of privacy-preserving machine learning, for instance, we have been exploring differential privacy, a theoretical framework for evaluating the privacy protections offered by systems that generate aggregate statistics from individuals’ data. 

At the EMNLP 2020 Workshop on Privacy in Natural Language Processing, we presented a paper that proposes a new way to offer metric-differential-privacy assurances by adding so-called elliptical noise to training data for machine learning systems, and at this year’s Conference of the European Chapter of the Association for Computational Linguistics, we’ll present a technique for transforming texts that preserves their semantic content but removes potentially identifying information. Both methods significantly improve on the privacy protections afforded by older approaches while leaving the performance of the resulting systems unchanged.

A new approach to protecting privacy in machine learning systems that uses elliptical noise (right) rather than the conventional spherical noise (left) to perturb training data significantly improves privacy protections while leaving the performance of the resulting systems unchanged.
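
As a point of reference, the conventional spherical mechanism for metric differential privacy perturbs an embedding with noise whose density falls off exponentially with the size of the perturbation; the paper's contribution is to reshape that noise into an elliptical form better matched to the embedding space. The sketch below shows only the spherical baseline, with parameters chosen for illustration.

```python
import numpy as np

def metric_dp_perturb(embedding, epsilon, rng=None):
    """Add noise with density proportional to exp(-epsilon * ||z||) (spherical case)."""
    rng = rng or np.random.default_rng()
    d = embedding.shape[0]
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)
    radius = rng.gamma(shape=d, scale=1.0 / epsilon)   # radial density ~ r^(d-1) exp(-epsilon * r)
    return embedding + radius * direction

word_vec = np.random.default_rng(0).normal(size=300)
noisy = metric_dp_perturb(word_vec, epsilon=10.0)
print(float(np.linalg.norm(noisy - word_vec)))         # typical perturbation magnitude ~ d / epsilon
```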


We have also made Alexa’s answers to information-centric questions more trustworthy by expanding our knowledge graph and improving our neural semantic parsing and web-based information retrieval. If, however, the sources of information used to produce a knowledge graph encode harmful social biases — even as a matter of historical accident — the knowledge graph may encode them as well. In a pair of papers presented last year, our scientists devised techniques for both identifying and remediating instances of bias in knowledge graphs, to help ensure that those biases don’t leak into Alexa’s answers to questions.

A two-dimensional representation of our method for measuring bias in knowledge graph embeddings.
A two-dimensional representation of the method for measuring bias in knowledge graph embeddings that we presented last year. In each diagram, the blue dots labeled person1 indicate the shift in an embedding as we tune its parameters. The orange arrows represent relation vectors and the orange dots the sums of those vectors and the embeddings. As we shift the gender relation toward maleness, the profession relation shifts away from nurse and closer to doctor, indicating gender bias.
Credit: Glynis Condon
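
The probe in the figure can be illustrated with a toy TransE-style scoring function, where a triple (head, relation, tail) is plausible when head + relation lands near tail. All vectors below are synthetic and constructed to exhibit the effect; they are not from our actual knowledge graph embeddings.

```python
import numpy as np

def fit(head, relation, tail):
    """TransE-style plausibility: higher (less negative) means a better fit."""
    return -np.linalg.norm(head + relation - tail)

rng = np.random.default_rng(1)
person   = rng.normal(size=8)
gender   = rng.normal(size=8)          # relation vector pointing toward "male"
has_prof = rng.normal(size=8)          # "has profession" relation
# synthetic profession embeddings, deliberately offset along the gender direction
doctor   = person + has_prof + 0.6 * gender + 0.05 * rng.normal(size=8)
nurse    = person + has_prof - 0.6 * gender + 0.05 * rng.normal(size=8)

for shift in (0.0, 0.5, 1.0):          # push the person embedding toward "male"
    p = person + shift * gender
    print(shift, "doctor:", round(fit(p, has_prof, doctor), 2),
                 "nurse:",  round(fit(p, has_prof, nurse), 2))
# as the shift grows, the doctor fit improves while the nurse fit degrades,
# the signature of gender bias described in the caption above
```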

Similarly, the language models that many speech recognition and natural-language-understanding applications depend on are trained on corpora of publicly available texts; if those data reflect biases, so will the resulting models. At the recent ACM Conference on Fairness, Accountability, and Transparency, Alexa AI scientists presented a new data set that can be used to test language models for bias and a new metric for quantitatively evaluating the test results.

Still, we recognize that a lot more needs to be done in AI in the areas of fairness and ethics, and to that end, partnership with universities and other dedicated research organizations can be a force multiplier. As a case in point, our collaboration with the National Science Foundation to accelerate research on fairness in AI recently entered its second year, with a new round of grant recipients named in February 2021.

And in January 2021, we announced the creation of the Center for Secure and Trusted Machine Learning, a collaboration with the University of Southern California that will support USC and Amazon researchers in the development of novel approaches to privacy-preserving machine learning.

Strengthening the research community

I am particularly proud that, despite the effort required to bring all these advances to fruition, our scientists have remained actively engaged with the broader research community in many other areas. To choose just a few examples:

  • In August, we announced the winners of the third instance of the Alexa Prize Grand Challenge to develop conversational-AI systems, or socialbots, and in September, we opened registration for the fourth instance. Earlier this month, we announced another track of research for Alexa Prize called the TaskBot Challenge, in which university teams will compete to develop multimodal agents that assist customers in completing tasks requiring multiple steps and decisions.
  • In September, we announced the creation of the Columbia Center of Artificial Intelligence Technology, a collaboration with Columbia Engineering that will be a hub of research, education, and outreach programs.
  • In October, we launched the DialoGLUE challenge, together with a set of benchmark models, to encourage research on conversational generalizability, or the ability of dialogue agents trained on one task to adapt easily to new tasks.

Come work with us

Amazon is looking for data scientists, research scientists, applied scientists, interns, and more. Check out our careers page to find all of the latest job listings around the world.

We are grateful for the amazing work of our fellow researchers in the medical, pharmaceutical, and biotech communities who have developed COVID-19 vaccines in record time.

Thanks to their scientific contributions, we now have the strong belief that we will prevail against this pandemic. 

I am looking forward to the end of this pandemic and the chance to work even more closely with the Alexa teams and the broader scientific community to make further advances in conversational AI and enrich our customers’ lives. 
