Rohit Prasad, vice president and head scientist for Alexa AI, demonstrates interactive teaching by customers, a new Alexa capability announced last fall.

Alexa: The science must go on

Throughout the pandemic, the Alexa team has continued to invent on behalf of our customers.

COVID-19 has cost us precious lives and served a harsh reminder that so much more needs to be done to prepare for unforeseen events. In these difficult times, we have also seen heroic efforts — from frontline health workers working night and day to take care of patients, to rapid development of vaccines, to delivery of groceries and essential items in the safest possible way given the circumstances.

Alexa’s communications capabilities are helping families connect with their loved ones during lockdown.

Alexa has also tried to help where it can. We rapidly added skills that provide information about resources for dealing with COVID-19. We donated Echo Shows and Echo Dots to healthcare providers, patients, and assisted-living facilities around the country, and Alexa’s communications capabilities — including new calling features (e.g., group calling), and the new Care Hub — are helping providers coordinate care and families connect with their loved ones during lockdown.

It has been just over a year since our schools closed down and we started working remotely. With our homes turned into offices and classrooms, one of the challenges has been keeping our kids motivated and on-task for remote learning. Skills such as the School Schedule Blueprint are helping parents like me manage their children’s remote learning and keep them excited about the future.

Despite the challenges of the pandemic, the Alexa team has shown incredible adaptability and grit, delivering scientific results that are already making a difference for our customers and will have long-lasting effects. Over the past 12 months, we have made advances in four thematic areas, making Alexa more

  1. natural and conversational: interactions with Alexa should be as free-flowing as interacting with another person, without requiring customers to use strict linguistic constructs to communicate with Alexa’s ever-growing set of skills. 
  2. self-learning and data efficient: Alexa’s intelligence should improve without requiring manually labeled data, and it should strive to learn directly from customers. 
  3. insightful and proactive: Alexa should assist and/or provide useful information to customers by anticipating their needs.
  4. trustworthy: Alexa should have attributes like those we cherish in trustworthy people, such as discretion, fairness, and ethical behavior.

Natural and conversational 

Accurate far-field automatic speech recognition (ASR) is critical for natural interactions with Alexa. We have continued to make advances in this area, and at Interspeech 2020, we presented 12 papers, including improvements in end-to-end ASR using the recurrent-neural-network-transducer (RNN-T) architecture. ASR advances, coupled with improvements in natural-language understanding (NLU), have reduced the worldwide error rate for Alexa by more than 24% in the past 12 months.

One of Alexa Speech’s Interspeech 2020 papers, “Rescore in a flash: compact, cache efficient hashing data structures for n-gram language models”, proposes a new data structure, DashHashLM, for encoding the probabilities of word sequences in language models with a minimal memory footprint.
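The core idea — compact hash tables keyed by hashed n-grams, storing quantized probabilities — can be illustrated with a toy sketch. All names and the quantization scheme here are mine, not the paper's; DashHashLM's actual cache-efficient open-addressing structures are far more memory-frugal than a Python dictionary:

```python
import hashlib

class TinyHashLM:
    """Toy n-gram probability store: fixed-width hashed keys and
    8-bit quantized log-probabilities. Illustrative only."""

    def __init__(self, min_logp=-10.0):
        self.table = {}        # 8-byte hash -> quantized log-prob
        self.min_logp = min_logp

    def _key(self, words):
        # collisions are possible but rare; trading exactness for space
        return hashlib.md5(" ".join(words).encode()).digest()[:8]

    def _quantize(self, logp):
        # map [min_logp, 0] onto one byte (0..255)
        frac = max(0.0, min(1.0, logp / self.min_logp))
        return round((1.0 - frac) * 255)

    def add(self, words, logp):
        self.table[self._key(words)] = self._quantize(logp)

    def logp(self, words):
        q = self.table.get(self._key(words))
        if q is None:
            return self.min_logp   # unseen n-gram: back off to a floor
        return (1.0 - q / 255) * self.min_logp

lm = TinyHashLM()
lm.add(("turn", "on", "the"), -1.2)
lm.logp(("turn", "on", "the"))   # recovered to within quantization error
```

Storing a hash instead of the words themselves, and one byte instead of a float, is what shrinks the memory footprint; the cost is a small, analyzable collision and quantization error.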

Customers depend on Alexa’s ability to answer single-shot requests, but to continue to provide new, delightful experiences, we are teaching Alexa to accomplish complex goals that require multiturn dialogues. In February, we announced the general release of Alexa Conversations, a capability that makes it easy for developers to build skills that engage customers in dialogues. The developer simply provides APIs (application programming interfaces), a list of entity types invoked in the skill, and a small set of sample dialogues that illustrate interactions with the skill’s capabilities.

Alexa Conversations’ deep-learning-based dialogue manager takes care of the rest by predicting numerous alternate ways in which a customer might engage with the skill. Nearly 150 skills — such as iRobot Home and Art Museum — have now been built with Alexa Conversations, with another 100 under way, and our internal teams have launched capabilities such as Alexa Greetings (where Alexa answers the Ring doorbell on behalf of customers) and “what to read” with the same underlying capability.  

Further, to ensure that existing skills built without Alexa Conversations understand customer requests more accurately, we migrated hundreds of skills to deep neural networks (as opposed to conditional random fields). Migrated skills are seeing increases in understanding accuracy of 15% to 23% across locales. 

Alexa’s skills are ever expanding, with over 100,000 skills built worldwide by external developers. As that number has grown, discovering new skills has become a challenge. Even when customers know of a skill, they can have trouble remembering its name or how to interact with it. 

To make skills more discoverable and eliminate the need to say “Alexa, ask <skill X> to do <Y>,” we launched a deep-learning-based capability that routes utterances that don’t explicitly mention a skill’s name to relevant skills. Thousands of skills are now being discovered naturally, and in preview, they received an average of 15% more traffic. At last year’s International Conference on Acoustics, Speech, and Signal Processing (ICASSP), we presented a novel method for automatically labeling training data for Alexa’s skill selection model, which is crucial to improving utterance routing accuracy as the number of skills continues to grow.

A constituency tree featuring syntactic-distance measures.
To make the prosody of Alexa's speech more natural, the Amazon Text-to-Speech team uses constituency trees to measure the syntactic distance (orange circles) between words of an utterance, a good indicator of where phrasing breaks or prosodic resets should occur.
Credit: Glynis Condon

As we’ve been improving Alexa’s understanding capabilities, our Text-to-Speech (TTS) synthesis team has been working to increase the naturalness of Alexa’s speech. We have developed prosodic models that enable Alexa to vary patterns of intonation and inflection to fit different conversational contexts. 
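The syntactic-distance measure in the figure above can be sketched on a toy constituency tree: the distance between two adjacent words is the height of the smallest constituent spanning both, and large distances are good candidates for phrasing breaks. This is an illustrative reimplementation, not the TTS team's code:

```python
def syntactic_distances(tree):
    """Distance between adjacent words = height of the smallest
    constituent spanning both. `tree` is nested tuples with word
    strings as leaves — a toy stand-in for a real constituency parse."""
    spans = []   # (height, first_leaf_index, last_leaf_index)

    def walk(node, start):
        if isinstance(node, str):            # a leaf word
            return start + 1, 0
        idx, height = start, 0
        for child in node:
            idx, child_h = walk(child, idx)
            height = max(height, child_h)
        spans.append((height + 1, start, idx - 1))
        return idx, height + 1

    n_words, _ = walk(tree, 0)
    return [min(h for h, s, e in spans if s <= i and e >= i + 1)
            for i in range(n_words - 1)]

# ((the (old dog)) (barked)): four words, three adjacent pairs
tree = (("the", ("old", "dog")), ("barked",))
syntactic_distances(tree)   # → [2, 1, 3]
```

Here the largest distance falls between “dog” and “barked” — the boundary between subject and predicate, and a plausible location for a prosodic reset.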

This is a first milestone on the path to contextual language generation and speech synthesis. Depending on the conversational context and the speaking attributes of the customer, Alexa will vary its response — both the words chosen and the speaking style, including prosody, stress, and intonation. We also made progress in detecting tone of voice, which can be an additional signal for adapting Alexa’s responses.

Humor is a critical element of human-like conversational abilities. However, recognizing humor and generating humorous responses is one of the most challenging tasks in conversational AI. University teams participating in the Alexa Prize socialbot challenge have made significant progress in this area by identifying opportunities to use humor in conversation and selecting humorous phrases and jokes that are contextually appropriate.

One of our teams is identifying humor in product reviews by detecting incongruity between product titles and questions asked by customers. For instance, the question “Does this make espresso?” might be reasonable when applied to a high-end coffee machine, but applied to a Swiss Army knife, it’s probably a joke. 
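One simple way to operationalize that incongruity — purely illustrative, with a tiny hand-built lexicon standing in for learned product-category models — is to score how often the question's content terms belong to a different category than the product title's:

```python
def incongruity(title_terms, question_terms, lexicon):
    """Toy incongruity score: the fraction of the question's content
    terms whose typical product category (per a hand-built lexicon)
    mismatches the title's category. Illustrative only; a learned
    model would replace the lexicon."""
    title_cat = next((lexicon[t] for t in title_terms if t in lexicon), None)
    q_cats = [lexicon[t] for t in question_terms if t in lexicon]
    if title_cat is None or not q_cats:
        return 0.0   # no evidence either way
    return sum(c != title_cat for c in q_cats) / len(q_cats)

LEXICON = {"espresso": "coffee", "brew": "coffee", "grind": "coffee",
           "knife": "tools", "blade": "tools"}

# a high score suggests the question is probably a joke
incongruity(["swiss", "army", "knife"], ["does", "this", "make", "espresso"], LEXICON)
```

The same question scored against an espresso machine's title yields zero incongruity — exactly the asymmetry described above.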

We live in a multilingual and multicultural world, and this pandemic has made it even more important for us to connect across language barriers. In 2019, we launched a bilingual version of Alexa — i.e., customers could address the same device in US English or Spanish without asking Alexa to switch languages on every request. However, the Spanish responses from Alexa were in a different voice than the English responses.

By leveraging advances in neural text-to-speech (much the way we had used multilingual learning techniques to improve language understanding), we taught the original Alexa voice — which was based on English-only recordings — to speak perfectly accented U.S. Spanish. 

To further break down language barriers, in December we launched two-way language translation, which enables Alexa to act as an interpreter for customers speaking different languages. Alexa can now translate on the fly between English and six other languages on the same device.

In September 2020, I had the privilege of demonstrating natural turn-taking (NTT), a new capability that has the potential to make Alexa even more useful and delightful for our customers. With NTT, Alexa uses visual cues, in combination with acoustic and linguistic information, to determine whether a customer is addressing Alexa or other people in the household — even when there is no wake word. Our teams are working hard on bringing NTT to our customers later this year so that Alexa can participate in conversations just like a family member or a friend.  

Self-learning and data-efficient 

In AI, one definition of generalization is the ability to robustly handle novel situations and learn from them with minimal human supervision. Two years ago, we introduced the ability for Alexa to automatically correct errors in its understanding without requiring any manual labeling. This self-learning system uses implicit feedback (e.g., when a customer interrupts a response to rephrase a request) to automatically revise Alexa’s handling of requests that fail. This learning method is automatically addressing 15% of defects, as quickly as a few hours after detection; with supervised learning, these defects would have taken weeks to address.
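The implicit-feedback signal can be sketched as follows. The utterances, threshold, and string-similarity measure are all illustrative stand-ins for the production system's learned models:

```python
from difflib import SequenceMatcher

def detect_rephrase(failed, followup, barge_in, sim_threshold=0.6):
    """Toy implicit-feedback detector: if the customer barged in on
    Alexa's response and the follow-up is lexically similar to the
    failed request, treat the pair as (defect, correction)."""
    sim = SequenceMatcher(None, failed, followup).ratio()
    return barge_in and sim >= sim_threshold

# hypothetical example: the first request failed, the customer
# interrupted and rephrased, and the rephrase succeeded
failed = "play malgudi days"
fixed = "play my lofi days"

rewrites = {}
if detect_rephrase(failed, fixed, barge_in=True):
    rewrites[failed] = fixed   # future failing requests get rewritten
```

The learned mapping then repairs the request before it ever reaches the downstream models — no manual labeling required.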

Diagram depicting example of paraphrase alignment
We won a best-paper award at last year's International Conference on Computational Linguistics for a self-learning system that finds the best mapping from a successful request to an unsuccessful one, then transfers the training labels automatically.
Credit: Glynis Condon

At December 2020’s International Conference on Computational Linguistics, our scientists won a best-paper award for a complementary approach to self-learning. Where the earlier system overwrites the outputs of Alexa’s NLU models, the newer system uses implicit feedback to create automatically labeled training examples for those models. This approach is particularly promising for the long tail of unusually phrased requests, and it can be used in conjunction with the existing self-learning system.

In parallel, we have been inventing methods that enable Alexa to add new capabilities, intents, and concepts with as little manually labeled data as possible — often by generalizing from one task to another. For example, in a paper at last year’s ACL Workshop on NLP for Conversational AI, we demonstrated the value of transfer learning from reading comprehension to other natural-language-processing tasks, resulting in the best published results on few-shot learning for dialogue state tracking in low-data regimes.

Similarly, at this year’s Spoken Language Technology conference, we showed how to combine two existing approaches to few-shot learning — prototypical networks and data augmentation — to quickly and accurately learn new intents.
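The inference side of a prototypical network is simple enough to sketch: each intent's prototype is the mean of its few support embeddings (which data augmentation — e.g., paraphrasing the support utterances before encoding — simply enlarges), and a query is assigned to the nearest prototype. The two-dimensional vectors below are stand-ins for sentence-encoder outputs:

```python
import numpy as np

def prototype_classify(support, query):
    """Prototypical-network inference: average each intent's support
    embeddings into a prototype, then assign the query embedding to
    the nearest prototype by Euclidean distance."""
    protos = {intent: np.mean(vecs, axis=0) for intent, vecs in support.items()}
    return min(protos, key=lambda i: np.linalg.norm(query - protos[i]))

# two support examples per intent — the few-shot regime
support = {
    "set_timer":  [np.array([1.0, 0.1]), np.array([0.9, 0.0])],
    "play_music": [np.array([0.0, 1.0]), np.array([0.1, 0.9])],
}
prototype_classify(support, np.array([0.8, 0.2]))   # → "set_timer"
```

Because adding a new intent only requires averaging a handful of embeddings, no retraining of the encoder is needed to support it.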

Human-like conversational abilities require common sense — something that is still elusive for conversational-AI services, despite the massive progress due to deep learning. We received the best-paper award at the Empirical Methods in Natural Language Processing (EMNLP) 2020 Workshop on Deep Learning Inside Out (DeeLIO) for our work on infusing commonsense knowledge graphs explicitly and implicitly into large pre-trained language models to give machines greater social intelligence. We will continue to build on such techniques to make interactions with Alexa more intuitive for our customers, without requiring a large quantity of annotated data. 

In December 2020, we launched a new feature that allows customers to teach Alexa new concepts. For instance, if a customer says, “Alexa, set the living room light to study mode”, Alexa might now respond, “I don't know what study mode is. Can you teach me?” Alexa extracts a definition from the customer’s answer, and when the customer later makes the same request — or a similar request — Alexa responds with the learned action. 

Alexa uses multiple deep-learning-based parsers to enable such explicit teaching. First, Alexa detects spans in requests that it has trouble understanding. Next, it engages in a clarification dialogue to learn the new concept. Thanks to this novel capability, customers are able to customize Alexa for their needs, and Alexa is learning thousands of new concepts in the smart-home domain every day, without any manual labeling. We will continue to build on this success and develop more self-learning techniques to make Alexa more useful and personal for our customers.
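A toy version of that teach-and-reuse loop looks like the following; dictionary lookups stand in for Alexa's neural span-detection and parsing models, and the concept names are illustrative:

```python
class ConceptStore:
    """Toy explicit-teaching loop: an unknown concept triggers a
    clarification question; the customer's answer is stored and
    reused on later requests."""

    def __init__(self, known):
        self.defs = dict(known)

    def handle(self, concept, ask_customer):
        if concept not in self.defs:
            # clarification dialogue: ask for a definition, then learn it
            question = f"I don't know what {concept} is. Can you teach me?"
            self.defs[concept] = ask_customer(question)
        return self.defs[concept]

store = ConceptStore({"movie mode": {"brightness": 20}})
# first request: Alexa must ask; the lambda plays the customer
action = store.handle("study mode", lambda q: {"brightness": 80})
# later request: answered from the learned definition, no question asked
store.handle("study mode", lambda q: None)
```

The learned definition is per-customer state, which is what makes the resulting behavior feel personalized rather than preprogrammed.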

Insightful and proactive

Alexa-enabled ambient devices have revolutionized daily convenience, enabling us to get what we need simply by asking for it. However, the utility of these devices and endpoints does not need to be limited to customer-initiated requests. Instead, Alexa should anticipate customer needs and seamlessly assist in meeting those needs. Smart hunches, location-based reminders, and discovery of routines are a few ways in which Alexa is already helping customers.

Illustration of Alexa inferring a customer asking about weather at the beach may be planning a beach trip.
In this interaction, Alexa infers that a customer who asks about the weather at the beach may be interested in other information that could be useful for planning a beach trip.
Credit: Glynis Condon

Another way for Alexa to be more useful to our customers is to predict customers’ goals that span multiple disparate skills. For instance, if a customer asks, “How long does it take to steep tea?”, Alexa might answer, “Five minutes is a good place to start,” then follow up by asking, “Would you like me to set a timer for five minutes?” In 2020, we launched an initial version of Alexa’s ability to anticipate and complete multi-skill goals without any explicit preprogramming.

While this ability makes the complex seem simple, under the hood it depends on multiple deep-learning models. A “trigger model” decides whether to predict the customer’s goal at all. If it decides it should, a second model identifies a skill to handle the predicted goal, drawing on information-theoretic analyses of input utterances together with subsidiary models that assess features such as whether the customer was trying to rephrase a prior command, or whether the direct goal and the latent goal have common entities or values.
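That two-stage structure can be sketched as follows, with trivial stand-ins for the trigger and ranking models; in production, both are learned models that use signals such as shared entities ("five minutes") linking the direct goal to the latent one:

```python
def latent_goal(utterance, trigger, ranker, threshold=0.7):
    """Sketch of the two-stage pipeline: a trigger model gates the
    suggestion (favoring precision — stay quiet when unsure), and
    only then does a ranking model pick a skill and a follow-up."""
    if trigger(utterance) < threshold:
        return None
    return ranker(utterance)

# trivial illustrative stand-ins for the learned models
trigger = lambda u: 0.9 if "how long" in u else 0.1
ranker = lambda u: ("Timers", "Would you like me to set a timer for five minutes?")

latent_goal("how long does it take to steep tea", trigger, ranker)
```

Gating the suggestion behind a separate trigger model keeps false-positive follow-ups rare, which matters more than recall for a proactive feature.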

Trustworthy

We have made significant advances in areas that are key to making Alexa more trusted by customers. In the field of privacy-preserving machine learning, for instance, we have been exploring differential privacy, a theoretical framework for evaluating the privacy protections offered by systems that generate aggregate statistics from individuals’ data. 

At the EMNLP 2020 Workshop on Privacy in Natural Language Processing, we presented a paper that proposes a new way to offer metric-differential-privacy assurances by adding so-called elliptical noise to training data for machine learning systems, and at this year’s Conference of the European Chapter of the Association for Computational Linguistics, we’ll present a technique for transforming texts that preserves their semantic content but removes potentially identifying information. Both methods significantly improve on the privacy protections afforded by older approaches while leaving the performance of the resulting systems unchanged.
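As a rough illustration of the elliptical-noise idea: sample a uniformly random direction and a Gamma-distributed magnitude (as in planar Laplace mechanisms for metric differential privacy), then stretch the noise per dimension so its cloud is elliptical rather than spherical. This simplified construction is mine, not the paper's exact mechanism:

```python
import numpy as np

def elliptical_noise(embedding, scales, epsilon, rng):
    """Perturb an embedding with direction-uniform noise whose
    magnitude is Gamma-distributed, stretched per dimension by
    `scales` to make the noise elliptical. Illustrative only."""
    d = len(embedding)
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)   # uniform on the sphere
    radius = rng.gamma(shape=d, scale=1.0 / epsilon)
    return np.asarray(embedding) + direction * radius * np.asarray(scales)

# stretch noise along dimension 0, squeeze it along dimensions 1 and 2
rng = np.random.default_rng(0)
noisy = elliptical_noise(np.zeros(3), scales=[1.0, 0.2, 0.2], epsilon=5.0, rng=rng)
```

Matching the noise shape to the data's geometry is the intuition behind the improved privacy-utility trade-off described above.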

A new approach to protecting privacy in machine learning systems that uses elliptical noise (right) rather than the conventional spherical noise (left) to perturb training data significantly improves privacy protections while leaving the performance of the resulting systems unchanged.


We have also made Alexa’s answers to information-centric questions more trustworthy by expanding our knowledge graph and improving our neural semantic parsing and web-based information retrieval. If, however, the sources of information used to produce a knowledge graph encode harmful social biases — even as a matter of historical accident — the knowledge graph may encode them as well. In a pair of papers presented last year, our scientists devised techniques for both identifying and remediating instances of bias in knowledge graphs, to help ensure that those biases don’t leak into Alexa’s answers to questions.

A two-dimensional representation of the method for measuring bias in knowledge graph embeddings that we presented last year. In each diagram, the blue dots labeled person1 indicate the shift in an embedding as we tune its parameters. The orange arrows represent relation vectors and the orange dots the sums of those vectors and the embeddings. As we shift the gender relation toward maleness, the profession relation shifts away from nurse and closer to doctor, indicating gender bias.
Credit: Glynis Condon

Similarly, the language models that many speech recognition and natural-language-understanding applications depend on are trained on corpora of publicly available texts; if those data reflect biases, so will the resulting models. At the recent ACM Conference on Fairness, Accountability, and Transparency, Alexa AI scientists presented a new data set that can be used to test language models for bias and a new metric for quantitatively evaluating the test results.

Still, we recognize that a lot more needs to be done in AI in the areas of fairness and ethics, and to that end, partnership with universities and other dedicated research organizations can be a force multiplier. As a case in point, our collaboration with the National Science Foundation to accelerate research on fairness in AI recently entered its second year, with a new round of grant recipients named in February 2021.

And in January 2021, we announced the creation of the Center for Secure and Trusted Machine Learning, a collaboration with the University of Southern California that will support USC and Amazon researchers in the development of novel approaches to privacy-preserving machine learning.

Strengthening the research community

I am particularly proud that, despite the effort required to bring all these advances to fruition, our scientists have remained actively engaged with the broader research community in many other areas. To choose just a few examples:

  • In August, we announced the winners of the third instance of the Alexa Prize Grand Challenge to develop conversational-AI systems, or socialbots, and in September, we opened registration for the fourth instance. Earlier this month, we announced another track of research for Alexa Prize called the TaskBot Challenge, in which university teams will compete to develop multimodal agents that assist customers in completing tasks requiring multiple steps and decisions.
  • In September, we announced the creation of the Columbia Center of Artificial Intelligence Technology, a collaboration with Columbia Engineering that will be a hub of research, education, and outreach programs.
  • In October, we launched the DialoGLUE challenge, together with a set of benchmark models, to encourage research on conversational generalizability, or the ability of dialogue agents trained on one task to adapt easily to new tasks.

Come work with us

Amazon is looking for data scientists, research scientists, applied scientists, interns, and more. Check out our careers page to find all of the latest job listings around the world.

We are grateful for the amazing work of our fellow researchers in the medical, pharmaceutical, and biotech communities who have developed COVID-19 vaccines in record time.

Thanks to their scientific contributions, we now have the strong belief that we will prevail against this pandemic. 

I am looking forward to the end of this pandemic and the chance to work even more closely with the Alexa teams and the broader scientific community to make further advances in conversational AI and enrich our customers’ lives. 

Research areas

Related content

IN, HR, Gurugram
Our customers have immense faith in our ability to deliver packages timely and as expected. A well planned network seamlessly scales to handle millions of package movements a day. It has monitoring mechanisms that detect failures before they even happen (such as predicting network congestion, operations breakdown), and perform proactive corrective actions. When failures do happen, it has inbuilt redundancies to mitigate impact (such as determine other routes or service providers that can handle the extra load), and avoids relying on single points of failure (service provider, node, or arc). Finally, it is cost optimal, so that customers can be passed the benefit from an efficiently set up network. Amazon Shipping is hiring Applied Scientists to help improve our ability to plan and execute package movements. As an Applied Scientist in Amazon Shipping, you will work on multiple challenging machine learning problems spread across a wide spectrum of business problems. You will build ML models to help our transportation cost auditing platforms effectively audit off-manifest (discrepancies between planned and actual shipping cost). You will build models to improve the quality of financial and planning data by accurately predicting ship cost at a package level. Your models will help forecast the packages required to be pick from shipper warehouses to reduce First Mile shipping cost. Using signals from within the transportation network (such as network load, and velocity of movements derived from package scan events) and outside (such as weather signals), you will build models that predict delivery delay for every package. These models will help improve buyer experience by triggering early corrective actions, and generating proactive customer notifications. Your role will require you to demonstrate Think Big and Invent and Simplify, by refining and translating Transportation domain-related business problems into one or more Machine Learning problems. 
You will use techniques from a wide array of machine learning paradigms, such as supervised, unsupervised, semi-supervised and reinforcement learning. Your model choices will include, but not be limited to, linear/logistic models, tree based models, deep learning models, ensemble models, and Q-learning models. You will use techniques such as LIME and SHAP to make your models interpretable for your customers. You will employ a family of reusable modelling solutions to ensure that your ML solution scales across multiple regions (such as North America, Europe, Asia) and package movement types (such as small parcel movements and truck movements). You will partner with Applied Scientists and Research Scientists from other teams in US and India working on related business domains. Your models are expected to be of production quality, and will be directly used in production services. You will work as part of a diverse data science and engineering team comprising of other Applied Scientists, Software Development Engineers and Business Intelligence Engineers. You will participate in the Amazon ML community by authoring scientific papers and submitting them to Machine Learning conferences. You will mentor Applied Scientists and Software Development Engineers having a strong interest in ML. You will also be called upon to provide ML consultation outside your team for other problem statements. If you are excited by this charter, come join us!
US, MA, Boston
The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Senior Applied Scientist with a strong deep learning background, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As a Senior Applied Scientist with the AGI team, you will work with talented peers to lead the development of novel algorithms and modeling techniques, to advance the state of the art with LLMs. Your work will directly impact our customers in the form of products and services that make use of speech and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate advances in generative artificial intelligence (GenAI). About the team The AGI team has a mission to push the envelope in LLMs and multimodal systems, in order to provide the best-possible experience for our customers.
IN, KA, Bengaluru
The Amazon Alexa AI team in India is seeking a talented, self-driven Applied Scientist to work on prototyping, optimizing, and deploying ML algorithms within the realm of Generative AI. Key responsibilities include: - Research, experiment and build Proof Of Concepts advancing the state of the art in AI & ML for GenAI. - Collaborate with cross-functional teams to architect and execute technically rigorous AI projects. - Thrive in dynamic environments, adapting quickly to evolving technical requirements and deadlines. - Engage in effective technical communication (written & spoken) with coordination across teams. - Conduct thorough documentation of algorithms, methodologies, and findings for transparency and reproducibility. - Publish research papers in internal and external venues of repute - Support on-call activities for critical issues Basic Qualifications: - Master’s or PhD in computer science, statistics or a related field - 2-7 years experience in deep learning, machine learning, and data science. - Proficiency in coding and software development, with a strong focus on machine learning frameworks. - Experience in Python, or another language; command line usage; familiarity with Linux and AWS ecosystems. - Understanding of relevant statistical measures such as confidence intervals, significance of error measurements, development and evaluation data sets, etc. - Excellent communication skills (written & spoken) and ability to collaborate effectively in a distributed, cross-functional team setting. - Papers published in AI/ML venues of repute Preferred Qualifications: - Track record of diving into data to discover hidden patterns and conducting error/deviation analysis - Ability to develop experimental and analytic plans for data modeling processes, use of strong baselines, ability to accurately determine cause and effect relations - The motivation to achieve results in a fast-paced environment. 
- Exceptional level of organization and strong attention to detail - Comfortable working in a fast paced, highly collaborative, dynamic work environment
GB, London
Are you looking to work at the forefront of Machine Learning and AI? Would you be excited to apply cutting edge Generative AI algorithms to solve real world problems with significant impact? The AWS Industries Team at AWS helps AWS customers implement Generative AI solutions and realize transformational business opportunities for AWS customers in the most strategic industry verticals. This is a team of data scientists, engineers, and architects working step-by-step with customers to build bespoke solutions that harness the power of generative AI. The team helps customers imagine and scope the use cases that will create the greatest value for their businesses, select and train and fine tune the right models, define paths to navigate technical or business challenges, develop proof-of-concepts, and build applications to launch these solutions at scale. The AWS Industries team provides guidance and implements best practices for applying generative AI responsibly and cost efficiently. You will work directly with customers and innovate in a fast-paced organization that contributes to game-changing projects and technologies. You will design and run experiments, research new algorithms, and find new ways of optimizing risk, profitability, and customer experience. In this Data Scientist role you will be capable of using GenAI and other techniques to design, evangelize, and implement and scale cutting-edge solutions for never-before-solved problems. 
Key job responsibilities - Collaborate with AI/ML scientists, engineers, and architects to research, design, develop, and evaluate cutting-edge generative AI algorithms and build ML systems to address real-world challenges - Interact with customers directly to understand the business problem, help and aid them in implementation of generative AI solutions, deliver briefing and deep dive sessions to customers and guide customer on adoption patterns and paths to production - Create and deliver best practice recommendations, tutorials, blog posts, publications, sample code, and presentations adapted to technical, business, and executive stakeholder - Provide customer and market feedback to Product and Engineering teams to help define product direction About the team Diverse Experiences Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. 
Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship and Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
US, WA, Seattle
The Selling Partner Experience (SPX) organization strives to make Amazon the best place for Selling Partners to do business. The SPX Science team is building an AI-powered conversational assistant to transform the Selling Partner experience. The Selling Assistant is a trusted partner and a seasoned advisor that’s always available to enable our partners to thrive in Amazon’s stores. It takes away the cognitive load of selling on Amazon by providing a single interface to handle a diverse set of selling needs. The assistant always stays by the seller's side, talks to them in their language, enables them to capitalize on opportunities, and helps them accomplish their business goals with ease. It is powered by the state-of-the-art Generative AI, going beyond a typical chatbot to provide a personalized experience to sellers running real businesses, large and small. Do you want to join an innovative team of scientists, engineers, product and program managers who use the latest Generative AI and Machine Learning technologies to help Amazon create a delightful Selling Partner experience? Do you want to build solutions to real business problems by automatically understanding and addressing sellers’ challenges, needs and opportunities? Are you excited by the prospect of contributing to one of Amazon’s most strategic Generative AI initiatives? If yes, then you may be a great fit to join the Selling Partner Experience Science team. Key job responsibilities - Use state-of-the-art Machine Learning and Generative AI techniques to create the next generation of the tools that empower Amazon's Selling Partners to succeed. - Design, develop and deploy highly innovative models to interact with Sellers and delight them with solutions. - Work closely with teams of scientists and software engineers to drive real-time model implementations and deliver novel and highly impactful features. 
- Establish scalable, efficient, automated processes for large-scale data analysis, model benchmarking, model validation, and model implementation.
- Research and implement novel machine learning and statistical approaches.
- Participate in strategic initiatives to employ the most recent advances in ML in a fast-paced, experimental environment.

About the team
Selling Partner Experience Science is a growing team of scientists, engineers, and product leaders engaged in the research and development of the next generation of ML-driven technology to empower Amazon’s Selling Partners to succeed. We draw from many science domains, from natural language processing to computer vision to optimization to economics, to create solutions that seamlessly and automatically engage with Sellers, solve their problems, and help them grow. We are focused on building seller-facing AI-powered tools that use the latest science advancements to empower sellers to drive the growth of their business. We strive to radically simplify the seller experience, lowering the cognitive burden of selling on Amazon by making it easy to accomplish critical tasks such as launching new products, understanding and complying with Amazon’s policies, and taking actions to grow their business.
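The "single interface to handle a diverse set of selling needs" described above boils down to dispatching a seller's request to the right task handler. The sketch below is purely illustrative: every handler name, keyword, and reply string is a hypothetical stand-in, not an Amazon API, and a production assistant would use an LLM rather than keyword matching.

```python
# Minimal sketch of single-interface dispatch for a selling assistant.
# All names and replies here are hypothetical, for illustration only.

def handle_fees(question):
    return "Here is a breakdown of the fees for your listing."

def handle_inventory(question):
    return "Your current inventory levels are summarized below."

def handle_generic(question):
    return "Let me look into that for you."

# Keyword -> handler table; a real system would route with a trained model.
ROUTES = {
    "fee": handle_fees,
    "inventory": handle_inventory,
}

def route(question: str) -> str:
    """Dispatch to the first handler whose keyword appears in the question."""
    q = question.lower()
    for keyword, handler in ROUTES.items():
        if keyword in q:
            return handler(q)
    return handle_generic(q)

print(route("Why did my referral fee change?"))
```

The design point is the single entry point (`route`): sellers ask one assistant, and the system decides which capability answers.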
US, WA, Seattle
The Seller Fees organization drives the monetization infrastructure powering Amazon's global marketplace, processing billions of transactions for over two million active third-party sellers worldwide. Our team owns the complete technical stack and strategic vision for fee-computation systems, leveraging advanced machine learning to optimize seller experiences and maintain fee integrity at unprecedented scale.

We're seeking an exceptional Applied Scientist to push the boundaries of large-scale ML systems in a business-critical domain. This role presents unique opportunities to:
• Architect and deploy state-of-the-art transformer-based models for fee classification and anomaly detection across hundreds of millions of products
• Pioneer novel applications of multimodal LLMs to analyze product attributes, images, and seller metadata for intelligent fee determination
• Build production-scale generative AI systems for fee integrity and seller communications
• Advance the field of ML through novel research in high-stakes, large-scale transaction processing
• Develop state-of-the-art causal inference frameworks integrated with deep learning to understand fee impacts and optimize seller outcomes
• Collaborate with world-class scientists and engineers to solve complex problems at the intersection of deep learning, economics, and large business systems

If you're passionate about advancing the state of the art in applied ML/AI while tackling challenging problems at global scale, we want you on our team!

Key job responsibilities
• Design measurable and scalable science solutions that can be adopted across stores worldwide with different languages, policies, and requirements.
• Integrate AI (both generative and symbolic) into compound agentic workflows to transform complex business systems into intelligent ones for both internal and external customers.
• Develop large-scale classification and prediction models using the rich features of text, images, and customer interactions, together with state-of-the-art techniques.
• Research and implement novel machine learning, statistical, and econometrics approaches.
• Write high-quality code and implement scalable models within production systems.
• Stay up to date with relevant scientific publications.
• Collaborate with business and software teams both within and outside of the Fees organization.
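To make the "anomaly detection" part of the fee-integrity work concrete, here is a deliberately simple baseline: flagging fee charges far from their category's mean by z-score. This is a generic textbook technique, not Amazon's production system; the transformer-based detectors the posting describes would be benchmarked against baselines of roughly this kind. The data is invented.

```python
# Baseline anomaly flagging by z-score; illustrative only, with made-up data.
from statistics import mean, stdev

def flag_anomalies(fees, threshold=2.5):
    """Return indices of fees more than `threshold` std devs from the mean."""
    mu, sigma = mean(fees), stdev(fees)
    if sigma == 0:
        return []
    return [i for i, f in enumerate(fees) if abs(f - mu) / sigma > threshold]

# Hypothetical per-transaction fees for one product category:
fees = [1.50, 1.48, 1.52, 1.49, 9.99, 1.51, 1.50, 1.47, 1.53, 1.50]
print(flag_anomalies(fees))  # the 9.99 charge at index 4 stands out
```

At marketplace scale the same idea becomes learned, per-category models with far richer features, but the output contract is similar: a ranked set of transactions worth human or automated review.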
US, WA, Seattle
Join us in the evolution of Amazon’s Seller business! The Selling Partner Growth organization is the growth and development engine for our Store. Partnering with business, product, and engineering, we catalyze SP growth with comprehensive and accurate data, unique insights, and actionable recommendations, and we collaborate with WW SP-facing teams to drive adoption and create feedback loops. We strongly believe that any motivated SP should be able to grow their business and reach their full potential, supported by Amazon tools and resources.

We are looking for a Senior Applied Scientist to lead us in identifying data-driven insights and opportunities to improve our SP growth strategy and drive new seller success. As a successful applied scientist on our talented team of scientists and engineers, you will solve complex problems to identify actionable opportunities and collaborate with engineering, research, and business teams on future innovation. You need a deep understanding of the business domain and the ability to connect business with science. You also have a strong foundation in ML modeling and science, with the ability to collaborate with engineering to put models in production to answer specific business questions. You are an expert at synthesizing and communicating insights and recommendations to audiences of varying levels of technical sophistication. You will continue to contribute to the research community by working with scientists across Amazon, as well as collaborating with academic researchers and publishing papers (www.aboutamazon.com/research).

Key job responsibilities
As a Sr. Applied Scientist on the team, you will:
- Identify opportunities to improve SP growth and translate those opportunities into science problems via principled statistical solutions (e.g., ML, causal inference, RL).
- Mentor and guide the applied scientists in our organization and hold us to a high standard of technical rigor and excellence in MLOps.
- Design and lead roadmaps for complex science projects that help SPs have a delightful selling experience while creating long-term value for our shoppers.
- Work with our engineering partners and draw upon your experience to meet latency and other system constraints.
- Identify untapped, high-risk technical and scientific directions, and pursue new research directions that you will drive to completion and delivery.
- Be responsible for communicating our science innovations to the broader internal and external scientific community.
US, CA, Sunnyvale
Our team leads the development and optimization of on-device ML models for Amazon's hardware products, including audio, vision, and multimodal AI features. We work at the critical intersection of ML innovation and silicon design, ensuring AI capabilities can run efficiently on resource-constrained devices. Currently, we enable production ML models across multiple device families, including Echo, Ring/Blink, and other consumer devices. Our work directly impacts Amazon's customer experiences in the consumer AI device market. The solutions we develop determine which AI features can be offered on-device versus requiring cloud connectivity, ultimately shaping product capabilities and customer experience across Amazon's hardware portfolio.

This is a unique opportunity to shape the future of AI in consumer devices at unprecedented scale. You'll be at the forefront of developing industry-first model architectures and compression techniques that will power AI features across millions of Amazon devices worldwide. Your innovations will directly enable new AI features that enhance how customers interact with Amazon products every day. Come join our team!

Key job responsibilities
As a Principal Applied Scientist, you will:
• Own the technical architecture and optimization strategy for ML models deployed across Amazon's device ecosystem, from existing to yet-to-be-shipped products.
• Develop novel model architectures optimized for our custom silicon, establishing new methodologies for model compression and quantization.
• Create an evaluation framework for model efficiency and implement multimodal optimization techniques that work across vision, language, and audio tasks.
• Define technical standards for model deployment and drive research initiatives in model efficiency to guide future silicon designs.
• Spend the majority of your time doing deep technical work: developing novel ML architectures, writing critical optimization code, and creating proof-of-concept implementations that demonstrate breakthrough efficiency gains.
• Influence architecture decisions impacting future silicon generations, establish standards for model optimization, and mentor others in advanced ML techniques.
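Quantization, one of the compression techniques named above, is easy to illustrate in miniature. The sketch below shows post-training symmetric int8 quantization with a single scale per tensor; it is a textbook baseline under simplifying assumptions, not the custom-silicon methodology the role would develop, and real deployments add per-channel scales, calibration, and hardware-specific packing.

```python
# Minimal post-training symmetric int8 quantization sketch (illustrative).

def quantize_int8(weights):
    """Map float weights to int8 codes with one symmetric scale per tensor."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return [v * scale for v in q]

w = [0.12, -0.50, 0.33, 0.05, -0.08]       # toy weight vector
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
print(q, round(max_err, 4))                # error is bounded by scale / 2
```

The appeal on resource-constrained devices is that weights shrink 4x versus float32 and integer arithmetic is cheaper, at the cost of a rounding error bounded by half the scale.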
US, CO, Boulder
The Advertising Incrementality Measurement (AIM) team is looking for an Applied Scientist II with experience in causal inference, experimentation, and ML development to help us expand our causal modeling solutions for understanding advertising effectiveness. Our work is foundational to providing customer-facing experimentation tools, furthering internal research and development, and building out Amazon's new Multi-Touch Attribution (MTA) measurement offerings. Incrementality measurement is a linchpin of the next generation of Amazon Advertising measurement solutions, and this position will play a key part in the release and expansion of these offerings.

Key job responsibilities
* Partner with economists and senior team members to drive science improvements and implement technical solutions at the state of the art of machine learning and econometrics.
* Partner with engineering and other science collaborators to design, implement, prototype, deploy, and maintain large-scale causal ML models.
* Carry out in-depth research and analysis of advertising-related data sets, including large sets of real-world experimental data, to understand advertiser behavior, highlight model improvement opportunities, and understand shortcomings and limitations.
* Define data quality standards for understanding typical behavior, capturing outliers, and detecting model performance issues.
* Work with product stakeholders to improve our ability to provide quality measurement of advertising effectiveness for our customers.

About the team
AIM is a cross-disciplinary team of engineers, product managers, economists, data scientists, and applied scientists with a charter to build scientifically rigorous causal inference methodologies at scale. Our job is to help customers cut through the noise of the modern advertising landscape and understand what actions, behaviors, and strategies actually have a real, measurable impact on key outcomes.
The data we produce becomes the effective ground truth for advertisers and partners making decisions that affect tens to hundreds of millions of dollars in advertising spend.
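At its core, incrementality measurement asks how outcomes for customers exposed to an ad differ from those of a randomized holdout that never saw it. The sketch below shows that core calculation as a plain difference in group means, with invented per-user values; the team's actual methodologies layer variance estimation, covariate adjustment, and causal ML on top of this idea.

```python
# Incrementality as a difference in means between randomized groups.
# Data and group sizes are hypothetical, for illustration only.
from statistics import mean

def incremental_lift(exposed, holdout):
    """Average treatment effect estimate: exposed mean minus holdout mean."""
    return mean(exposed) - mean(holdout)

# Hypothetical per-user purchase amounts (dollars) from an ad experiment:
exposed = [12.0, 0.0, 30.0, 8.0, 0.0, 15.0]   # saw the campaign
holdout = [10.0, 0.0, 5.0, 0.0, 9.0, 0.0]     # randomized control
print(incremental_lift(exposed, holdout))
```

Because the holdout is randomized, the difference in means is an unbiased estimate of the ads' causal effect rather than a mere correlation, which is exactly what distinguishes incrementality from click-based attribution.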
US, WA, Seattle
The Measurement, Ad Tech, and Data Science (MADS) team at Amazon Ads is at the forefront of developing cutting-edge solutions that help our tens of millions of advertisers understand the value of their ad spend while prioritizing customer privacy and measurement quality. We develop deterministic algorithms, machine learning models, causal models, and statistical approaches to empower advertisers with insights into the effectiveness of their ads in guiding customers from awareness to purchases. Our insights help advertisers build full-funnel advertising strategies. We maximize the information we extract from incomplete traffic signals and alternative sources to capture the impact of ad spending for both Amazon-recognized and anonymous traffic. Our vision is to lead the industry in extracting and combining information from several sources to enable advertisers to optimize the return on their ad spend.

As an Applied Scientist on this team, you will:
- Drive end-to-end machine learning projects that have a high degree of ambiguity, scale, and complexity.
- Perform hands-on analysis and modeling of enormous data sets to develop insights that increase traffic monetization and merchandise sales, without compromising the shopper experience.
- Build machine learning models, perform proofs of concept, experiment, optimize, and deploy your models into production; work closely with software engineers to assist in productionizing your ML models.
- Run A/B experiments, gather data, and perform statistical analysis.
- Establish scalable, efficient, automated processes for large-scale data analysis, machine learning model development, model validation, and serving.
- Research new and innovative machine learning approaches.
- Recruit Applied Scientists to the team and provide mentorship.

Why you will love this opportunity: Amazon is investing heavily in building a world-class advertising business.
This team defines and delivers a collection of advertising products that drive discovery and sales. Our solutions generate billions in revenue and drive long-term growth for Amazon’s Retail and Marketplace businesses. We deliver billions of ad impressions and millions of clicks daily, and we break fresh ground to create world-class products. We are a highly motivated, collaborative, and fun-loving team with an entrepreneurial spirit and a broad mandate to experiment and innovate.

Impact and Career Growth: You will invent new experiences and influence customer-facing shopping experiences that help suppliers grow their retail business, along with the auction dynamics that power native advertising; this is your opportunity to work within one of the fastest-growing businesses across all of Amazon! You will define a long-term science vision for our advertising business, driven by our customers' needs, and translate that direction into specific plans for research and applied scientists, as well as engineering and product teams. This role combines science leadership, organizational ability, technical strength, product focus, and business understanding.

Team video: https://youtu.be/zD_6Lzw8raE

Key job responsibilities
* Lead the development of ad measurement models and solutions that address the full spectrum of an advertiser's investment, focusing on scalable and efficient methodologies.
* Collaborate closely with cross-functional teams including engineering, product management, and business teams to define and implement measurement solutions.
* Use state-of-the-art scientific technologies, including generative AI, classical machine learning, causal inference, natural language processing, and computer vision, to develop cutting-edge models that measure the impact of ad spend across multiple platforms and timescales.
* Drive experimentation and the continuous improvement of ML models through iterative development, testing, and optimization.
* Translate complex scientific challenges into clear and impactful solutions for business stakeholders.
* Mentor and guide junior scientists, fostering a collaborative and high-performing team culture.
* Foster collaborations between scientists to move faster, with broader impact.
* Regularly engage with the broader scientific community through presentations, publications, and patents.

A day in the life
You will solve real-world problems by gathering and analyzing large amounts of data, generating business insights and opportunities, designing simulations and experiments, and developing statistical and ML models. The team is driven by business needs, which requires collaboration with other Scientists, Engineers, and Product Managers across the advertising organization. You will prepare written and verbal presentations to share insights with audiences of varying levels of technical sophistication.

Team video: https://advertising.amazon.com/help/G4LNN5YWHP6SM9TJ
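The "run A/B experiments … and perform statistical analysis" workflow mentioned above has a standard statistical core: testing whether two conversion rates differ. The sketch below is a textbook two-proportion z-test on invented numbers; it is not Amazon tooling, and real ad experiments add sequential testing, variance reduction, and multiple-comparison controls.

```python
# Two-proportion z-test for an A/B experiment; data is hypothetical.
from math import sqrt, erf

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # pooled standard error
    z = (p_b - p_a) / se
    # Two-sided p-value via the normal CDF, Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 120/2400 conversions in control, 156/2400 in treatment.
z, p = ab_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(round(z, 2), round(p, 4))
```

With these made-up counts the lift is statistically significant at the conventional 0.05 level, which is the kind of per-experiment readout that then feeds the team's larger measurement models.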