Rohit Prasad, vice president and head scientist for Alexa AI, demonstrates interactive teaching by customers, a new Alexa capability announced last fall.

Alexa: The science must go on

Throughout the pandemic, the Alexa team has continued to invent on behalf of our customers.

COVID-19 has cost us precious lives and served a harsh reminder that so much more needs to be done to prepare for unforeseen events. In these difficult times, we have also seen heroic efforts — from frontline health workers working night and day to take care of patients, to rapid development of vaccines, to delivery of groceries and essential items in the safest possible way given the circumstances.

Alexa’s communications capabilities are helping families connect with their loved ones during lockdown.

Alexa has also tried to help where it can. We rapidly added skills that provide information about resources for dealing with COVID-19. We donated Echo Shows and Echo Dots to healthcare providers, patients, and assisted-living facilities around the country, and Alexa’s communications capabilities — including new calling features (e.g., group calling), and the new Care Hub — are helping providers coordinate care and families connect with their loved ones during lockdown.

It has been just over a year since our schools closed down and we started working remotely. With our homes turned into offices and classrooms, one of the challenges has been keeping our kids motivated and on-task for remote learning. Skills such as the School Schedule Blueprint are helping parents like me manage their children’s remote learning and keep them excited about the future.

Despite the challenges of the pandemic, the Alexa team has shown incredible adaptability and grit, delivering scientific results that are already making a difference for our customers and will have long-lasting effects. Over the past 12 months, we have made advances in four thematic areas, making Alexa more

  1. natural and conversational: interactions with Alexa should be as free-flowing as interacting with another person, without requiring customers to use strict linguistic constructs to communicate with Alexa’s ever-growing set of skills. 
  2. self-learning and data efficient: Alexa’s intelligence should improve without requiring manually labeled data, and it should strive to learn directly from customers. 
  3. insightful and proactive: Alexa should assist and/or provide useful information to customers by anticipating their needs.
  4. trustworthy: Alexa should have attributes like those we cherish in trustworthy people, such as discretion, fairness, and ethical behavior.

Natural and conversational 

Accurate far-field automatic speech recognition (ASR) is critical for natural interactions with Alexa. We have continued to make advances in this area, and at Interspeech 2020, we presented 12 papers, including improvements in end-to-end ASR using the recurrent-neural-network-transducer (RNN-T) architecture. ASR advances, coupled with improvements in natural-language understanding (NLU), have reduced the worldwide error rate for Alexa by more than 24% in the past 12 months.

One of Alexa Speech’s Interspeech 2020 papers, “Rescore in a flash: compact, cache efficient hashing data structures for n-gram language models”, proposes a new data structure, DashHashLM, for encoding the probabilities of word sequences in language models with a minimal memory footprint.
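For readers unfamiliar with the setting the paper addresses, an n-gram language model is essentially a large key-value store mapping word sequences to probabilities, queried with backoff to shorter contexts on a miss. The toy sketch below (a plain Python dict, not the paper's compact, cache-efficient hash structure) shows the lookup pattern DashHashLM is designed to make fast and memory-lean:

```python
# Toy n-gram language model lookup with backoff. This illustrates the access
# pattern DashHashLM optimizes; the names and layout here are hypothetical,
# and the paper's actual structure uses compact perfect hashing, not a dict.
class ToyNGramLM:
    def __init__(self):
        self.table = {}  # maps n-gram tuple -> log10 probability

    def add(self, ngram, logprob):
        self.table[tuple(ngram)] = logprob

    def score(self, ngram):
        """Score an n-gram, backing off to shorter contexts on misses."""
        ngram = tuple(ngram)
        while ngram:
            if ngram in self.table:
                return self.table[ngram]
            ngram = ngram[1:]  # back off: drop the oldest context word
        return -99.0  # floor for completely unseen words

lm = ToyNGramLM()
lm.add(["play", "some", "music"], -0.5)
lm.add(["some", "music"], -1.2)
lm.add(["music"], -2.0)

print(lm.score(["play", "some", "music"]))  # -0.5, full trigram hit
print(lm.score(["stop", "some", "music"]))  # -1.2, backs off to the bigram
```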

Customers depend on Alexa’s ability to answer single-shot requests, but to continue to provide new, delightful experiences, we are teaching Alexa to accomplish complex goals that require multiturn dialogues. In February, we announced the general release of Alexa Conversations, a capability that makes it easy for developers to build skills that engage customers in dialogues. The developer simply provides APIs (application programming interfaces), a list of entity types invoked in the skill, and a small set of sample dialogues that illustrate interactions with the skills’ capabilities. 

Alexa Conversations’ deep-learning-based dialogue manager takes care of the rest by predicting numerous alternate ways in which a customer might engage with the skill. Nearly 150 skills — such as iRobot Home and Art Museum — have now been built with Alexa Conversations, with another 100 under way, and our internal teams have launched capabilities such as Alexa Greetings (where Alexa answers the Ring doorbell on behalf of customers) and “what to read” with the same underlying capability.  

Further, to ensure that existing skills built without Alexa Conversations understand customer requests more accurately, we migrated hundreds of skills to deep neural networks (as opposed to conditional random fields). Migrated skills are seeing increases in understanding accuracy of 15% to 23% across locales. 

Alexa’s skills are ever expanding, with over 100,000 skills built worldwide by external developers. As that number has grown, discovering new skills has become a challenge. Even when customers know of a skill, they can have trouble remembering its name or how to interact with it. 

To make skills more discoverable and eliminate the need to say “Alexa, ask <skill X> to do <Y>,” we launched a deep-learning-based capability for routing utterances that do not have explicit mention of a skill’s name to relevant skills. Thousands of skills are now being discovered naturally, and in preview, they received an average of 15% more traffic. At last year’s International Conference on Acoustics, Speech, and Signal Processing (ICASSP), we presented a novel method for automatically labeling training data for Alexa’s skill selection model, which is crucial to improving utterance routing accuracy as the number of skills continues to grow.  

A constituency tree featuring syntactic-distance measures.
To make the prosody of Alexa's speech more natural, the Amazon Text-to-Speech team uses constituency trees to measure the syntactic distance (orange circles) between words of an utterance, a good indicator of where phrasing breaks or prosodic resets should occur.
Credit: Glynis Condon

As we’ve been improving Alexa’s understanding capabilities, our Text-to-Speech (TTS) synthesis team has been working to increase the naturalness of Alexa’s speech. We have developed prosodic models that enable Alexa to vary patterns of intonation and inflection to fit different conversational contexts. 

This is a first milestone on the path to contextual language generation and speech synthesis. Depending on the conversational context and the speaking attributes of the customer, Alexa will vary its response — both the words chosen and the speaking style, including prosody, stress, and intonation. We also made progress in detecting tone of voice, which can be an additional signal for adapting Alexa’s responses.

Humor is a critical element of human-like conversational abilities. However, recognizing humor and generating humorous responses is one of the most challenging tasks in conversational AI. University teams participating in the Alexa Prize socialbot challenge have made significant progress in this area by identifying opportunities to use humor in conversation and selecting humorous phrases and jokes that are contextually appropriate.

One of our teams is identifying humor in product reviews by detecting incongruity between product titles and questions asked by customers. For instance, the question “Does this make espresso?” might be reasonable when applied to a high-end coffee machine, but applied to a Swiss Army knife, it’s probably a joke. 
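As a rough illustration of the incongruity signal, one could compare how much a question overlaps semantically with the product it is asked about. The sketch below uses simple word overlap as a hypothetical stand-in for the learned semantic similarity a real model would compute:

```python
import string

# A hypothetical sketch of incongruity detection between a product title and
# a customer question. Lexical overlap here is a crude stand-in for learned
# semantic similarity; the threshold is invented for illustration.
def tokens(text):
    return {w.strip(string.punctuation) for w in text.lower().split()}

def jaccard(a, b):
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb)

def looks_like_joke(title, question, threshold=0.1):
    # Very low overlap between product and question suggests incongruity,
    # a common signal of humor.
    return jaccard(title, question) < threshold

print(looks_like_joke("Swiss Army knife multi-tool",
                      "Does this make espresso?"))        # True
print(looks_like_joke("Espresso machine with milk frother",
                      "Does this make espresso?"))        # False
```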

We live in a multilingual and multicultural world, and this pandemic has made it even more important for us to connect across language barriers. In 2019, we launched a bilingual version of Alexa — i.e., customers could address the same device in US English or Spanish without asking Alexa to switch languages on every request. However, Alexa's Spanish responses were in a different voice than its English responses.  

By leveraging advances in neural text-to-speech (much the way we had used multilingual learning techniques to improve language understanding), we taught the original Alexa voice — which was based on English-only recordings — to speak perfectly accented U.S. Spanish. 

To further break down language barriers, in December we launched two-way language translation, which enables Alexa to act as an interpreter for customers speaking different languages. Alexa can now translate on the fly between English and six other languages on the same device.

In September 2020, I had the privilege of demonstrating natural turn-taking (NTT), a new capability that has the potential to make Alexa even more useful and delightful for our customers. With NTT, Alexa uses visual cues, in combination with acoustic and linguistic information, to determine whether a customer is addressing Alexa or other people in the household — even when there is no wake word. Our teams are working hard on bringing NTT to our customers later this year so that Alexa can participate in conversations just like a family member or a friend.  

Self-learning and data-efficient 

In AI, one definition of generalization is the ability to robustly handle novel situations and learn from them with minimal human supervision. Two years ago, we introduced the ability for Alexa to automatically correct errors in its understanding without requiring any manual labeling. This self-learning system uses implicit feedback (e.g., when a customer interrupts a response to rephrase a request) to automatically revise Alexa’s handling of requests that fail. This learning method is automatically addressing 15% of defects, as quickly as a few hours after detection; with supervised learning, these defects would have taken weeks to address. 
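The core idea, learning a rewrite from a failed request to the successful rephrase that followed it, can be sketched in a few lines. This is a hypothetical, purely local illustration; the production system aggregates implicit feedback across many interactions with far more sophisticated models:

```python
from difflib import SequenceMatcher

# Simplified sketch of self-learning from implicit feedback: when a customer
# barges in, rephrases, and the rephrase succeeds, record a rewrite from the
# failed form to the successful one. The similarity gate is a stand-in for
# the real system's check that the two requests express the same intent.
class RewriteLearner:
    def __init__(self, min_similarity=0.6):
        self.rewrites = {}
        self.min_similarity = min_similarity

    def observe(self, failed, rephrased_success):
        # Only treat the pair as a rephrase if the two requests are similar
        # enough; otherwise the customer likely changed their mind.
        sim = SequenceMatcher(None, failed, rephrased_success).ratio()
        if sim >= self.min_similarity:
            self.rewrites[failed] = rephrased_success

    def rewrite(self, request):
        return self.rewrites.get(request, request)

learner = RewriteLearner()
learner.observe("play ambient sleep sound", "play ambient sleep sounds")
print(learner.rewrite("play ambient sleep sound"))  # rewritten to the successful form
```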

Diagram depicting example of paraphrase alignment
We won a best-paper award at last year's International Conference on Computational Linguistics for a self-learning system that finds the best mapping from a successful request to an unsuccessful one, then transfers the training labels automatically.
Credit: Glynis Condon

At December 2020’s International Conference on Computational Linguistics, our scientists won a best-paper award for a complementary approach to self-learning. Where the earlier system overwrites the outputs of Alexa’s NLU models, the newer system uses implicit feedback to create automatically labeled training examples for those models. This approach is particularly promising for the long tail of unusually phrased requests, and it can be used in conjunction with the existing self-learning system.

In parallel, we have been inventing methods that enable Alexa to add new capabilities, intents, and concepts with as little manually labeled data as possible — often by generalizing from one task to another. For example, in a paper at last year’s ACL Workshop on NLP for Conversational AI, we demonstrated the value of transfer learning from reading comprehension to other natural-language-processing tasks, resulting in the best published results on few-shot learning for dialogue state tracking in low-data regimes.

Similarly, at this year’s Spoken Language Technology conference, we showed how to combine two existing approaches to few-shot learning — prototypical networks and data augmentation — to quickly and accurately learn new intents.
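The prototypical-networks half of that combination is simple to sketch: average the embeddings of a few labeled examples per new intent into a prototype, then classify new utterances by nearest prototype. The toy bag-of-characters "embedding" below is a stand-in for a trained sentence encoder, and the data-augmentation half of the approach is omitted:

```python
import numpy as np

# Toy few-shot intent classification in the prototypical-networks style.
# The character-count "embedding" is purely illustrative; a real system
# would use a learned utterance encoder.
def embed(text):
    vec = np.zeros(26)
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1
    return vec / (np.linalg.norm(vec) + 1e-9)

def prototypes(support):
    # support: dict mapping intent -> list of example utterances
    return {intent: np.mean([embed(u) for u in utts], axis=0)
            for intent, utts in support.items()}

def classify(utterance, protos):
    # Assign the utterance to the nearest class prototype.
    q = embed(utterance)
    return min(protos, key=lambda intent: np.linalg.norm(q - protos[intent]))

support = {
    "set_timer": ["set a timer for five minutes", "start a ten minute timer"],
    "play_music": ["play some jazz", "put on my workout playlist"],
}
protos = prototypes(support)
print(classify("set a timer for twenty minutes", protos))  # set_timer
```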

Human-like conversational abilities require common sense — something that is still elusive for conversational-AI services, despite the massive progress due to deep learning. We received the best-paper award at the Empirical Methods in Natural Language Processing (EMNLP) 2020 Workshop on Deep Learning Inside Out (DeeLIO) for our work on infusing commonsense knowledge graphs explicitly and implicitly into large pre-trained language models to give machines greater social intelligence. We will continue to build on such techniques to make interactions with Alexa more intuitive for our customers, without requiring a large quantity of annotated data. 

In December 2020, we launched a new feature that allows customers to teach Alexa new concepts. For instance, if a customer says, “Alexa, set the living room light to study mode”, Alexa might now respond, “I don't know what study mode is. Can you teach me?” Alexa extracts a definition from the customer’s answer, and when the customer later makes the same request — or a similar request — Alexa responds with the learned action. 

Alexa uses multiple deep-learning-based parsers to enable such explicit teaching. First, Alexa detects spans in requests that it has trouble understanding. Next, it engages in a clarification dialogue to learn the new concept. Thanks to this novel capability, customers are able to customize Alexa for their needs, and Alexa is learning thousands of new concepts in the smart-home domain every day, without any manual labeling. We will continue to build on this success and develop more self-learning techniques to make Alexa more useful and personal for our customers.
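The overall flow (detect an unknown span, ask for a definition, store it, and replay it on later requests) can be illustrated with a deliberately simplified dictionary-based version; the real system relies on neural parsers and a richer clarification dialogue:

```python
# Hypothetical sketch of interactive concept teaching. All concept names and
# actions below are invented for illustration.
class ConceptStore:
    def __init__(self):
        self.known = {"movie mode": ["dim the lights to 20%"]}
        self.learned = {}

    def handle(self, request):
        for concept, actions in {**self.known, **self.learned}.items():
            if concept in request:
                return "OK: " + ", ".join(actions)
        # No concept grounded: start a clarification dialogue.
        return "I don't know that. Can you teach me?"

    def teach(self, concept, actions):
        # Store the definition extracted from the customer's answer.
        self.learned[concept] = actions

store = ConceptStore()
print(store.handle("set the living room to study mode"))  # asks to be taught
store.teach("study mode", ["set lights to 60% warm white", "enable do not disturb"])
print(store.handle("set the living room to study mode"))  # replays the learned actions
```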

Insightful and proactive

Alexa-enabled ambient devices have revolutionized daily convenience, enabling us to get what we need simply by asking for it. However, the utility of these devices and endpoints does not need to be limited to customer-initiated requests. Instead, Alexa should anticipate customer needs and seamlessly assist in meeting those needs. Smart hunches, location-based reminders, and discovery of routines are a few ways in which Alexa is already helping customers. 

Illustration of Alexa inferring a customer asking about weather at the beach may be planning a beach trip.
In this interaction, Alexa infers that a customer who asks about the weather at the beach may be interested in other information that could be useful for planning a beach trip.
Credit: Glynis Condon

Another way for Alexa to be more useful to our customers is to predict customers’ goals that span multiple disparate skills. For instance, if a customer asks, “How long does it take to steep tea?”, Alexa might answer, “Five minutes is a good place to start", then follow up by asking, "Would you like me to set a timer for five minutes?” In 2020, we launched an initial version of Alexa’s ability to anticipate and complete multi-skill goals without any explicit preprogramming.  

While this ability makes the complex seem simple, underneath, it depends on multiple deep-learning models. A “trigger model” decides whether to predict the customer’s goal at all, and if it decides it should, it suggests a skill to handle the predicted goal. But the skills it suggests are identified by another model that relies on information-theoretic analyses of input utterances, together with subsidiary models that assess features such as whether the customer was trying to rephrase a prior command, or whether the direct goal and the latent goal have common entities or values.  
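A hypothetical two-stage sketch of that pipeline: a trigger model first decides whether suggesting anything is appropriate at all, and only then does a second model rank candidate skills for the latent goal. The thresholds, features, and skill names below are invented for illustration:

```python
# Invented two-stage latent-goal sketch: trigger, then rank.
def trigger(confidence, is_rephrase):
    # Don't interrupt customers who are just correcting a misunderstanding.
    return confidence > 0.8 and not is_rephrase

def rank_skills(utterance, candidates):
    # Stand-in scorer: prefer candidates sharing entities with the utterance.
    words = set(utterance.lower().split())
    return max(candidates, key=lambda c: len(words & c["entities"]))

utterance = "how long does it take to steep tea"
candidates = [
    {"skill": "Timers", "entities": {"steep", "tea", "minutes"}},
    {"skill": "Recipes", "entities": {"bake", "oven"}},
]
if trigger(confidence=0.9, is_rephrase=False):
    print(rank_skills(utterance, candidates)["skill"])  # Timers
```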

Trustworthy

We have made significant advances in areas that are key to making Alexa more trusted by customers. In the field of privacy-preserving machine learning, for instance, we have been exploring differential privacy, a theoretical framework for evaluating the privacy protections offered by systems that generate aggregate statistics from individuals’ data. 

At the EMNLP 2020 Workshop on Privacy in Natural Language Processing, we presented a paper that proposes a new way to offer metric-differential-privacy assurances by adding so-called elliptical noise to training data for machine learning systems, and at this year’s Conference of the European Chapter of the Association for Computational Linguistics, we’ll present a technique for transforming texts that preserves their semantic content but removes potentially identifying information. Both methods significantly improve on the privacy protections afforded by older approaches while leaving the performance of the resulting systems unchanged.

A new approach to protecting privacy in machine learning systems that uses elliptical noise (right) rather than the conventional spherical noise (left) to perturb training data significantly improves privacy protections while leaving the performance of the resulting systems unchanged.
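The basic mechanism, perturbing each embedding with noise before it is used for training, is easy to illustrate. The sampling below is a simplified stand-in (plain Gaussian noise, with per-dimension scales for the elliptical case), not the calibrated metric-differential-privacy mechanism from the paper:

```python
import numpy as np

# Simplified illustration of noise-based privacy for embeddings. The scales
# here are arbitrary; a real mechanism calibrates the noise distribution to
# the desired privacy guarantee.
rng = np.random.default_rng(0)

def spherical_noise(dim, scale):
    # Isotropic noise: the same scale in every direction.
    return rng.normal(0.0, scale, size=dim)

def elliptical_noise(scales):
    # Anisotropic ("elliptical") noise: a different scale per dimension,
    # so the perturbation can be shaped to the data.
    scales = np.asarray(scales)
    return rng.normal(0.0, 1.0, size=len(scales)) * scales

embedding = np.array([0.2, -0.5, 0.9, 0.1])
private_spherical = embedding + spherical_noise(4, 0.3)
private_elliptical = embedding + elliptical_noise([0.5, 0.1, 0.1, 0.5])
print(private_spherical.shape, private_elliptical.shape)  # (4,) (4,)
```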


We have also made Alexa’s answers to information-centric questions more trustworthy by expanding our knowledge graph and improving our neural semantic parsing and web-based information retrieval. If, however, the sources of information used to produce a knowledge graph encode harmful social biases — even as a matter of historical accident — the knowledge graph may encode them as well. In a pair of papers presented last year, our scientists devised techniques for both identifying and remediating instances of bias in knowledge graphs, to help ensure that those biases don’t leak into Alexa’s answers to questions.

A two-dimensional representation of our method for measuring bias in knowledge graph embeddings.
A two-dimensional representation of the method for measuring bias in knowledge graph embeddings that we presented last year. In each diagram, the blue dots labeled person1 indicate the shift in an embedding as we tune its parameters. The orange arrows represent relation vectors and the orange dots the sums of those vectors and the embeddings. As we shift the gender relation toward maleness, the profession relation shifts away from nurse and closer to doctor, indicating gender bias.
Credit: Glynis Condon
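In the TransE style of knowledge graph embedding, an entity vector plus a relation vector should land near the target entity's vector. A toy version of the bias measurement, with fabricated two-dimensional vectors, looks like this:

```python
import numpy as np

# Fabricated 2-D embeddings for illustration only. The real method tunes a
# person embedding along a sensitive relation (gender) and checks how its
# predictions for another relation (profession) shift.
gender = np.array([1.0, 0.0])        # relation vector: toward "male"
profession = np.array([0.0, 1.0])    # relation vector
doctor = np.array([0.8, 1.0])
nurse = np.array([-0.8, 1.0])

def profession_shift(person, step=0.5):
    before = person + profession
    after = (person + step * gender) + profession  # nudge toward maleness
    d_doctor = np.linalg.norm(after - doctor) - np.linalg.norm(before - doctor)
    d_nurse = np.linalg.norm(after - nurse) - np.linalg.norm(before - nurse)
    # A negative shift toward doctor alongside a positive shift away from
    # nurse indicates the space encodes a gendered association.
    return d_doctor, d_nurse

print(profession_shift(np.array([0.0, 0.0])))  # closer to doctor, farther from nurse
```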

Similarly, the language models that many speech recognition and natural-language-understanding applications depend on are trained on corpora of publicly available texts; if those data reflect biases, so will the resulting models. At the recent ACM Conference on Fairness, Accountability, and Transparency, Alexa AI scientists presented a new data set that can be used to test language models for bias and a new metric for quantitatively evaluating the test results.

Still, we recognize that a lot more needs to be done in AI in the areas of fairness and ethics, and to that end, partnership with universities and other dedicated research organizations can be a force multiplier. As a case in point, our collaboration with the National Science Foundation to accelerate research on fairness in AI recently entered its second year, with a new round of grant recipients named in February 2021.

And in January 2021, we announced the creation of the Center for Secure and Trusted Machine Learning, a collaboration with the University of Southern California that will support USC and Amazon researchers in the development of novel approaches to privacy-preserving ML solutions.

Strengthening the research community

I am particularly proud that, despite the effort required to bring all these advances to fruition, our scientists have remained actively engaged with the broader research community in many other areas. To choose just a few examples:

  • In August, we announced the winners of the third instance of the Alexa Prize Grand Challenge to develop conversational-AI systems, or socialbots, and in September, we opened registration for the fourth instance. Earlier this month, we announced another track of research for Alexa Prize called the TaskBot Challenge, in which university teams will compete to develop multimodal agents that assist customers in completing tasks requiring multiple steps and decisions.
  • In September, we announced the creation of the Columbia Center of Artificial Intelligence Technology, a collaboration with Columbia Engineering that will be a hub of research, education, and outreach programs.
  • In October, we launched the DialoGLUE challenge, together with a set of benchmark models, to encourage research on conversational generalizability, or the ability of dialogue agents trained on one task to adapt easily to new tasks.

Come work with us

Amazon is looking for data scientists, research scientists, applied scientists, interns, and more. Check out our careers page to find all of the latest job listings around the world.

We are grateful for the amazing work of our fellow researchers in the medical, pharmaceutical, and biotech communities who have developed COVID-19 vaccines in record time.

Thanks to their scientific contributions, we now have the strong belief that we will prevail against this pandemic. 

I am looking forward to the end of this pandemic and the chance to work even more closely with the Alexa teams and the broader scientific community to make further advances in conversational AI and enrich our customers’ lives. 

Research areas

Related content

ES, B, Barcelona
Are you a scientist passionate about advancing the frontiers of computer vision, machine learning, or large language models? Do you want to work on innovative research projects that lead to innovative products and scientific publications? Would you value access to extensive datasets? If you answer yes to any of these questions, you'll find a great fit at Amazon. We're seeking a hands-on researcher eager to derive, implement, and test the next generation of Generative AI, computer vision, ML, and NLP algorithms. Our research is innovative, multidisciplinary, and far-reaching. We aim to define, deploy, and publish pioneering research that pushes the boundaries of what's possible. To achieve our vision, we think big and tackle complex technological challenges at the forefront of our field. Where technology doesn't exist, we create it. Where it does, we adapt it to function at Amazon's scale. We need team members who are passionate, curious, and willing to learn continuously. Key job responsibilities * Derive novel computer vision and ML models or LLMs/VLMs. * Design and develop scalable ML models. * Create and work with large datasets * Work with large GPU clusters. * Work closely with software engineering teams to deploy your innovations. * Publish your work at major conferences/journals. * Mentor team members in the use of your AI models. 
A day in the life As a Senior Applied Scientist at Amazon, your typical day might look like this: * Dive into coding, deriving new ML models for computer vision or NLP * Experiment with massive datasets on our GPU clusters * Brainstorm with your team to solve complex AI challenges * Collaborate with engineers to turn your research into real products * Write up your findings for publication in top journals or conferences * Mentor junior team members on AI concepts and implementation About the team DiscoVision, a science unit within Amazon's UPMT, focuses on advancing visual content capabilities through state-of-the-art AI technology. Our team specializes in developing state-of-the-art technologies in text-to-image/video Generative AI, 3D modeling, and multimodal Large Language Models (LLMs).
US, WA, Redmond
Amazon Leo is an initiative to launch a constellation of Low Earth Orbit satellites that will provide low-latency, high-speed broadband connectivity to unserved and underserved communities around the world. As a Communication Systems Research Scientist, this role is primarily responsible for the design, development and integration of Ka band and S/C band communication payload and ground terminal systems. The Role: Be part of the team defining the overall communication system and architecture of Amazon’s broadband wireless network. This is a unique opportunity to innovate and define groundbreaking wireless technology with few legacy constraints. The team develops and designs the communication system of Amazon Leo and analyzes its overall system level performance such as for overall throughput, latency, system availability, packet loss etc. This role in particular will be responsible for leading the effort in designing and developing advanced technology and solutions for communication system. This role will also be responsible developing advanced L1/L2 proof of concept HW/SW systems to improve the performance and reliability of the Amazon Leo network. In particular this role will be responsible for using concepts from digital signal processing, information theory, wireless communications to develop novel solutions for achieving ultra-high performance LEO network. This role will also be part of a team and develop simulation tools with particular emphasis on modeling the physical layer aspects such as advanced receiver modeling and abstraction, interference cancellation techniques, FEC abstraction models etc. This role will also play a critical role in the design, integration and verification of various HW and SW sub-systems as a part of system integration and link bring-up and verification. Export Control Requirement: Due to applicable export control laws and regulations, candidates must be a U.S. citizen or national, U.S. 
permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum. Key job responsibilities • Design advanced L1/L2 algorithms and solutions for the Amazon Leo communication system, particularly Multi-User MIMO techniques. • Develop proof-of-concepts for critical communication payload components using SDR platforms consisting of FPGAs and general-purpose processors. • Work with ASIC development teams to build power/area efficient L1/L2 HW accelerators to be integrated into Amazon Leo SoCs. • Provide specifications and work with implementation teams on the development of embedded L1/L2 HW/SW architectures. • Work with multi-disciplinary teams to develop advanced solutions for time, frequency and spatial acquisition/tracking in LEO systems, particularly under large uncertainties. • Develop link-level and system-level simulators and work closely with implementation teams to evaluate expected performance and provide quick feedback on potential improvements. • Develop testbeds consisting of digital, IF and RF components while accounting for link-budgets and RF/IF line-ups. Previous experiences with VSAs/VSGs, channel emulators, antennas (particularly phased-arrays) and anechoic chamber instrumentation are a plus. • Work with development teams on system integration and debugging from PHY to network layer, including interfacing with flight computer and SDN control subsystems. • Willing to work in fast-paced environment and take ownership that goes from algorithm specification, to HW/SW architecture definition, to proof-of-concept development, to testbed bring-up, to integration into the Amazon Leo system. • Be a team player and provide support when requested while being able to unblock themselves by reaching out to RF, ASIC, SW, Comsys and Testbed supporting teams to move forward in development, testing and integration activities. 
• Ability to adapt design and test activities based on current HW/SW capabilities delivered by the development teams.
US, CA, San Francisco
Join the next revolution in robotics at Amazon's Frontier AI & Robotics team, where you'll work alongside world-renowned AI pioneers to push the boundaries of what's possible in robotic intelligence. As an Applied Scientist, you'll be at the forefront of developing breakthrough foundation models that enable robots to perceive, understand, and interact with the world in unprecedented ways. You'll drive independent research initiatives in areas such as perception, manipulation, science understanding, locomotion, manipulation, sim2real transfer, multi-modal foundation models and multi-task robot learning, designing novel frameworks that bridge the gap between state-of-the-art research and real-world deployment at Amazon scale. In this role, you'll balance innovative technical exploration with practical implementation, collaborating with platform teams to ensure your models and algorithms perform robustly in dynamic real-world environments. You'll have access to Amazon's vast computational resources, enabling you to tackle ambitious problems in areas like very large multi-modal robotic foundation models and efficient, promptable model architectures that can scale across diverse robotic applications. 
Key job responsibilities - Drive independent research initiatives across the robotics stack, including robotics foundation models, focusing on breakthrough approaches in perception, and manipulation, for example open-vocabulary panoptic scene understanding, scaling up multi-modal LLMs, sim2real/real2sim techniques, end-to-end vision-language-action models, efficient model inference, video tokenization - Design and implement novel deep learning architectures that push the boundaries of what robots can understand and accomplish - Lead full-stack robotics projects from conceptualization through deployment, taking a system-level approach that integrates hardware considerations with algorithmic development, ensuring robust performance in production environments - Collaborate with platform and hardware teams to ensure seamless integration across the entire robotics stack, optimizing and scaling models for real-world applications - Contribute to the team's technical strategy and help shape our approach to next-generation robotics challenges A day in the life - Design and implement novel foundation model architectures and innovative systems and algorithms, leveraging our extensive infrastructure to prototype and evaluate at scale - Collaborate with our world-class research team to solve complex technical challenges - Lead technical initiatives from conception to deployment, working closely with robotics engineers to integrate your solutions into production systems - Participate in technical discussions and brainstorming sessions with team leaders and fellow scientists - Leverage our massive compute cluster and extensive robotics infrastructure to rapidly prototype and validate new ideas - Transform theoretical insights into practical solutions that can handle the complexities of real-world robotics applications About the team At Frontier AI & Robotics, we're not just advancing robotics – we're reimagining it from the ground up. 
Our team is building the future of intelligent robotics through innovative foundation models and end-to-end learned systems. We tackle some of the most challenging problems in AI and robotics, from developing sophisticated perception systems to creating adaptive manipulation strategies that work in complex, real-world scenarios. What sets us apart is our unique combination of ambitious research vision and practical impact. We leverage Amazon's massive computational infrastructure and rich real-world datasets to train and deploy state-of-the-art foundation models. Our work spans the full spectrum of robotics intelligence – from multimodal perception using images, videos, and sensor data, to sophisticated manipulation strategies that can handle diverse real-world scenarios. We're building systems that don't just work in the lab, but scale to meet the demands of Amazon's global operations. Join us if you're excited about pushing the boundaries of what's possible in robotics, working with world-class researchers, and seeing your innovations deployed at unprecedented scale.
US, WA, Seattle
Are you excited to help customers discover the hottest and best reviewed products? The Discovery Tech team helps customers discover and engage with new, popular and relevant products across Amazon worldwide. We do this by combining technology, science, and innovation to build new customer-facing features and experiences alongside advanced tools for marketers. You will be responsible for creating and building critical services that automatically generate, target, and optimize Amazon’s cross-category marketing and merchandising. Through the enablement of intelligent marketing campaigns that leverage machine-learning models, you will help to deliver the best possible shopping experience for Amazon’s customers all over the globe. We are looking for analytical problem solvers who enjoy diving into data, excited about data science and statistics, can multi-task, and can credibly interface between engineering teams and business stakeholders. Your analytical abilities, business understanding, and technical savvy will be used to identify specific and actionable opportunities to solve existing business problems and look around corners for future opportunities. Your domain spans the design, development, testing, and deployment of data-driven and highly scalable machine learning solutions in product recommendation. As an Applied Scientist, you bring business and industry context to science and technology decisions. You set the standard for scientific excellence and make decisions that affect the way we build and integrate algorithms. Your solutions are exemplary in terms of algorithm design, clarity, model structure, efficiency, and extensibility. You tackle intrinsically hard problems, acquiring expertise as needed. You decompose complex problems into straightforward solutions. To know more about Amazon science, please visit https://www.amazon.science
JP, 13, Tokyo
At Amazon, we are excited to offer students the opportunity to launch into big careers with limitless possibilities. We are looking for hands-on, creative, detail-oriented, analytical, and highly motivated talent. You will work with various stakeholders, including the global tech team, sales, vendor/account management teams, and other Amazon business partners to delight our customers.

Key job responsibilities
Amazon Science gives insight into the company’s approach to customer-obsessed scientific innovation. Amazon fundamentally believes that scientific innovation is essential to being the most customer-centric company in the world. It’s the company’s ability to have an impact at scale that allows us to attract some of the brightest minds in artificial intelligence and related fields. Amazon scientists use our working backwards method to enrich the way we live and work.

A day in the life
Come teach us a few things, and we’ll teach you a few things as we navigate the most customer-centric company on Earth.
IN, TS, Hyderabad
Do you want to join an innovative team of scientists who leverage machine learning and statistical techniques to revolutionize how businesses discover and purchase products on Amazon? Are you passionate about building intelligent systems that understand and predict complex B2B customer needs? The Amazon Business team is looking for exceptional Applied Scientists to help shape the future of B2B commerce. Amazon Business is one of Amazon's fastest-growing initiatives, focused on serving business customers, from individual professionals to large institutions, with unique and complex purchasing needs. Our customers require sophisticated solutions that go beyond traditional B2C experiences, including bulk purchasing, approval workflows, and business-grade service support.

The AB-MSET Applied Science team focuses on building intelligent systems for delivering personalized, contextual service experiences throughout the customer lifecycle. We apply advanced machine learning techniques to develop sophisticated intent detection models for business customer service needs; create intelligent matching algorithms for optimal service routing based on multiple variables, including customer value, maturity, effort, and issue complexity; build predictive models to enable proactive service interventions; design recommendation systems for self-service solutions; and develop ML models for automated service resolution.

As an Applied Scientist on the team, you will design and develop state-of-the-art ML models for service intent classification, routing optimization, and customer experience personalization. You will analyze large-scale business customer interaction data to identify patterns and opportunities for automation, create scalable solutions for complex B2B service scenarios using advanced ML techniques, and work closely with engineering teams to implement and deploy models in production.
You will collaborate with business stakeholders to identify opportunities for ML applications, establish automated processes for model development, validation, and maintenance, lead research initiatives to advance the state-of-the-art in B2B service science, and mentor other scientists and engineers in applying ML techniques to business problems.
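As one illustration of the multi-variable routing problem described above, the sketch below scores candidate service queues by a weighted sum of request features and picks the best match. The queue names, features, and weights are hypothetical, invented purely for illustration; they are not drawn from the actual AB-MSET models.

```python
# Hypothetical sketch of multi-variable service routing: each queue has a
# weight vector, each request a feature vector, and the request is sent to
# the queue with the highest weighted score. Illustrative only.

def route_request(features, queue_weights):
    """Return the queue whose weight vector best matches the request.

    features: dict mapping feature name -> value (e.g. in [0, 1])
    queue_weights: dict mapping queue name -> {feature name: weight}
    """
    def score(weights):
        # Features absent from a queue's weight vector contribute zero.
        return sum(weights.get(f, 0.0) * v for f, v in features.items())
    return max(queue_weights, key=lambda q: score(queue_weights[q]))

# Invented queues: self-service is favored for simple issues (via the
# constant "bias" feature), specialists for complex, high-value requests.
queues = {
    "self_service": {"bias": 0.5, "issue_complexity": -1.0},
    "specialist": {"issue_complexity": 1.0, "customer_value": 0.5},
}

# A complex request from a high-value customer routes to a specialist.
print(route_request(
    {"bias": 1.0, "issue_complexity": 0.9, "customer_value": 0.8},
    queues,
))  # -> specialist
```

A production router would of course learn these weights from interaction data rather than hand-code them; the sketch only shows the scoring-and-argmax shape of the matching step.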
US, WA, Bellevue
Amazon Leo is an initiative to increase global broadband access through a constellation of 3,236 satellites in low Earth orbit (LEO). Its mission is to help close the digital divide by delivering fast, affordable broadband to unserved and underserved communities around the world, including consumers, businesses, government agencies, and other organizations operating in places without reliable connectivity. Do you get excited by aerospace, space exploration, and/or satellites? Do you want to help build solutions at Amazon Leo to transform the space industry? If so, then we would love to talk!

Key job responsibilities
- Work cross-functionally with product, business development, and various technical teams (engineering, science, simulations, etc.) to execute on the long-term vision, strategy, and architecture for the science-based global demand forecast.
- Design and deliver modern, flexible, scalable solutions to integrate data from a variety of sources and systems (both internal and external) and develop bandwidth-usage models at granular temporal and geographic grains, deployable to Leo traffic management systems.
- Work closely with the capacity planning science team to ensure that demand forecasts feed seamlessly into their systems to deliver continuous optimization of resources.
- Lead short- and long-term technical roadmap definition efforts to deliver solutions that meet business needs in pre-launch, early-launch, and mature business phases.
- Synthesize and communicate insights and recommendations to audiences of varying levels of technical sophistication to drive change across Amazon Leo.

Export Control Requirement: Due to applicable export control laws and regulations, candidates must be a U.S. citizen or national, U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum.
About the team
The Amazon Leo Global Demand Planning team's mission is to map customer demand across space and time. We enable Amazon Leo's long-term success by delivering actionable insights and scientific forecasts across geographies and customer segments to empower long-range planning, capacity simulations, business strategy, and hardware manufacturing recommendations through scalable tools and durable mechanisms.
US, CA, Pasadena
Do you enjoy solving challenging problems and driving innovations in research? As a Research Science intern with the Quantum Algorithms Team at the AWS Center for Quantum Computing (CQC), you will work alongside global experts to develop novel quantum algorithms, evaluate prospective applications of fault-tolerant quantum computers, and strengthen the long-term value proposition of quantum computing. A strong candidate will have experience applying methods of mathematical and numerical analysis to assess the performance of quantum algorithms and establish their advantage over classical algorithms.

Key job responsibilities
We are particularly interested in candidates with expertise in any of the following subareas related to quantum algorithms: quantum chemistry, many-body physics, quantum machine learning, cryptography, optimization theory, quantum complexity theory, quantum error correction and fault tolerance, quantum sensing, and scientific computing, among others.

A day in the life
Throughout your journey, you'll have access to unparalleled resources, including state-of-the-art computing infrastructure, cutting-edge research papers, and mentorship from industry luminaries. This immersive experience will not only sharpen your technical skills but also cultivate your ability to think critically, communicate effectively, and thrive in a fast-paced, innovative environment where bold ideas are celebrated.

Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.

Why AWS?
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform.
We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture
Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.

Mentorship & Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a well-rounded professional.

Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.

Hybrid Work
We value innovation and recognize this sometimes requires uninterrupted time to focus on a build. We also value in-person collaboration and time spent face-to-face. Our team affords employees options to work in the office every day or in a flexible, hybrid work model near one of our U.S. Amazon offices. This is not a remote internship opportunity.

About the team
The Amazon Web Services (AWS) Center for Quantum Computing (CQC) is a multi-disciplinary team of theoretical and experimental physicists, materials scientists, and hardware and software engineers on a mission to develop a fault-tolerant quantum computer.
US, CA, Pasadena
We’re on the lookout for the curious, those who think big and want to define the world of tomorrow. At Amazon, you will grow into the high-impact, visionary person you know you’re ready to be. Every day will be filled with exciting new challenges, developing new skills, and achieving personal growth. How often can you say that your work changes the world? At Amazon, you’ll say it often. Join us and define tomorrow. The Amazon Web Services (AWS) Center for Quantum Computing (CQC) in Pasadena, CA, is looking for a Quantum Research Scientist Intern in the Device and Architecture Theory group. You will be joining a multi-disciplinary team of scientists, engineers, and technicians, all working at the forefront of quantum computing to innovate for the benefit of our customers.

Key job responsibilities
As an intern with the Device and Architecture Theory team, you will conduct pathfinding theoretical research to inform the development of next-generation quantum processors. Potential focus areas include device physics of superconducting circuits, novel qubits and gate schemes, and physical implementations of error-correcting codes. You will work closely with both theorists and experimentalists to explore these directions. We are looking for candidates with excellent problem-solving and communication skills who are eager to work collaboratively in a team environment. Amazon Science gives you insight into the company’s approach to customer-obsessed scientific innovation. Amazon fundamentally believes that scientific innovation is essential to being the most customer-centric company in the world. It’s the company’s ability to have an impact at scale that allows us to attract some of the brightest minds in quantum computing and related fields. Our scientists continue to publish, teach, and engage with the academic community, in addition to utilizing our working backwards method to enrich the way we live and work.

A day in the life

Why AWS?
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. AWS Utility Computing (UC) provides product innovations — from foundational services such as Amazon’s Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2) to consistently released new product innovations that continue to set AWS’s services and features apart in the industry. As a member of the UC organization, you’ll support the development and management of Compute, Database, Storage, Internet of Things (IoT), Platform, and Productivity Apps services in AWS. Within AWS UC, Amazon Dedicated Cloud (ADC) roles engage with AWS customers who require specialized security solutions for their cloud services.

Inclusive Team Culture
Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.

Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.

Mentorship & Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a well-rounded professional.

Work/Life Balance
We value work-life harmony.
Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.

Export Control Requirement: Due to applicable export control laws and regulations, candidates must be either a U.S. citizen or national, a U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum, or be able to obtain a U.S. export license. If you are unsure if you meet these requirements, please apply and Amazon will review your application for eligibility.
US, WA, Seattle
WW Amazon Stores Finance Science (ASFS) works to leverage science and economics to drive improved financial results, foster data-backed decisions, and embed science within Finance. ASFS is focused on developing products that empower controllership, improve business decisions and financial planning by understanding financial drivers, and innovate science capabilities for efficiency and scale. We are looking for a data scientist to lead high-visibility initiatives for forecasting Amazon Stores' financials. You will develop new science-based forecasting methodologies and build scalable models to improve financial decision making and planning for senior leadership up to the VP and SVP level. You will build new ML and statistical models from the ground up that aim to transform financial planning for Amazon Stores. We prize creative problem solvers with the ability to draw on an expansive methodological toolkit to transform financial decision-making with science. The ideal candidate combines data-science acumen with strong business judgment. You have versatile modeling skills and are comfortable owning and extracting insights from data. You are excited to learn from and alongside seasoned scientists, engineers, and business leaders. You are an excellent communicator and effectively translate technical findings into business action.

Key job responsibilities
- Demonstrating thorough technical knowledge, effective exploratory data analysis, and model building using industry-standard ML models
- Working with technical and non-technical stakeholders across every step of the science project life cycle
- Collaborating with finance, product, data engineering, and software engineering teams to create production implementations for large-scale ML models
- Innovating by adapting new modeling techniques and procedures
- Presenting research results to our internal research community
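To give a flavor of the statistical forecasting work described above, here is a minimal sketch of simple exponential smoothing, a standard baseline for one-step-ahead time-series forecasts. The smoothing constant and the data are illustrative assumptions; actual ASFS models are not described in this posting.

```python
# Simple exponential smoothing: the forecast is an exponentially weighted
# average of past observations, with more weight on recent values.
# Alpha and the example series are illustrative only.

def ses_forecast(series, alpha=0.5):
    """Return a one-step-ahead forecast for the given series.

    series: list of floats (most recent observation last)
    alpha: smoothing constant in (0, 1]; higher = more weight on recent data
    """
    if not series:
        raise ValueError("series must be non-empty")
    level = series[0]
    for x in series[1:]:
        # New level blends the latest observation with the running estimate.
        level = alpha * x + (1 - alpha) * level
    return level

# A flat series forecasts its constant value.
print(ses_forecast([100.0, 100.0, 100.0]))  # -> 100.0
```

In practice a forecasting team would fit alpha (and trend/seasonality terms) to held-out data and compare against richer ML models; the point here is only the shape of the baseline.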