Alexa at five: Looking back, looking forward

Today is the fifth anniversary of the launch of the Amazon Echo, so in a talk I gave yesterday at the Web Summit in Lisbon, I looked at how far Alexa has come and where we’re heading next.

This poster of the original Echo device, signed by the scientists and engineers who helped make it possible, hangs in Rohit's office.

Amazon’s mission is to be the earth’s most customer-centric company. With that mission in mind and the Star Trek computer as an inspiration, on November 6, 2014, a small multidisciplinary team launched Amazon Echo, with the aspiration of revolutionizing daily convenience for our customers using artificial intelligence (AI).

Before Echo ushered in the convenience of voice-enabled ambient computing, customers were used to searches on desktops and mobile phones, where the onus was entirely on them to sift through blue links to find answers to their questions or connect to services. While app stores on phones offered “there’s an app for that” convenience, the cognitive load on customers continued to increase.

Alexa-powered Echo broke these human-machine interaction paradigms, shifting the cognitive load from customers to AI and causing a tectonic shift in how customers interact with a myriad of services, find information on the Web, control smart appliances, and connect with other people.

Enhancements in foundational components of Alexa

In order to be magical at the launch of Echo, Alexa needed to be great at four fundamental AI tasks:

  1. Wake word detection: On the device, detect the keyword “Alexa” to get the AI’s attention;
  2. Automatic speech recognition (ASR): Upon detecting the wake word, convert audio streamed to the Amazon Web Services (AWS) cloud into words;
  3. Natural-language understanding (NLU): Extract the meaning of the recognized words so that Alexa can take the appropriate action in response to the customer’s request; and
  4. Text-to-speech synthesis (TTS): Convert Alexa’s textual response to the customer’s request into spoken audio.
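
To make the division of labor concrete, here is a minimal, hypothetical sketch of how the four stages compose. The stand-in functions below return canned values and are purely illustrative; they are not Alexa's actual components.

```python
# Hypothetical sketch of the four-stage pipeline: wake word -> ASR -> NLU -> TTS.
# The stand-in models below return canned values; they only show the data flow.

def detect_wake_word(audio_frame):
    """On-device keyword spotting: act only when the wake word score clears a threshold."""
    score = sum(audio_frame) / max(len(audio_frame), 1)  # stand-in for a small neural model
    return score > 0.9

def recognize_speech(audio_stream):
    """Cloud ASR stand-in: convert streamed audio into words."""
    return "play jazz in the kitchen"

def understand(transcript):
    """NLU stand-in: extract intent and slots from the recognized words."""
    return {"intent": "PlayMusic", "slots": {"genre": "jazz", "room": "kitchen"}}

def synthesize(text):
    """TTS stand-in: turn Alexa's textual response into audio."""
    return text.encode("utf-8")

def handle_turn(audio_frame, audio_stream):
    if not detect_wake_word(audio_frame):
        return None
    transcript = recognize_speech(audio_stream)
    interpretation = understand(transcript)
    response_text = f"Playing {interpretation['slots']['genre']} in the {interpretation['slots']['room']}."
    return synthesize(response_text)

print(handle_turn([0.95, 0.97], [0.1, 0.2]))
```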

Over the past five years, we have continued to advance each of these foundational components. In both wake word and ASR, we’ve seen fourfold reductions in recognition errors. In NLU, the error reduction has been threefold — even though the range of utterances that NLU processes, and the range of actions Alexa can take, have both increased dramatically. And in listener studies that use the MUSHRA audio perception methodology, we’ve seen an 80% reduction in the naturalness gap between Alexa’s speech and human speech.

Our overarching strategy for Alexa’s AI has been to combine machine learning (ML) — in particular, deep learning — with the large-scale data and computational resources available through AWS. But these performance improvements are the result of research on a variety of specific topics that extend deep learning, including

  • semi-supervised learning, or using a combination of unlabeled and labeled data to improve the ML system;
  • active learning, or the learning strategy where the ML system selects more-informative samples to receive manual labels;
  • large-scale distributed training, or parallelizing ML-based model training for efficient learning on a large corpus; and
  • context-aware modeling, or using a wide variety of information — including the type of device where a request originates, skills the customer uses or has enabled, and past requests — to improve accuracy.
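
As one illustration of these techniques, the sketch below shows the core of uncertainty-based active learning: rank unlabeled utterances by the model's uncertainty and send only the most informative ones for manual labeling. The toy classifier and utterances are assumptions for the example, not Alexa's models or data.

```python
import math

def entropy(probs):
    """Prediction entropy: higher means the model is less certain."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_labeling(unlabeled_utterances, predict_proba, budget=2):
    """Rank unlabeled utterances by model uncertainty; keep the top `budget` for annotation."""
    scored = [(entropy(predict_proba(u)), u) for u in unlabeled_utterances]
    scored.sort(reverse=True)
    return [u for _, u in scored[:budget]]

# Stand-in intent classifier returning a probability distribution over three intents.
def toy_predict_proba(utterance):
    if "play" in utterance:
        return [0.90, 0.05, 0.05]      # confident prediction
    return [0.40, 0.35, 0.25]          # uncertain: a good candidate for labeling

pool = ["play jazz", "turn it up a bit", "do the thing", "play my playlist"]
print(select_for_labeling(pool, toy_predict_proba))
```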

For more coverage of the anniversary of the Echo's launch, see "Alexa, happy birthday" on Amazon's Day One blog.

Customer impact

From Echo’s launch in November 2014 to now, we have gone from zero customer interactions with Alexa to billions per week. Customers now interact with Alexa in 15 language variants and more than 80 countries.

Through the Alexa Voice Service and the Alexa Skills Kit, we have democratized conversational AI. These self-serve APIs and toolkits let developers integrate Alexa into their devices and create custom skills. Alexa is now available on hundreds of different device types. There are more than 85,000 smart-home products that can be controlled with Alexa, from more than 9,500 unique brands, and third-party developers have built more than 100,000 custom skills.

Ongoing research in conversational AI

Alexa’s success doesn’t mean that conversational AI is a solved problem. On the contrary, we’ve just scratched the surface of what’s possible. We’re working hard to make Alexa …

1. More self-learning

Our scientists and engineers are making Alexa smarter faster by reducing reliance on supervised learning (i.e., building ML models on manually labeled data). A few months back, we announced that we’d trained a speech recognition system on a million hours of unlabeled speech using the teacher-student paradigm of deep learning. This technology is now in production for UK English, where it has improved the accuracy of Alexa’s speech recognizers, and we’re working to apply it to all language variants.

In the teacher-student paradigm of deep learning, a powerful but impractically slow teacher model is trained on a small amount of hand-labeled data, and it in turn annotates a much larger body of unlabeled data to train a leaner, more efficient student model.
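
Stripped to its essentials, the teacher-student recipe in the figure looks something like the NumPy sketch below: a teacher model (assumed to be already trained on a small hand-labeled set) produces soft targets on a large unlabeled pool, and a leaner student is trained to match them. Logistic-regression models stand in for the deep networks here.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# "Teacher": assume it was already trained on a small hand-labeled set.
teacher_w = np.array([2.0, -1.0])

# Large pool of *unlabeled* feature vectors (acoustic features, in the real system).
unlabeled_x = rng.normal(size=(5000, 2))

# Step 1: the teacher annotates the unlabeled pool with soft targets.
soft_targets = sigmoid(unlabeled_x @ teacher_w)

# Step 2: train a leaner "student" to match the teacher's soft targets.
student_w = np.zeros(2)
lr = 0.1
for _ in range(200):
    preds = sigmoid(unlabeled_x @ student_w)
    grad = unlabeled_x.T @ (preds - soft_targets) / len(unlabeled_x)
    student_w -= lr * grad

print("teacher:", teacher_w, "student:", student_w)  # the student approaches the teacher
```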

This year, we introduced a new self-learning paradigm that enables Alexa to automatically correct ASR and NLU errors without any human annotator in the loop. In this novel approach, we use ML to detect potentially unsatisfactory interactions with Alexa through signals such as the customer’s barging in on (i.e., interrupting) Alexa. Then, a graphical model trained on customers’ paraphrases of their requests automatically revises failing requests into semantically equivalent forms that work.

For example, “play Sirius XM Chill” used to fail, but from customer rephrasing, Alexa has learned that “play Sirius XM Chill” is equivalent to “play Sirius Channel 53” and automatically corrects the failing variant.
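
A drastically simplified sketch of that correction step is shown below: detect a likely defect from implicit signals such as barge-ins, then substitute a rephrase learned from customer paraphrases. The rewrite table and defect detector here are illustrative stand-ins, not the graphical model used in production.

```python
# Hypothetical rewrite table learned from customer paraphrases (illustrative values only).
LEARNED_REWRITES = {
    "play sirius xm chill": "play sirius channel 53",
}

def likely_defective(barge_in: bool, replay_within_seconds: bool) -> bool:
    """Stand-in defect detector: implicit signals such as barging in suggest a failed response."""
    return barge_in or replay_within_seconds

def self_correct(request: str, barge_in: bool = False, replay: bool = False) -> str:
    key = request.lower().strip()
    if likely_defective(barge_in, replay) and key in LEARNED_REWRITES:
        return LEARNED_REWRITES[key]
    return request

print(self_correct("Play Sirius XM Chill", barge_in=True))  # -> "play sirius channel 53"
```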

Using this implicit learning technique and occasional explicit feedback from customers — e.g., “did you want/mean … ?” — Alexa is now self-correcting millions of defects per week.

2. More natural

In 2015, when the first third-party skills began to appear, customers had to invoke them by name — e.g., “Alexa, ask Lyft to get me a ride to the airport.” With tens of thousands of custom skills now available, however, discovering skills by voice and remembering their names can be difficult, a challenge unique to Alexa.

To address this challenge, we have been exploring deep-learning-based name-free skill interaction to make skill discovery and invocation seamless. For several thousand skills, customers can simply issue a request — “Alexa, get me a ride to the airport” — and Alexa uses information about the customer’s context and interaction history to decide which skill to invoke.
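
Conceptually, name-free invocation is a ranking problem: score the candidate skills for an utterance given the customer's context and pick the best one. The sketch below uses simple keyword overlap plus a usage-history bonus in place of the deep model; the skill catalog and scores are made up.

```python
def score_skill(utterance: str, skill: dict, history: set) -> float:
    """Toy relevance score: keyword overlap with the skill's examples plus a usage-history bonus."""
    words = set(utterance.lower().split())
    overlap = max(len(words & set(example.lower().split())) for example in skill["examples"])
    bonus = 1.0 if skill["name"] in history else 0.0
    return overlap + bonus

def route(utterance: str, skills: list, history: set) -> str:
    """Pick the most relevant skill for a name-free request."""
    return max(skills, key=lambda s: score_skill(utterance, s, history))["name"]

skills = [
    {"name": "RideShareSkill", "examples": ["get me a ride to the airport", "call me a car"]},
    {"name": "WeatherSkill",   "examples": ["what is the weather", "will it rain today"]},
]
print(route("get me a ride to the airport", skills, history={"RideShareSkill"}))
```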

Another way we’ve made interacting with Alexa more natural is by enabling her to handle compound requests, such as “Alexa, turn down the lights and play music”. Among other innovations, this required more efficient techniques for training semantic parsers, which analyze both the structure of a sentence and the meanings of its parts.
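
To illustrate what a compound request has to be broken into, here is a deliberately naive sketch that splits one utterance into two intents by looking for a conjunction. A real semantic parser produces a structured analysis of the whole sentence rather than splitting on "and", so treat this only as a picture of the desired output.

```python
def parse_compound(utterance: str):
    """Naive stand-in for a semantic parser: split on 'and', map each clause to an intent."""
    clauses = [clause.strip() for clause in utterance.lower().split(" and ")]
    parses = []
    for clause in clauses:
        if "light" in clause:
            parses.append({"intent": "SetLightLevel",
                           "direction": "down" if "down" in clause else "up"})
        elif "music" in clause or "play" in clause:
            parses.append({"intent": "PlayMusic"})
    return parses

print(parse_compound("turn down the lights and play music"))
# [{'intent': 'SetLightLevel', 'direction': 'down'}, {'intent': 'PlayMusic'}]
```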

Alexa’s responses are also becoming more natural. This year, we began using neural networks for text-to-speech synthesis. This not only results in more-natural-sounding speech but also makes it much easier to adapt Alexa’s TTS system to different speaking styles — a newscaster style for reading the news, a DJ style for announcing songs, or even celebrity voices, like Samuel L. Jackson’s.

3. More knowledgeable

Every day, Alexa answers millions of questions that she’s never been asked before, an indication of customers’ growing confidence in Alexa’s question-answering ability.

The core of Alexa’s knowledge base is a knowledge graph, which encodes billions of facts and has grown 20-fold over the past five years. But Alexa also draws information from hundreds of other sources.
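
At its simplest, a knowledge graph of this kind is a large collection of (subject, relation, object) facts that can be traversed to answer questions, as in the tiny sketch below (the triples are illustrative).

```python
# Tiny illustrative knowledge graph: (subject, relation, object) triples.
TRIPLES = [
    ("Duke Ellington", "profession", "jazz composer"),
    ("Duke Ellington", "born_in", "Washington, D.C."),
    ("Washington, D.C.", "country", "United States"),
]

def query(subject: str, relation: str):
    """Return all objects linked to `subject` by `relation`."""
    return [o for s, r, o in TRIPLES if s == subject and r == relation]

def two_hop(subject: str, first_relation: str, second_relation: str):
    """Follow two relations, e.g. 'In which country was Duke Ellington born?'"""
    return [o2 for o1 in query(subject, first_relation) for o2 in query(o1, second_relation)]

print(query("Duke Ellington", "born_in"))                # ['Washington, D.C.']
print(two_hop("Duke Ellington", "born_in", "country"))   # ['United States']
```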

And now, customers are helping Alexa learn through Alexa Answers, an online interface that lets people add to Alexa’s knowledge. In a private beta test and the first month of public release, Alexa customers have furnished Alexa Answers with hundreds of thousands of new answers, which have been shared with customers millions of times.

4. More context-aware and proactive

Today, through an optional feature called Hunches, Alexa can learn how you interact with your smart home and suggest actions when she senses that devices such as lights, locks, switches, and plugs are not in the states that you prefer. We are currently expanding the notion of Hunches to include another Alexa feature called Routines. If you set your alarm for 6:00 a.m. every day, for example, and on waking, you immediately ask for the weather, Alexa will suggest creating a Routine that sets the weekday alarm to 6:00 and plays the weather report as soon as the alarm goes off.

Earlier this year, we launched Alexa Guard, a feature that you can activate when you leave the house. If your Echo device detects the sound of a smoke alarm, a carbon monoxide alarm, or glass breaking, Alexa Guard sends you an alert. Guard’s acoustic-event-detection model uses multitask learning, which reduces the amount of labeled data needed for training and makes the model more compact.
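
Multitask learning in this setting means one shared acoustic encoder feeding a small detection head per sound class, so most parameters are shared across events. The NumPy sketch below shows only the forward pass with random weights; it is an assumption-laden illustration, not the production Guard model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder weights and one small detection head per acoustic event.
ENCODER_W = rng.normal(size=(40, 16))        # 40-dim audio features -> 16-dim shared embedding
HEADS = {
    "smoke_alarm":    rng.normal(size=16),
    "glass_breaking": rng.normal(size=16),
}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def detect_events(audio_features: np.ndarray) -> dict:
    """Forward pass: the shared embedding feeds every task-specific head."""
    shared = np.tanh(audio_features @ ENCODER_W)          # parameters shared across tasks
    return {event: float(sigmoid(shared @ w)) for event, w in HEADS.items()}

print(detect_events(rng.normal(size=40)))
```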

This fall, we will begin previewing an extended version of Alexa Guard that recognizes additional sounds associated with activity, such as footsteps, talking, coughing, or doors closing. Customers can also create Routines that include Guard — activating Guard automatically during work hours, for instance.

5. More conversational

Customers want Alexa to do more for them than complete one-shot requests like “Alexa, play Duke Ellington” or “Alexa, what’s the weather?” This year, we have improved Alexa’s ability to carry context from one request to another, the way humans do in conversation.

For instance, if an Alexa customer asks, “When is The Addams Family playing at the Bijou?” and then follows up with the question “Is there a good Mexican restaurant near there?”, Alexa needs to know that “there” refers to the Bijou. Some of our recent work in this area won one of the two best-paper awards at the Association for Computational Linguistics’ Workshop on Natural-Language Processing for Conversational AI. The key idea is to jointly model the salient entities with transformer networks that use a self-attention mechanism.
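
Self-attention is the mechanism that lets every token in the dialogue so far attend to every other token, which is how a reference like "there" can pick up information from "the Bijou." The bare-bones NumPy sketch below implements a single untrained attention layer to show the mechanics, not the published model.

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(x: np.ndarray) -> np.ndarray:
    """One single-head self-attention layer: each token mixes in information from all others."""
    d = x.shape[1]
    wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(d)                                   # token-to-token affinities
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return weights @ v                                              # contextualized representations

# Toy embeddings for the tokens of both turns, e.g. "... Bijou ... there ..."
tokens = rng.normal(size=(12, 8))
contextualized = self_attention(tokens)
print(contextualized.shape)   # (12, 8): each token now carries context from the whole dialogue
```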

However, completing complex tasks that require back-and-forth interaction and anticipation of the customer’s latent goals is still a challenging problem. For example, a customer using Alexa to plan a night out would have to use different skills to find a movie, a restaurant near the theater, and a ride-sharing service, coordinating times and locations.

We are currently testing a new deep-learning-based technology, called Alexa Conversations, with a small group of skill developers who are using it to build high-quality multiturn experiences with minimal effort. The developer supplies Alexa Conversations with a set of sample dialogues, and a simulator expands it into 100 times as much data. Alexa Conversations then uses that data to train a bleeding-edge deep-learning model to predict dialogue actions, without the need for a priori hand-authored rules.

Dialogue management involves tracking the values of "slots", such as time and location, throughout a conversation. Here, blue arrows indicate slots whose values must be updated across conversational turns.
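
Reduced to its simplest form, the slot tracking in the figure is a dialogue state carried across turns, with each new turn filling or revising slots. The example below is an illustrative reduction, not how Alexa Conversations' learned dialogue model actually represents state.

```python
def update_state(state: dict, turn_slots: dict) -> dict:
    """Carry dialogue state forward, letting the newest turn overwrite earlier slot values."""
    new_state = dict(state)
    new_state.update({k: v for k, v in turn_slots.items() if v is not None})
    return new_state

state = {}
state = update_state(state, {"movie": "The Addams Family", "theater": "Bijou", "time": "7 pm"})
state = update_state(state, {"restaurant_cuisine": "Mexican", "location": state.get("theater")})
state = update_state(state, {"time": "9:30 pm"})   # a later turn revises the time slot

print(state)
# {'movie': 'The Addams Family', 'theater': 'Bijou', 'time': '9:30 pm',
#  'restaurant_cuisine': 'Mexican', 'location': 'Bijou'}
```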

At re:MARS, we demonstrated a new Night Out planning experience that uses Alexa Conversations technology and novel skill-transitioning algorithms to automatically coordinate conversational planning tasks across multiple skills.

We’re also adapting Alexa Conversations technology to the new concierge feature for Ring video doorbells. With this technology, the doorbell can engage in short conversations on your behalf, taking messages or telling a delivery person where to leave a package. We’re working hard to bring both of these experiences to customers.

What will the next five years look like?

Five years ago, it was inconceivable to us that customers would be interacting with Alexa billions of times per week and that developers would, on their own, build 100,000-plus skills. Such adoption is inspiring our teams to invent at an even faster pace, creating novel experiences that will increase utility and further delight our customers.

1. Alexa everywhere

The Echo family of devices and Alexa’s integration into third-party products have made Alexa a part of millions of homes worldwide. We have been working hard to bring Alexa, which revolutionized daily convenience in the home, to our customers on the go. Echo Buds, Echo Auto, and the Day 1 Editions of Echo Loop and Echo Frames are already demonstrating that Alexa-on-the-go can simplify our lives even further.

With greater portability comes greater risk of slow or lost Internet connections. Echo devices with built-in smart-home hubs already have a hybrid mode, which allows them to do some spoken-language processing when they can’t rely on Alexa’s cloud-based models. This is an important area of ongoing research for us. For instance, we are investigating new techniques for compressing Alexa’s machine learning models so that they can run on-device.
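
One widely used family of compression techniques is post-training quantization: store weights as 8-bit integers plus a scale factor rather than 32-bit floats, roughly quartering the model's memory footprint. The sketch below is a generic illustration, not a statement of what runs on Echo devices.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map float32 weights to int8 plus a per-tensor scale (roughly 4x smaller)."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
error = np.abs(w - dequantize(q, scale)).max()
print(f"bytes: {w.nbytes} -> {q.nbytes}, max abs error: {error:.4f}")
```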

The new on-the-go hardware isn’t the only way that Alexa is becoming more portable. The new Guest Connect experience allows you to log into your Alexa account from any Echo device — even ones you don’t own — and play your music or preferred news.

2. Moving up the AI stack

Alexa’s unparalleled customer and developer adoption provides new challenges for AI research. In particular, to further shift the cognitive load from customers to AI, we must move up the AI stack, from predictions (e.g., extracting customers’ intents) to more contextual reasoning.

One of our goals is to seamlessly connect disparate skills to increase convenience for our customers. Alexa Conversations and the Night Out experience are the first steps in that direction, completing complex tasks across multiple services and skills.

To enable the same kind of interoperability across different AIs, we helped found the Voice Interoperability Initiative, a consortium of dozens of tech companies uniting to promote customer choice by supporting multiple, interoperable voice services on a single device.

Alexa will also make better decisions by factoring in more information about the customer’s context and history. For instance, when a customer asks an Alexa-enabled device in a hotel room “Alexa, what are the pool hours?”, Alexa needs to respond with the hours for the hotel pool and not the community pool.

We are inspired by the success of learning directly from customers through the self-learning techniques I described earlier. This is an important area where we will continue to incorporate new signals, such as vocal frustration with Alexa, and learn from direct and indirect feedback to make Alexa more accurate.

3. Alexa for everyone

As AI systems like Alexa become an indispensable part of our social fabric, bias mitigation and fairness in AI will require even deeper attention. Our goal is for Alexa to work equally well for all our customers. In addition to our own research, we’ve entered into a three-year collaboration with the National Science Foundation to fund research on fairness in AI.

We envision a future where anyone can create conversational-AI systems. With the Alexa Skills Kit and Alexa Voice Service, we made it easy for developers to innovate using Alexa’s AI. Even end users can build personal skills within minutes using Alexa Skill Blueprints.

We are also thrilled with the Alexa Prize competition, which is democratizing conversational AI by letting university students perform state-of-the-art research at scale. University teams are working on the ultimate conversational-AI challenge of creating socialbots that can converse coherently and engagingly with humans for 20 minutes on a range of current events and popular topics.

The third instance of the challenge is under way, and we are confident that the university teams will continue to push boundaries — perhaps even give their socialbots an original sense of humor, one of the hardest challenges in AI.

Together with developers and academic researchers, we’ve made great strides in conversational AI. But there’s so much more to be accomplished. While the future is difficult to predict, one thing I am sure of is that the Alexa team will continue to invent on behalf of our customers.
