Alexa at five: Looking back, looking forward

Today is the fifth anniversary of the launch of the Amazon Echo, so in a talk I gave yesterday at the Web Summit in Lisbon, I looked at how far Alexa has come and where we’re heading next.

This poster of the original Echo device, signed by the scientists and engineers who helped make it possible, hangs in Rohit's office.

Amazon’s mission is to be the earth’s most customer-centric company. With that mission in mind and the Star Trek computer as an inspiration, on November 6, 2014, a small multidisciplinary team launched Amazon Echo, with the aspiration of revolutionizing daily convenience for our customers using artificial intelligence (AI).

Before Echo ushered in the convenience of voice-enabled ambient computing, customers were used to searches on desktops and mobile phones, where the onus was entirely on them to sift through blue links to find answers to their questions or connect to services. While app stores on phones offered “there’s an app for that” convenience, the cognitive load on customers continued to increase.

Alexa-powered Echo broke these human-machine interaction paradigms, shifting the cognitive load from customers to AI and causing a tectonic shift in how customers interact with a myriad of services, find information on the Web, control smart appliances, and connect with other people.

Enhancements in foundational components of Alexa

For the Echo experience to feel magical at launch, Alexa needed to be great at four fundamental AI tasks:

  1. Wake word detection: On the device, detect the keyword “Alexa” to get the AI’s attention;
  2. Automatic speech recognition (ASR): Upon detecting the wake word, convert audio streamed to the Amazon Web Services (AWS) cloud into words;
  3. Natural-language understanding (NLU): Extract the meaning of the recognized words so that Alexa can take the appropriate action in response to the customer’s request; and
  4. Text-to-speech synthesis (TTS): Convert Alexa’s textual response to the customer’s request into spoken audio.
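As a rough sketch of how these four stages compose (every component below is a stand-in, not Alexa's actual implementation), a request flows through wake word detection, ASR, NLU, and TTS in sequence:

```python
def detect_wake_word(audio_frame: str) -> bool:
    # On-device keyword spotting: fires only on "Alexa".
    return "alexa" in audio_frame.lower()

def asr(audio: str) -> str:
    # Cloud-side speech recognition: audio -> text (stubbed as identity).
    return audio.lower().replace("alexa, ", "")

def nlu(text: str) -> dict:
    # Map recognized text to an intent and slots (toy rule).
    if text.startswith("play "):
        return {"intent": "PlayMusic", "slots": {"query": text[5:]}}
    return {"intent": "Unknown", "slots": {}}

def tts(response: str) -> bytes:
    # Text-to-speech: response text -> audio (stubbed as bytes).
    return response.encode("utf-8")

def handle(audio: str) -> bytes:
    if not detect_wake_word(audio):
        return b""  # stay silent without the wake word
    interpretation = nlu(asr(audio))
    if interpretation["intent"] == "PlayMusic":
        reply = "Playing " + interpretation["slots"]["query"]
    else:
        reply = "Sorry, I don't know that one."
    return tts(reply)

print(handle("Alexa, play jazz"))  # b'Playing jazz'
```

The real systems are deep-learning models, but the pipeline shape is the same: nothing downstream runs until the on-device detector fires.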

Over the past five years, we have continued to advance each of these foundational components. In both wake word and ASR, we’ve seen fourfold reductions in recognition errors. In NLU, the error reduction has been threefold — even though the range of utterances that NLU processes, and the range of actions Alexa can take, have both increased dramatically. And in listener studies that use the MUSHRA audio perception methodology, we’ve seen an 80% reduction in the naturalness gap between Alexa’s speech and human speech.

Our overarching strategy for Alexa’s AI has been to combine machine learning (ML) — in particular, deep learning — with the large-scale data and computational resources available through AWS. But these performance improvements are the result of research on a variety of specific topics that extend deep learning, including

  • semi-supervised learning, or using a combination of unlabeled and labeled data to improve the ML system;
  • active learning, or the learning strategy where the ML system selects more-informative samples to receive manual labels;
  • large-scale distributed training, or parallelizing ML-based model training for efficient learning on a large corpus; and
  • context-aware modeling, or using a wide variety of information — including the type of device where a request originates, skills the customer uses or has enabled, and past requests — to improve accuracy.
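To make the second of these concrete, here is a toy version of uncertainty-based active learning: the examples whose predicted probabilities sit closest to chance are the ones routed to human annotators. The classifier confidences below are invented for illustration.

```python
def select_for_labeling(pool, predict_proba, budget):
    # Rank unlabeled examples by how close the model's confidence is to
    # chance (0.5) and return the `budget` most uncertain ones.
    ranked = sorted(pool, key=lambda x: abs(predict_proba(x) - 0.5))
    return ranked[:budget]

# Hypothetical binary-classifier confidences on an unlabeled pool.
probs = {
    "play music": 0.97,
    "stop": 0.99,
    "play the thing from yesterday": 0.55,
    "turn down the loud one": 0.48,
}
picked = select_for_labeling(list(probs), probs.__getitem__, budget=2)
print(picked)  # ['turn down the loud one', 'play the thing from yesterday']
```

Manual labels bought this way teach the model more per annotation than labels on utterances it already handles confidently.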

For more coverage of the anniversary of the Echo's launch, see "Alexa, happy birthday" on Amazon's Day One blog.

Customer impact

From Echo’s launch in November 2014 to now, we have gone from zero customer interactions with Alexa to billions per week. Customers now interact with Alexa in 15 language variants and more than 80 countries.

Through the Alexa Voice Service and the Alexa Skills Kit, we have democratized conversational AI. These self-serve APIs and toolkits let developers integrate Alexa into their devices and create custom skills. Alexa is now available on hundreds of different device types. There are more than 85,000 smart-home products that can be controlled with Alexa, from more than 9,500 unique brands, and third-party developers have built more than 100,000 custom skills.

Ongoing research in conversational AI

Alexa’s success doesn’t mean that conversational AI is a solved problem. On the contrary, we’ve just scratched the surface of what’s possible. We’re working hard to make Alexa …

1. More self-learning

Our scientists and engineers are making Alexa smarter faster by reducing reliance on supervised learning (i.e., building ML models on manually labeled data). A few months back, we announced that we’d trained a speech recognition system on a million hours of unlabeled speech using the teacher-student paradigm of deep learning. This technology is now in production for UK English, where it has improved the accuracy of Alexa’s speech recognizers, and we’re working to apply it to all language variants.

LSTMnetworkanimationV3.gif._CB467045280_.gif
In the teacher-student paradigm of deep learning, a powerful but impractically slow teacher model is trained on a small amount of hand-labeled data, and it in turn annotates a much larger body of unlabeled data to train a leaner, more efficient student model.
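The core of the student's training objective can be written in a few lines: rather than matching hard labels, the student minimizes the cross-entropy against the teacher's soft output distribution. The three-class posteriors below are toy numbers, not real model outputs.

```python
import math

def soft_cross_entropy(teacher_probs, student_probs):
    # The student minimizes -sum_i p_teacher(i) * log p_student(i), so it
    # learns to match the teacher's soft labels on unlabeled audio.
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

teacher = [0.85, 0.10, 0.05]        # teacher posterior over 3 classes (toy)
good_student = [0.80, 0.12, 0.08]   # roughly matches the teacher
bad_student = [0.10, 0.10, 0.80]    # disagrees with the teacher

print(soft_cross_entropy(teacher, good_student) <
      soft_cross_entropy(teacher, bad_student))  # True
```

Because the loss needs only the teacher's outputs, not ground truth, it scales to the million hours of unlabeled speech mentioned above.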

This year, we introduced a new self-learning paradigm that enables Alexa to automatically correct ASR and NLU errors without any human annotator in the loop. In this novel approach, we use ML to detect potentially unsatisfactory interactions with Alexa through signals such as the customer’s barging in on (i.e., interrupting) Alexa. Then, a graphical model trained on customers’ paraphrases of their requests automatically revises failing requests into semantically equivalent forms that work.

For example, “play Sirius XM Chill” used to fail, but from customer rephrasing, Alexa has learned that “play Sirius XM Chill” is equivalent to “play Sirius Channel 53” and automatically corrects the failing variant.
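At serving time, the effect of such learned corrections is a rewrite step applied before the request is executed. The mapping below is a hypothetical one-entry table standing in for the learned paraphrase model:

```python
# Hypothetical rewrite table mined from customer rephrasings: a failing
# utterance maps to a semantically equivalent form that succeeds.
REWRITES = {"play sirius xm chill": "play sirius channel 53"}

def resolve(utterance: str) -> str:
    key = utterance.lower().strip()
    return REWRITES.get(key, utterance)  # pass through anything unknown

print(resolve("Play Sirius XM Chill"))  # play sirius channel 53
```
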

Using this implicit learning technique and occasional explicit feedback from customers — e.g., “did you want/mean … ?” — Alexa is now self-correcting millions of defects per week.

2. More natural

In 2015, when the first third-party skills began to appear, customers had to invoke them by name — e.g., “Alexa, ask Lyft to get me a ride to the airport.” With tens of thousands of custom skills now available, however, discovering skills by voice and remembering their names is difficult, a challenge unique to an interface operating at Alexa's scale.

To address this challenge, we have been exploring deep-learning-based name-free skill interaction to make skill discovery and invocation seamless. For several thousand skills, customers can simply issue a request — “Alexa, get me a ride to the airport” — and Alexa uses information about the customer’s context and interaction history to decide which skill to invoke.

Another way we’ve made interacting with Alexa more natural is by enabling her to handle compound requests, such as “Alexa, turn down the lights and play music”. Among other innovations, this required more efficient techniques for training semantic parsers, which analyze both the structure of a sentence and the meanings of its parts.
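A drastically simplified version of compound-request handling splits the utterance into clauses and maps each to an intent. A real semantic parser models the sentence's structure jointly rather than splitting on conjunctions, but the input/output contract looks like this:

```python
def parse_compound(utterance):
    # Toy intent inventory; real NLU covers far more surface forms.
    intents = {"turn down the lights": "DimLights", "play music": "PlayMusic"}
    return [intents.get(clause.strip(), "Unknown")
            for clause in utterance.lower().split(" and ")]

print(parse_compound("turn down the lights and play music"))
# ['DimLights', 'PlayMusic']
```
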

Alexa’s responses are also becoming more natural. This year, we began using neural networks for text-to-speech synthesis. This not only results in more-natural-sounding speech but makes it much easier to adapt Alexa’s TTS system to different speaking styles — a newscaster style for reading the news, a DJ style for announcing songs, or even celebrity voices, like Samuel L. Jackson’s.

3. More knowledgeable

Every day, Alexa answers millions of questions that she’s never been asked before, an indication of customers’ growing confidence in Alexa’s question-answering ability.

The core of Alexa’s knowledge base is a knowledge graph, which encodes billions of facts and has grown 20-fold over the past five years. But Alexa also draws information from hundreds of other sources.

And now, customers are helping Alexa learn through Alexa Answers, an online interface that lets people add to Alexa’s knowledge. In a private beta test and the first month of public release, Alexa customers have furnished Alexa Answers with hundreds of thousands of new answers, which have been shared with customers millions of times.

4. More context-aware and proactive

Today, through an optional feature called Hunches, Alexa can learn how you interact with your smart home and suggest actions when she senses that devices such as lights, locks, switches, and plugs are not in the states that you prefer. We are currently expanding the notion of Hunches to include another Alexa feature called Routines. If you set your alarm for 6:00 a.m. every day, for example, and on waking, you immediately ask for the weather, Alexa will suggest creating a Routine that sets the weekday alarm to 6:00 and plays the weather report as soon as the alarm goes off.
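In spirit, a Hunch is a comparison between a device's current state and the state the customer usually leaves it in. The preference table below is hypothetical; the real feature learns these preferences from usage:

```python
# Hypothetical learned preferences: states the customer usually leaves
# devices in around bedtime.
preferred_at_bedtime = {"front_door": "locked", "living_room_light": "off"}

def hunches(current_states):
    # Suggest a nudge for any known device not in its usual state.
    return ["Your {} is {}.".format(dev.replace("_", " "), state)
            for dev, state in current_states.items()
            if preferred_at_bedtime.get(dev) not in (None, state)]

print(hunches({"front_door": "unlocked", "living_room_light": "off"}))
# ['Your front door is unlocked.']
```
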

Earlier this year, we launched Alexa Guard, a feature that you can activate when you leave the house. If your Echo device detects the sound of a smoke alarm, a carbon monoxide alarm, or glass breaking, Alexa Guard sends you an alert. Guard’s acoustic-event-detection model uses multitask learning, which reduces the amount of labeled data needed for training and makes the model more compact.
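The structural idea behind multitask learning here is one shared feature extractor feeding a small head per event type, so labeled data for any one event also improves the shared front end. The features and thresholds below are crude stand-ins for learned components:

```python
def shared_encoder(frame):
    # Stand-in for a learned embedding: two crude signal statistics.
    return (sum(frame) / len(frame), max(frame) - min(frame))

def make_head(threshold):
    # Each event type gets its own tiny classifier over shared features.
    return lambda feats: feats[1] > threshold

heads = {"smoke_alarm": make_head(0.8), "glass_break": make_head(0.5)}

def detect(frame):
    feats = shared_encoder(frame)          # computed once for all heads
    return {event: head(feats) for event, head in heads.items()}

print(detect([0.0, 0.9, 0.1, 0.95]))
# {'smoke_alarm': True, 'glass_break': True}
```

Sharing the encoder is also what makes the deployed model compact: most parameters are computed once, not per event.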

This fall, we will begin previewing an extended version of Alexa Guard that recognizes additional sounds associated with activity, such as footsteps, talking, coughing, or doors closing. Customers can also create Routines that include Guard — activating Guard automatically during work hours, for instance.

5. More conversational

Customers want Alexa to do more for them than complete one-shot requests like “Alexa, play Duke Ellington” or “Alexa, what’s the weather?” This year, we have improved Alexa’s ability to carry context from one request to another, the way humans do in conversation.

For instance, if an Alexa customer asks, “When is The Addams Family playing at the Bijou?” and then follows up with the question “Is there a good Mexican restaurant near there?”, Alexa needs to know that “there” refers to the Bijou. Some of our recent work in this area won one of the two best-paper awards at the Association for Computational Linguistics’ Workshop on Natural-Language Processing for Conversational AI. The key idea is to jointly model the salient entities with transformer networks that use a self-attention mechanism.
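The bookkeeping half of that problem — remembering salient entities across turns so a follow-up can reference them — can be sketched in a few lines. The learning (which entity a pronoun binds to) is what the transformer model does; this toy version hard-codes one resolution rule:

```python
class DialogueContext:
    # Keeps slot values from earlier turns so follow-ups can reference them.
    def __init__(self):
        self.slots = {}

    def update(self, turn_slots):
        self.slots.update(turn_slots)

    def resolve(self, utterance):
        # Only the locative "near there" is rewritten in this toy version.
        if "near there" in utterance and "location" in self.slots:
            return utterance.replace("near there",
                                     "near " + self.slots["location"])
        return utterance

ctx = DialogueContext()
ctx.update({"movie": "The Addams Family", "location": "the Bijou"})
print(ctx.resolve("Is there a good Mexican restaurant near there?"))
# Is there a good Mexican restaurant near the Bijou?
```
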

However, completing complex tasks that require back-and-forth interaction and anticipation of the customer’s latent goals is still a challenging problem. For example, a customer using Alexa to plan a night out would have to use different skills to find a movie, a restaurant near the theater, and a ride-sharing service, coordinating times and locations.

We are currently testing a new deep-learning-based technology, called Alexa Conversations, with a small group of skill developers who are using it to build high-quality multiturn experiences with minimal effort. The developer supplies Alexa Conversations with a set of sample dialogues, and a simulator expands it into 100 times as much data. Alexa Conversations then uses that data to train a bleeding-edge deep-learning model to predict dialogue actions, without the need for a priori hand-authored rules.

Dialogue management involves tracking the values of "slots", such as time and location, throughout a conversation. Here, blue arrows indicate slots whose values must be updated across conversational turns.
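Stripped to its essentials, slot tracking of this kind accumulates updates turn by turn, with later turns allowed to overwrite earlier values. The example dialogue below is invented to match the night-out scenario:

```python
def track(turns):
    # Each turn contributes slot updates; later turns overwrite earlier
    # values, as when the customer changes a time.
    state = {}
    for slot_updates in turns:
        state.update(slot_updates)
    return state

turns = [
    {"movie": "The Addams Family", "theater": "the Bijou"},
    {"time": "7 pm"},
    {"time": "9 pm"},   # the customer revises the time in a later turn
]
print(track(turns))
# {'movie': 'The Addams Family', 'theater': 'the Bijou', 'time': '9 pm'}
```

The hard part, which the trained model handles, is deciding which slots each noisy utterance actually updates.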

At re:MARS, we demonstrated a new Night Out planning experience that uses Alexa Conversations technology and novel skill-transitioning algorithms to automatically coordinate conversational planning tasks across multiple skills.

We’re also adapting Alexa Conversations technology to the new concierge feature for Ring video doorbells. With this technology, the doorbell can engage in short conversations on your behalf, taking messages or telling a delivery person where to leave a package. We’re working hard to bring both of these experiences to customers.

What will the next five years look like?

Five years ago, it was inconceivable to us that customers would be interacting with Alexa billions of times per week and that developers would, on their own, build 100,000-plus skills. Such adoption is inspiring our teams to invent at an even faster pace, creating novel experiences that will increase utility and further delight our customers.

1. Alexa everywhere

The Echo family of devices and Alexa’s integration into third-party products have made Alexa a part of millions of homes worldwide. Having revolutionized daily convenience at home, we are working hard to bring that same convenience to our customers on the go. Echo Buds, Echo Auto, and the Day 1 Editions of Echo Loop and Echo Frames are already demonstrating that Alexa on the go can simplify our lives even further.

With greater portability comes greater risk of slow or lost Internet connections. Echo devices with built-in smart-home hubs already have a hybrid mode, which allows them to do some spoken-language processing when they can’t rely on Alexa’s cloud-based models. This is an important area of ongoing research for us. For instance, we are investigating new techniques for compressing Alexa’s machine learning models so that they can run on-device.
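One standard compression technique (offered here as an illustration of the general idea, not a description of Alexa's actual approach) is low-bit quantization: storing each float weight as a signed 8-bit integer plus a shared scale factor.

```python
def quantize(weights):
    # Map float weights onto the signed 8-bit range [-127, 127].
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.333, -1.27, 0.05, 0.888]
q, scale = quantize(weights)
restored = dequantize(q, scale)
# Reconstruction error is bounded by half a quantization step.
print(max(abs(w - r) for w, r in zip(weights, restored)) < scale)  # True
```

The payoff is a 4x reduction in storage versus 32-bit floats, at the cost of a small, bounded reconstruction error per weight.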

The new on-the-go hardware isn’t the only way that Alexa is becoming more portable. The new Guest Connect experience allows you to log into your Alexa account from any Echo device — even ones you don’t own — and play your music or preferred news.

2. Moving up the AI stack

Alexa’s unparalleled customer and developer adoption provides new challenges for AI research. In particular, to further shift the cognitive load from customers to AI, we must move up the AI stack, from predictions (e.g., extracting customers’ intents) to more contextual reasoning.

One of our goals is to seamlessly connect disparate skills to increase convenience for our customers. Alexa Conversations and the Night Out experience are the first steps in that direction, completing complex tasks across multiple services and skills.

To enable the same kind of interoperability across different AIs, we helped found the Voice Interoperability Initiative, a consortium of dozens of tech companies uniting to promote customer choice by supporting multiple, interoperable voice services on a single device.

Alexa will also make better decisions by factoring in more information about the customer’s context and history. For instance, when a customer asks an Alexa-enabled device in a hotel room “Alexa, what are the pool hours?”, Alexa needs to respond with the hours for the hotel pool and not the community pool.

We are inspired by the success of learning directly from customers through the self-learning techniques I described earlier. This is an important area where we will continue to incorporate new signals, such as vocal frustration with Alexa, and learn from direct and indirect feedback to make Alexa more accurate.

3. Alexa for everyone

As AI systems like Alexa become an indispensable part of our social fabric, bias mitigation and fairness in AI will require even deeper attention. Our goal is for Alexa to work equally well for all our customers. In addition to our own research, we’ve entered into a three-year collaboration with the National Science Foundation to fund research on fairness in AI.

We envision a future where anyone can create conversational-AI systems. With the Alexa Skills Kit and Alexa Voice Service, we made it easy for developers to innovate using Alexa’s AI. Even end users can build personal skills within minutes using Alexa Skill Blueprints.

We are also thrilled with the Alexa Prize competition, which is democratizing conversational AI by letting university students perform state-of-the-art research at scale. University teams are working on the ultimate conversational-AI challenge of creating socialbots that can converse coherently and engagingly for 20 minutes with humans on a range of current events and popular topics.

The third instance of the challenge is under way, and we are confident that the university teams will continue to push boundaries — perhaps even giving their socialbots an original sense of humor, one of the hardest challenges in AI.

Together with developers and academic researchers, we’ve made great strides in conversational AI. But there’s so much more to be accomplished. While the future is difficult to predict, one thing I am sure of is that the Alexa team will continue to invent on behalf of our customers.

Research areas

Related content

US, WA, Seattle
Do you want to re-invent how millions of people consume video content on their TVs, Tablets and Alexa? We are building a free to watch streaming service called Fire TV Channels (https://techcrunch.com/2023/08/21/amazon-launches-fire-tv-channels-app-400-fast-channels/). Our goal is to provide customers with a delightful and personalized experience for consuming content across News, Sports, Cooking, Gaming, Entertainment, Lifestyle and more. You will work closely with engineering and product stakeholders to realize our ambitious product vision. You will get to work with Generative AI and other state of the art technologies to help build personalization and recommendation solutions from the ground up. You will be in the driver's seat to present customers with content they will love. Using Amazon’s large-scale computing resources, you will ask research questions about customer behavior, build state-of-the-art models to generate recommendations and run these models to enhance the customer experience. You will participate in the Amazon ML community and mentor Applied Scientists and Software Engineers with a strong interest in and knowledge of ML. Your work will directly benefit customers and you will measure the impact using scientific tools.
IN, HR, Gurugram
Our customers have immense faith in our ability to deliver packages timely and as expected. A well planned network seamlessly scales to handle millions of package movements a day. It has monitoring mechanisms that detect failures before they even happen (such as predicting network congestion, operations breakdown), and perform proactive corrective actions. When failures do happen, it has inbuilt redundancies to mitigate impact (such as determine other routes or service providers that can handle the extra load), and avoids relying on single points of failure (service provider, node, or arc). Finally, it is cost optimal, so that customers can be passed the benefit from an efficiently set up network. Amazon Shipping is hiring Applied Scientists to help improve our ability to plan and execute package movements. As an Applied Scientist in Amazon Shipping, you will work on multiple challenging machine learning problems spread across a wide spectrum of business problems. You will build ML models to help our transportation cost auditing platforms effectively audit off-manifest (discrepancies between planned and actual shipping cost). You will build models to improve the quality of financial and planning data by accurately predicting ship cost at a package level. Your models will help forecast the packages required to be pick from shipper warehouses to reduce First Mile shipping cost. Using signals from within the transportation network (such as network load, and velocity of movements derived from package scan events) and outside (such as weather signals), you will build models that predict delivery delay for every package. These models will help improve buyer experience by triggering early corrective actions, and generating proactive customer notifications. Your role will require you to demonstrate Think Big and Invent and Simplify, by refining and translating Transportation domain-related business problems into one or more Machine Learning problems. 
You will use techniques from a wide array of machine learning paradigms, such as supervised, unsupervised, semi-supervised and reinforcement learning. Your model choices will include, but not be limited to, linear/logistic models, tree based models, deep learning models, ensemble models, and Q-learning models. You will use techniques such as LIME and SHAP to make your models interpretable for your customers. You will employ a family of reusable modelling solutions to ensure that your ML solution scales across multiple regions (such as North America, Europe, Asia) and package movement types (such as small parcel movements and truck movements). You will partner with Applied Scientists and Research Scientists from other teams in US and India working on related business domains. Your models are expected to be of production quality, and will be directly used in production services. You will work as part of a diverse data science and engineering team comprising of other Applied Scientists, Software Development Engineers and Business Intelligence Engineers. You will participate in the Amazon ML community by authoring scientific papers and submitting them to Machine Learning conferences. You will mentor Applied Scientists and Software Development Engineers having a strong interest in ML. You will also be called upon to provide ML consultation outside your team for other problem statements. If you are excited by this charter, come join us!
US, MA, Boston
The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Senior Applied Scientist with a strong deep learning background, to build industry-leading technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As a Senior Applied Scientist with the AGI team, you will work with talented peers to lead the development of novel algorithms and modeling techniques, to advance the state of the art with LLMs. Your work will directly impact our customers in the form of products and services that make use of speech and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate advances in generative artificial intelligence (GenAI). About the team The AGI team has a mission to push the envelope in LLMs and multimodal systems, in order to provide the best-possible experience for our customers.
IN, KA, Bengaluru
The Amazon Alexa AI team in India is seeking a talented, self-driven Applied Scientist to work on prototyping, optimizing, and deploying ML algorithms within the realm of Generative AI. Key responsibilities include: - Research, experiment and build Proof Of Concepts advancing the state of the art in AI & ML for GenAI. - Collaborate with cross-functional teams to architect and execute technically rigorous AI projects. - Thrive in dynamic environments, adapting quickly to evolving technical requirements and deadlines. - Engage in effective technical communication (written & spoken) with coordination across teams. - Conduct thorough documentation of algorithms, methodologies, and findings for transparency and reproducibility. - Publish research papers in internal and external venues of repute - Support on-call activities for critical issues Basic Qualifications: - Master’s or PhD in computer science, statistics or a related field - 2-7 years experience in deep learning, machine learning, and data science. - Proficiency in coding and software development, with a strong focus on machine learning frameworks. - Experience in Python, or another language; command line usage; familiarity with Linux and AWS ecosystems. - Understanding of relevant statistical measures such as confidence intervals, significance of error measurements, development and evaluation data sets, etc. - Excellent communication skills (written & spoken) and ability to collaborate effectively in a distributed, cross-functional team setting. - Papers published in AI/ML venues of repute Preferred Qualifications: - Track record of diving into data to discover hidden patterns and conducting error/deviation analysis - Ability to develop experimental and analytic plans for data modeling processes, use of strong baselines, ability to accurately determine cause and effect relations - The motivation to achieve results in a fast-paced environment. 
- Exceptional level of organization and strong attention to detail - Comfortable working in a fast paced, highly collaborative, dynamic work environment
US, WA, Seattle
Have you ever wondered how Amazon launches and maintains a consistent customer experience across hundreds of countries and languages it serves its customers? Are you passionate about data and mathematics, and hope to impact the experience of millions of customers? Are you obsessed with designing simple algorithmic solutions to very challenging problems? If so, we look forward to hearing from you! At Amazon, we strive to be Earth's most customer-centric company, where both internal and external customers can find and discover anything they want in their own language of preference. Our Translations Services (TS) team plays a pivotal role in expanding the reach of our marketplace worldwide and enables thousands of developers and other stakeholders (Product Managers, Program Managers, Linguists) in developing locale specific solutions. Amazon Translations Services (TS) is seeking an Applied Scientist to be based in our Seattle office. As a key member of the Science and Engineering team of TS, this person will be responsible for designing algorithmic solutions based on data and mathematics for translating billions of words annually across 130+ and expanding set of locales. The successful applicant will ensure that there is minimal human touch involved in any language translation and accurate translated text is available to our worldwide customers in a streamlined and optimized manner. With access to vast amounts of data, cutting-edge technology, and a diverse community of talented individuals, you will have the opportunity to make a meaningful impact on the way customers and stakeholders engage with Amazon and our platform worldwide. Together, we will drive innovation, solve complex problems, and shape the future of e-commerce. Key job responsibilities * Apply your expertise in LLM models to design, develop, and implement scalable machine learning solutions that address complex language translation-related challenges in the eCommerce space. 
* Collaborate with cross-functional teams, including software engineers, data scientists, and product managers, to define project requirements, establish success metrics, and deliver high-quality solutions. * Conduct thorough data analysis to gain insights, identify patterns, and drive actionable recommendations that enhance seller performance and customer experiences across various international marketplaces. * Continuously explore and evaluate state-of-the-art modeling techniques and methodologies to improve the accuracy and efficiency of language translation-related systems. * Communicate complex technical concepts effectively to both technical and non-technical stakeholders, providing clear explanations and guidance on proposed solutions and their potential impact. About the team We are a start-up mindset team. As the long-term technical strategy is still taking shape, there is a lot of opportunity for this fresh Science team to innovate by leveraging Gen AI technoligies to build scalable solutions from scratch. Our Vision: Language will not stand in the way of anyone on earth using Amazon products and services. Our Mission: We are the enablers and guardians of translation for Amazon's customers. We do this by offering hands-off-the-wheel service to all Amazon teams, optimizing translation quality and speed at the lowest cost possible.
GB, London
Are you looking to work at the forefront of Machine Learning and AI? Would you be excited to apply cutting edge Generative AI algorithms to solve real world problems with significant impact? The AWS Industries Team at AWS helps AWS customers implement Generative AI solutions and realize transformational business opportunities for AWS customers in the most strategic industry verticals. This is a team of data scientists, engineers, and architects working step-by-step with customers to build bespoke solutions that harness the power of generative AI. The team helps customers imagine and scope the use cases that will create the greatest value for their businesses, select and train and fine tune the right models, define paths to navigate technical or business challenges, develop proof-of-concepts, and build applications to launch these solutions at scale. The AWS Industries team provides guidance and implements best practices for applying generative AI responsibly and cost efficiently. You will work directly with customers and innovate in a fast-paced organization that contributes to game-changing projects and technologies. You will design and run experiments, research new algorithms, and find new ways of optimizing risk, profitability, and customer experience. In this Data Scientist role you will be capable of using GenAI and other techniques to design, evangelize, and implement and scale cutting-edge solutions for never-before-solved problems. 
Key job responsibilities - Collaborate with AI/ML scientists, engineers, and architects to research, design, develop, and evaluate cutting-edge generative AI algorithms and build ML systems to address real-world challenges - Interact with customers directly to understand the business problem, help and aid them in implementation of generative AI solutions, deliver briefing and deep dive sessions to customers and guide customer on adoption patterns and paths to production - Create and deliver best practice recommendations, tutorials, blog posts, publications, sample code, and presentations adapted to technical, business, and executive stakeholder - Provide customer and market feedback to Product and Engineering teams to help define product direction About the team Diverse Experiences Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. 
Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship and Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
US, WA, Seattle
We’re working on the future. If you are seeking an iterative fast-paced environment where you can drive innovation, apply state-of-the-art technologies to solve large-scale real world delivery challenges, and provide visible benefit to end-users, this is your opportunity. Come work on the Amazon Prime Air Team! We are seeking a highly skilled weather scientist to help invent and develop new models and strategies to support Prime Air’s drone delivery program. In this role, you will develop, build, and implement novel weather solutions using your expertise in atmospheric science, data science, and software development. You will be supported by a team of world class software engineers, systems engineers, and other scientists. Your work will drive cross-functional decision-making through your excellent oral and written communication skills, define system architecture and requirements, enable the scaling of Prime Air’s operation, and produce innovative technological breakthroughs that unlock opportunities to meet our customers' evolving demands. About the team Prime air has ambitious goals to offer its service to an increasing number of customers. Enabling a lot of concurrent flights over many different locations is central to reaching more customers. To this end, the weather team is building algorithms, tools and services for the safe and efficient operation of prime air's autonomous drone fleet.
US, CA, Santa Clara
Amazon Q Business is an AI assistant powered by generative AI technology. It provides capabilities such as answering queries, summarizing information, generating content, and executing tasks based on enterprise data. We are seeking a Language Data Scientist II to join our data team. Our mission is to engineer high-quality datasets that are essential to the success of Amazon Q Business. From human evaluations and Responsible AI safeguards to Retrieval-Augmented Generation and beyond, our work ensures that generative AI is enterprise-ready, safe, and effective for users. As part of our diverse team — including language engineers, linguists, data scientists, data engineers, and program managers — you will collaborate closely with science, engineering, and product teams. We are driven by customer obsession and a commitment to excellence.

In this role, you will leverage data-centric AI principles to assess the impact of data on model performance and the broader machine learning pipeline. You will apply generative AI techniques to evaluate how well our data represents human language and conduct experiments to measure downstream interactions.
Key job responsibilities
* Oversee the end-to-end evaluation data pipeline and propose evaluation metrics and methods
* Incorporate your knowledge of linguistic fundamentals, NLU, and NLP into the data pipeline
* Process and analyze diverse media formats, including audio recordings, video, images, and text
* Perform statistical analysis of the data
* Write intuitive data generation and annotation guidelines
* Write advanced and nuanced prompts to optimize LLM outputs
* Write Python scripts for data wrangling
* Automate repetitive workflows and improve existing processes
* Perform background research and vet available public datasets on topics such as long-text retrieval, text generation, summarization, question answering, and reasoning
* Leverage and integrate AWS services to optimize data collection workflows
* Collaborate with scientists, engineers, and product managers to define data quality metrics and guidelines
* Lead deep-dive sessions with data annotators
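Several of the responsibilities above center on annotation guidelines and data quality metrics. As one hypothetical illustration (the posting does not prescribe any method, and `cohens_kappa` is an assumed helper name), inter-annotator agreement between two labelers is often summarized with Cohen's kappa, which corrects raw agreement for the agreement expected by chance:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators' label sequences."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items where both annotators agree.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under independence, from each annotator's label distribution.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[l] * counts_b[l] for l in set(counts_a) | set(counts_b)) / (n * n)
    if expected == 1.0:          # degenerate case: a single label on both sides
        return 1.0
    return (observed - expected) / (1 - expected)
```

A score of 1.0 means perfect agreement, 0.0 means no better than chance; guidelines that produce low kappa are usually revised before large-scale annotation begins.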
US, CA, Palo Alto
Amazon Sponsored Products is investing heavily in building a world-class advertising business, and we are responsible for defining and delivering a collection of GenAI/LLM-powered, self-service performance advertising products that drive discovery and sales. Our products are strategically important to Amazon’s Selling Partners and key to driving their long-term growth. We deliver billions of ad impressions and clicks daily and are breaking fresh ground to create world-class products. We are a highly motivated, collaborative, and fun-loving team with an entrepreneurial spirit and a bias for action. With a broad mandate to experiment and innovate, we are growing at an unprecedented rate with a seemingly endless range of new opportunities.

This role will be pivotal within the Autonomous Campaigns org of Sponsored Products Ads, where we're pioneering the development of AI-powered advertising innovations that will redefine the future of campaign management and optimization. As a Principal Applied Scientist, you will lead the charge in creating the next generation of self-operating, GenAI-driven advertising systems that will set a new standard for the industry. Our team is at the forefront of designing and implementing these transformative technologies, which will leverage advanced Large Language Models (LLMs) and sophisticated chain-of-thought reasoning to achieve true advertising autonomy. Your work will bring to life systems capable of deeply understanding the nuanced context of each product, market trends, and consumer behavior, making intelligent, real-time decisions that surpass human capabilities. By harnessing the power of these future-state GenAI systems, we will develop advertising solutions capable of autonomously selecting optimal keywords, dynamically adjusting bids based on complex market conditions, and optimizing product targeting across various Amazon platforms.
Crucially, these systems will continuously analyze performance metrics and implement strategic pivots, all without requiring manual intervention from advertisers, allowing them to focus on their core business while our AI works tirelessly on their behalf. This is not simply about automating existing processes; your work will redefine what's possible in advertising. Our GenAI systems will employ multi-step reasoning, considering a vast array of factors, from seasonality and competitive landscape to macroeconomic trends, to make decisions that far exceed human speed and effectiveness. This autonomous, context-aware approach represents a paradigm shift in how advertising campaigns are conceived, executed, and optimized.

As a Principal Applied Scientist, you will be at the forefront of this transformation, tackling complex challenges in natural language processing, reinforcement learning, and causal inference. Your pioneering efforts will directly shape the future of e-commerce advertising, with the potential to influence marketplace dynamics on a global scale. This is an unparalleled opportunity to push the boundaries of what's achievable in AI-driven advertising and leave an indelible mark on the industry.

Key job responsibilities
• Seek to understand in depth the Sponsored Products offering at Amazon and identify opportunities to grow our business using GenAI, LLM, and ML solutions.
• Mentor and guide the applied scientists in our organization and hold us to a high standard of technical rigor and excellence in AI/ML.
• Design and lead organization-wide AI/ML roadmaps that give Amazon shoppers a delightful shopping experience while creating long-term value for our advertisers.
• Work with our engineering partners and draw upon your experience to meet latency and other system constraints.
• Identify untapped, high-risk technical and scientific directions, devise new research directions, and drive them to completion and delivery.
• Be responsible for communicating our generative AI and traditional AI/ML innovations to the broader internal and external scientific communities.
US, WA, Seattle
Amazon Advertising is one of Amazon's fastest-growing and most profitable businesses, responsible for defining and delivering a collection of advertising products that drive discovery and sales. Our products and solutions are strategically important in enabling our Retail and Marketplace businesses to drive long-term growth. We deliver billions of ad impressions and millions of clicks and break fresh ground in product and technical innovations every day!

As a Data Scientist on this team you will:
- Lead data science solutions from beginning to end.
- Deliver with independence on challenging large-scale problems marked by complexity and ambiguity.
- Write code (Python, R, Scala, SQL, etc.) to obtain, manipulate, and analyze data.
- Build machine learning and statistical models to solve specific business problems.
- Retrieve, synthesize, and present critical data in a format that is immediately useful for answering specific questions or improving system performance.
- Analyze historical data to identify trends and support optimal decision-making.
- Apply statistical and machine learning knowledge to specific business problems and data.
- Formalize assumptions about how our systems should work, create statistical definitions of outliers, and develop methods to systematically identify them. Investigate why such examples are outliers and determine whether any action is needed.
- Starting from anecdotes about anomalies, or from automatic scripts that flag them, dive deep to explain why they happen and identify fixes.
- Build decision-making models and propose effective solutions for the business problems you define.
- Deliver written and verbal presentations that share insights with audiences of varying levels of technical sophistication.

Why you will love this opportunity: Amazon has invested heavily in building a world-class advertising business. This team defines and delivers a collection of advertising products that drive discovery and sales.
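One responsibility above asks for statistical definitions of outliers. As a minimal, hypothetical sketch (the posting does not prescribe a method, and `iqr_outliers` is an assumed name), the common interquartile-range rule flags points far outside the middle half of the data:

```python
def iqr_outliers(values, k=1.5):
    """Return values outside [Q1 - k*IQR, Q3 + k*IQR], a common outlier definition."""
    s = sorted(values)

    def quantile(q):
        # Linear interpolation between order statistics.
        pos = q * (len(s) - 1)
        lo = int(pos)
        hi = min(lo + 1, len(s) - 1)
        frac = pos - lo
        return s[lo] * (1 - frac) + s[hi] * frac

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    low, high = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < low or v > high]
```

In practice the multiplier `k` and the definition itself (IQR rule, z-scores, model-based residuals) would be chosen per metric, as the bullets above imply.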
Our solutions generate billions in revenue and drive long-term growth for Amazon’s Retail and Marketplace businesses. We deliver billions of ad impressions and millions of clicks daily and break fresh ground to create world-class products. We are a highly motivated, collaborative, and fun-loving team with an entrepreneurial spirit and a broad mandate to experiment and innovate.

Impact and Career Growth: You will invent new customer-facing shopping experiences that help suppliers grow their retail business, and shape the auction dynamics that power native advertising; this is your opportunity to work within the fastest-growing businesses across all of Amazon! You will define a long-term science vision for our advertising business, driven by our customers' needs, and translate that direction into specific plans for research and applied scientists, as well as engineering and product teams. This role combines science leadership, organizational ability, technical strength, product focus, and business understanding.

Team video ~ https://youtu.be/zD_6Lzw8raE