Alexa unveils new speech recognition, text-to-speech technologies

Leveraging large language models will make interactions with Alexa more natural and engaging.

Today in Arlington, Virginia, at Amazon’s new HQ2, Amazon senior vice president Dave Limp hosted an event at which the Devices and Services organization rolled out its new lineup of products and services. For part of the presentation, Limp was joined by Rohit Prasad, an Amazon senior vice president and head scientist for artificial general intelligence, who previewed a host of innovations from the Alexa team.

Prasad’s main announcement was the release of the new Alexa large language model (LLM), a larger and more generalized model that has been optimized for voice applications. This model can converse with customers on any topic; it’s been fine-tuned to reliably make the right API calls, so it will turn on the right lights and adjust the temperature in the right rooms; it’s capable of proactive, inference-based personalization, so it can highlight calendar events, recently played music, or even recipe recommendations based on a customer’s grocery purchases; it has several knowledge-grounding mechanisms, to make its factual assertions more reliable; and it has guardrails in place to protect customer privacy.

New Amazon speech technologies leverage large language models to make interactions with Alexa more natural and engaging.

During the presentation, Prasad discussed several other upgrades to Alexa’s conversational-AI models, designed to make interactions with Alexa more natural. One is a new way of invoking Alexa by simply looking at the screen of a camera-enabled Alexa device, eliminating the need to say the wake word on every turn: on-device visual processing is combined with acoustic models to determine whether a customer is speaking to Alexa or someone else.


Alexa has also had its automatic-speech-recognition (ASR) system overhauled — including machine learning models, algorithms, and hardware — and it’s moving to a new large text-to-speech (LTTS) model that’s based on the LLM architecture and is trained on thousands of hours of multispeaker, multilingual, multiaccent, and multi-speaking-style audio data.

Finally, Prasad unveiled Alexa’s new speech-to-speech model, an LLM-based model that produces output speech directly from input speech. With the speech-to-speech model, Alexa will exhibit humanlike conversational attributes, such as laughter, and it will be able to adapt its prosody not only to the content of its own utterances but to the speaker’s prosody as well — for instance, responding with excitement to the speaker’s excitement.

The ASR update will go live later this year; both LTTS and the speech-to-speech model will be deployed next year.

Speech recognition

The new Alexa ASR model is a multibillion-parameter model trained on a mix of short, goal-oriented utterances and longer-form conversations. Training required a careful alternation of data types and training targets to ensure best-in-class performance on both types of interactions.

To accommodate the larger ASR model, Alexa is moving from CPU-based speech processing to hardware-accelerated processing. The inputs to an ASR model are frames of data, or 30-millisecond snapshots of the speech signal’s frequency spectrum. On CPUs, frames are typically processed one at a time. But that’s inefficient on GPUs, which have many processing cores that run in parallel and need enough data to keep them all busy.


Alexa’s new ASR engine accumulates frames of input speech until it has enough data to ensure adequate work for all the cores in the GPUs. To minimize latency, it also tracks the pauses in the speech signal, and if the pause duration is long enough to indicate the possible end of speech, it immediately sends all accumulated frames.
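The accumulate-and-flush behavior can be sketched as follows; the batch size and pause threshold here are illustrative assumptions, not Alexa's actual parameters:

```python
FRAME_MS = 30       # each frame is a 30-millisecond spectral snapshot
BATCH_SIZE = 16     # frames needed to keep all GPU cores busy (assumed)
PAUSE_FRAMES = 8    # pause length suggesting possible end of speech (assumed)

def batch_frames(frames, is_silence):
    """Group input frames into batches for GPU inference, flushing early
    when a pause long enough to indicate end of speech accumulates.

    frames: sequence of frame objects
    is_silence: predicate marking a frame as non-speech
    """
    batches, current, silence_run = [], [], 0
    for frame in frames:
        current.append(frame)
        silence_run = silence_run + 1 if is_silence(frame) else 0
        # Flush when the batch is full, or on a possible end of speech.
        if len(current) >= BATCH_SIZE or silence_run >= PAUSE_FRAMES:
            batches.append(current)
            current, silence_run = [], 0
    if current:
        batches.append(current)
    return batches
```

With these toy settings, 20 consecutive speech frames yield a full batch of 16 plus a final batch of 4, while 5 speech frames followed by a long pause are flushed early, as soon as the pause threshold is hit.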

The batching of speech data required for GPU processing also enables a new speech recognition algorithm that uses dynamic lookahead to improve ASR accuracy. Typically, when a streaming ASR application is interpreting an input frame, it uses the preceding frames as context: information about past frames can constrain its hypotheses about the current frame in a useful way. With batched data, however, the ASR model can use not only the preceding frames but also the following frames as context, yielding more accurate hypotheses.
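The difference between streaming and dynamic-lookahead context can be caricatured as a simple windowing function; the window sizes below are illustrative assumptions:

```python
def context_window(frames, i, past=4, future=0):
    """Return the context used to interpret frame i.

    A purely streaming recognizer sees only past frames (future=0);
    with batched data, a dynamic-lookahead model can also use a few
    following frames (future > 0) to disambiguate the current frame.
    """
    lo = max(0, i - past)
    hi = min(len(frames), i + 1 + future)
    return frames[lo:hi]
```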

The final determination of end-of-speech is made by an ASR engine’s end-pointer. The earliest end-pointers all relied on pause length. Since the advent of end-to-end speech recognition, ASR models have been trained on audio-text pairs whose texts include a special end-of-speech token at the end of each utterance. The model then learns to output the token as part of its ASR hypotheses, indicating end of speech.


Alexa’s ASR engine has been updated with a new two-pass end-pointer that can better handle the kind of mid-sentence pauses common in extended conversational exchanges. The second pass is performed by an end-pointing arbitrator, which takes as input both the ASR model’s transcription of the current speech signal and its encoding of the signal. While the encoding captures features necessary for speech recognition, it also contains information useful for identifying acoustic and prosodic cues that indicate whether a user has finished speaking.

The end-pointing arbitrator is a separately trained deep-learning model that outputs a decision about whether the last frame of its input truly represents end of speech. Because it factors in both semantic and acoustic data, its judgments are more accurate than those of a model that prioritizes one or the other. And because it takes ASR encodings as input, it can leverage the ever-increasing scale of ASR models to continue to improve accuracy.
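In the real system the arbitrator is a trained deep-learning model, but its role can be caricatured as fusing a semantic cue with an acoustic one; the weights and threshold below are purely illustrative:

```python
def endpoint_decision(eos_prob, pause_score,
                      w_sem=0.6, w_ac=0.4, threshold=0.5):
    """Second-pass end-pointing sketch: combine a semantic cue (the ASR
    model's confidence that the utterance is complete) with an acoustic
    cue (a pause/prosody score derived from the speech encoding).

    All weights and the threshold are illustrative assumptions; a real
    arbitrator learns this fusion rather than using fixed weights.
    """
    return w_sem * eos_prob + w_ac * pause_score >= threshold
```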

Once the new ASR model has generated a set of hypotheses about the text corresponding to the input speech, the hypotheses pass to an LLM that has been fine-tuned to rerank them, to yield more accurate results.
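A hypothetical rescoring step might interpolate each hypothesis's ASR score with an external language-model score; the interpolation weight is an assumption for illustration:

```python
def rerank(hypotheses, lm_score, alpha=0.3):
    """Rescore an ASR n-best list with an external language model.

    hypotheses: list of (text, asr_score) pairs
    lm_score:   function mapping text to a fluency/plausibility score
    alpha:      interpolation weight (illustrative value)
    """
    return sorted(
        hypotheses,
        key=lambda h: (1 - alpha) * h[1] + alpha * lm_score(h[0]),
        reverse=True,
    )
```

An acoustically plausible but implausible-sounding hypothesis ("turn on the fights") can thus be demoted below a slightly lower-scoring but far more fluent one ("turn on the lights").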

The architecture of the new two-stage end-pointer.

In the event that the new, improved end-pointer cuts off speech too soon, Alexa can still recover, thanks to a model that helps repair truncated speech. Applied scientist Marco Damonte and Angus Addlesee, a former intern studying artificial intelligence at Heriot-Watt University, described this model on the Amazon Science blog after presenting a paper about it at Interspeech.

The model produces a graph representation of the semantic relationships between words in an input text. From the graph, downstream models can often infer the missing information; when they can’t, they can still often infer the semantic role of the missing words, which can help Alexa ask clarifying questions. This, too, makes conversation with Alexa more natural.
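Hypothetically, the downstream check might look like the following, where the graph format and role names are illustrative rather than the paper's actual representation:

```python
def missing_roles(graph, required_roles):
    """Given a predicate-argument graph for a (possibly truncated)
    utterance, report which semantic roles are unfilled, so that a
    clarifying question can target exactly the missing information.

    graph: {"edges": [{"role": ..., "word": ...}, ...]} (assumed format)
    """
    filled = {edge["role"] for edge in graph["edges"]}
    return [role for role in required_roles if role not in filled]
```

For instance, if "Set a timer for ..." is cut off before the duration, the graph fills the action and object roles but not the duration, so a dialogue manager knows to ask "For how long?" rather than starting over.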

Large text-to-speech

Unlike earlier TTS models, LTTS is an end-to-end model. It consists of a traditional text-to-text LLM and a speech synthesis model that are fine-tuned in tandem, so the output of the LLM is tailored to the needs of the speech synthesizer. The fine-tuning dataset consists of thousands of hours of speech, versus the 100 or so hours used to train earlier models.


The fine-tuned LTTS model learns to implicitly model the prosody, tonality, intonation, paralinguistics, and other aspects of speech, and its output is used to generate speech.

The result is speech that combines the full range of emotional elements present in human communication — such as curiosity when asking questions or comic timing when delivering jokes — with natural disfluencies and paralinguistic sounds (such as ums, ahs, or muttering), creating natural, expressive, humanlike speech output.

A comparison of Alexa's existing text-to-speech model and the new LTTS model.


To further enhance the model’s expressivity, the LTTS model can be used in conjunction with another LLM fine-tuned to tag input text with “stage directions” indicating how the text should be delivered. The tagged text then passes to the TTS model for conversion to speech.
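The two-stage flow can be sketched with a toy rule-based tagger standing in for the fine-tuned tagging LLM; the tag vocabulary and the tagging rules are assumptions for illustration:

```python
def tag_stage_directions(text):
    """Toy stand-in for the fine-tuned tagging LLM: wrap the text in a
    'stage direction' based on its final punctuation. A real tagger
    would make much finer-grained, learned decisions."""
    if text.endswith("!"):
        return f"<excited>{text}</excited>"
    if text.endswith("?"):
        return f"<curious>{text}</curious>"
    return text

def synthesize(text, tts):
    """Tag the input text, then hand it to the TTS model (here, any
    callable standing in for the speech synthesizer)."""
    return tts(tag_stage_directions(text))
```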

The speech-to-speech model

The Alexa speech-to-speech model will leverage a proprietary pretrained LLM to enable end-to-end speech processing: the input is an encoding of the customer’s speech signal, and the output is an encoding of Alexa’s speech signal in response.

That encoding is one of the keys to the approach. It’s a learned encoding, and it represents both semantic and acoustic features. The speech-to-speech model uses the same encoding for both input and output; the output is then decoded to produce an acoustic signal in one of Alexa’s voices. The shared “vocabulary” of input and output is what makes it possible to build the model atop a pretrained LLM.
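The shared-vocabulary idea can be illustrated with a toy scalar quantizer; real systems learn the codebook jointly with the model, and the codebook size below is an assumption:

```python
CODEBOOK_SIZE = 1024  # number of discrete speech tokens (assumed)

def encode_speech(samples):
    """Toy quantizer mapping audio samples in [0, 1] to discrete tokens.
    Input and output speech share this same token vocabulary, which is
    what lets a pretrained LLM model the mapping end to end."""
    return [int(s * (CODEBOOK_SIZE - 1)) for s in samples]

def decode_tokens(tokens):
    """Inverse of the toy quantizer (lossy, like any codec)."""
    return [t / (CODEBOOK_SIZE - 1) for t in tokens]
```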

A sample speech-to-speech interaction

The LLM is fine-tuned on an array of different tasks, such as speech recognition and speech-to-speech translation, to ensure its generality.

The speech-to-speech model has a multistep training procedure: (1) pretraining of modality-specific text and audio models; (2) multimodal training and intermodal alignment; (3) initialization of the speech-to-speech LLM; (4) fine-tuning of the LLM on a mix of self-supervised losses and supervised speech tasks; (5) alignment to desired customer experience.

Alexa’s new capabilities will begin rolling out over the next few months.

Research areas

Related content

US, CA, Pasadena
The Amazon Web Services (AWS) Center for Quantum Computing in Pasadena, CA, is looking to hire a Quantum Research Scientist in the Processor Test and Measurement group. You will join a multi-disciplinary team of theoretical and experimental physicists, materials scientists, and hardware and software engineers working at the forefront of quantum computing. You should have a deep and broad knowledge of experimental measurement techniques. Candidates with a track record of original scientific contributions will be preferred. We are looking for candidates with strong engineering principles, resourcefulness and a bias for action, superior problem solving, and excellent communication skills. Working effectively within a team environment is essential. As a research scientist you will be expected to work on new ideas and stay abreast of the field of experimental quantum computation. Key job responsibilities We are looking to hire a Research Scientist to develop and test novel calibration and optimization tools for Quantum Error Correction on large scale quantum processors. You will be on a team of engineers and scientists at the frontier of quantum processor control and error correction. You are expected to take part in high-impact research projects that intersect with our engineering roadmap. We are looking for candidates with strong engineering principles and resourcefulness. Organization and communication skills are essential. A day in the life About the team Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. 
AWS Utility Computing (UC) provides product innovations — from foundational services such as Amazon’s Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2), to consistently released new product innovations that continue to set AWS’s services and features apart in the industry. As a member of the UC organization, you’ll support the development and management of Compute, Database, Storage, Internet of Things (Iot), Platform, and Productivity Apps services in AWS. Within AWS UC, Amazon Dedicated Cloud (ADC) roles engage with AWS customers who require specialized security solutions for their cloud services. Inclusive Team Culture AWS values curiosity and connection. Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do. Diverse Experiences AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. 
Export Control Requirement: Due to applicable export control laws and regulations, candidates must be either a U.S. citizen or national, U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum, or be able to obtain a US export license. If you are unsure if you meet these requirements, please apply and Amazon will review your application for eligibility.
US, WA, Seattle
Amazon's Worldwide Pricing & Promotions organization is seeking a strong Applied Scientist to help solve complex business problems involving promotional strategies at a global scale. This Applied Scientist will operate in a team of other scientists and economists. Our team applies causal inference, statistics, machine learning, forecasting, optimization, economics, and experimentation to drive actionable insights and to improve strategic business decision-making. This is an individual contributor role that requires collaboration across teams and functions to solve core business problems for the company around setting promotional strategies. The work is part of significant scientific investments in promotions intelligence systems that forecast customer demand and optimize promotions strategies across different surfaces. Key job responsibilities * Invent or adapt new scientific approaches, models, or algorithms inspired and driven by customers' needs and benefits * Produce research papers and reports that have the same level of correctness, scholarship, usefulness, completeness, depth, rigor, and originality as a top-tier external publication * Implement solutions that will be deployed into production or directly support production systems * Write clear, useful documentation describing algorithms and design choices in your components to make it possible for others to understand and reproduce your work * Contribute to operational excellence in the team's deliverable * Analyze the performance of your methods and models to understand the gaps, and iteratively propose solutions to improve * Champion the adoption of scientific advancements in the team * Help new teammates ramp up and understand who our customers are, what their needs are, how the team's solutions work, and how scientific components fit in those solutions A day in the life As an Applied Scientist on the WW Promotions Science team, you invent or adapt new scientific approaches, models, or algorithms to 
solve real-world business problems. Your work uses the latest (or the most appropriate) techniques from academic literature. You work semi-autonomously to successfully deliver solutions that are consistently of high quality (efficient, reproducible, testable code). You work collaboratively with teammates, partners, and stakeholders. You recognize discordant views and take part in constructive dialogue to resolve them. You adopt and identify opportunities to refine mechanisms to raise the general scientific knowledge in the team. About the team The WW Promotions Science team is responsible for driving scientific innovation to support pricing and promotions programs across Amazon's businesses. We specialize in experimental and observational causal methods, forecasting, and optimization. We apply these tools to drive business decision making at scale, leading to launch decisions of new pricing algorithms and new promotion strategies, understanding short- and long-term value of different programs, and the prioritization of budget allocations. We also develop models to set optimal prices and promotions, and define innovative price guardrails and incentives to optimize for long-term program health.
US, MA, North Reading
About the Role Amazon Robotics is transforming warehouse automation through edge AI and machine learning applied to real-world robotics challenges. We're seeking a Senior Applied Scientist to advance our mobile manipulation capabilities by developing learning-based approaches that enable robots to navigate and manipulate objects in dynamic fulfillment environments. This role offers the opportunity to apply state-of-the-art research to production systems operating at Amazon's unprecedented scale. What You'll Do As a Senior Applied Scientist, you'll develop and deploy machine learning models that enable mobile manipulators to perform complex tasks autonomously. You'll work at the intersection of perception, learning, and control to create intelligent systems that can adapt to diverse warehouse scenarios with minimal task-specific programming. Key job responsibilities • Design, develop, train, and deploy deep learning models for perception tasks (e.g., object detection, segmentation, pose estimation, tracking) • Develop and maintain robust camera calibration pipelines (intrinsic, extrinsic, hand-eye calibration, multi-camera systems) • Build perception systems for robotic manipulation including grasp detection, object pose estimation, and visual servoing • Improve model performance through architecture optimization, data curation, and training strategies • Build and maintain production-quality perception codebases with proper testing and documentation • Profile and optimize models for real-time inference on embedded/edge platforms • Collaborate with cross-functional teams (robotics, motion planning, controls) to integrate perception outputs for manipulation tasks • Establish best practices for model versioning, experiment tracking, and MLOps • Mentor junior engineers and contribute to technical roadmap planning A day in the life Amazon offers a full range of benefits that support you and eligible family members, including domestic partners and their children. 
Benefits can vary by location, the number of regularly scheduled hours you work, length of employment, and job status such as seasonal or temporary employment. The benefits that generally apply to regular, full-time employees include: 1. Medical, Dental, and Vision Coverage 2. Maternity and Parental Leave Options 3. Paid Time Off (PTO) 4. 401(k) Plan If you are not sure that every qualification on the list above describes you exactly, we'd still love to hear from you! At Amazon, we value people with unique backgrounds, experiences, and skillsets. If you’re passionate about this role and want to make an impact on a global scale, please apply! About the team Are you inspired by invention? Is problem solving through teamwork in your DNA? Do you like the idea of seeing how your work impacts the bigger picture? Answer yes to any of these and you’ll fit right in here at Amazon Robotics. We are a smart, collaborative team of enthusiastic doers that work passionately to apply innovative advances in robotics and software to solve real-world challenges that will transform our customers’ experiences in ways we can’t even image yet. We invent new improvements every day. We are Amazon Robotics and we will give you the tools and support you need to invent with us in ways that are rewarding, fulfilling and fun
US, WA, Seattle
Advertising is a complex, multi-sided market with many technologies at play within the industry. The industry is rapidly growing and evolving as viewers are shifting from traditional TV viewing to streaming video and publishers are increasingly adding video content to their online experiences. Amazon’s video advertising is a rising competitor in this industry. Amazon’s service has differentiated assets in our customer & audience insights, exclusive video content, and associated inventory that position us well as an end-to-end service for advertisers and agencies. We are innovating at the intersection of advertising, e-commerce, and entertainment. Amazon Publisher Monetization (APM) is looking for a a passionate and experienced scientist who is adept at a variety of skills; especially in generative AI, computer vision, and large language models that will accelerate our plans to maximize yield via AI-driven contextual targeting, Ads syndication and more. The ideal candidate will be an inventor at heart, they will provide science expertise, rapidly prototype, iterate, and launch, foster the spirit of collaboration and innovation within our larger sister teams and their scientists, and execute against a compelling product roadmap designed to bring AI-led science innovation to solve one of the most challenging problems in advertising. Key job responsibilities This role is focused on shaping our approach to the solving the trifecta of advertising - serving the right ad to the right viewer at the right moment - delivering engaging ads for viewers, improved performance for advertisers, and maximizing the yield of our supply inventory. Responsibilities include: * Partner deeply with Product and Engineering to develop AI-based solutions to generating contextual signals across both video (VOD and Live) and display ads. * Drive end-to-end applied science projects that have a high degree of ambiguity, scale, complexity. 
* Provide technical/science leadership related to computer vision, large language models and contextual targeting. * Research new and innovative machine learning approaches. * Partner with Applied Scientists across the broader org to make the most of prior art and contribute back to this community the innovation that you come up with.
IN, KA, Bengaluru
Alexa International is looking for passionate, talented, and inventive Senior Applied Scientists to help build industry-leading technology with Large Language Models (LLMs) and multimodal systems, requiring strong deep learning and generative models knowledge. Senior applied scientists will drive cross-team scientific strategy, influence partner teams, and deliver solutions that have broad impact across Alexa's international products and services. Key job responsibilities As a Applied Scientist with II the Alexa International team, you will work with talented peers to develop novel algorithms and modeling techniques to advance the state of the art with LLMs, particularly delivering industry-leading scientific research and applied AI for multi-lingual applications — a challenging area for the industry globally. Your work will directly impact our global customers in the form of products and services that support Alexa+. You will leverage Amazon's heterogeneous data sources and large-scale computing resources to accelerate advances in text, speech, and vision domains. The ideal candidate possesses a solid understanding of machine learning, speech and/or natural language processing, modern LLM architectures, LLM evaluation & tooling, and a passion for pushing boundaries in this vast and quickly evolving field. They thrive in fast-paced environment, like to tackle complex challenges, excel at swiftly delivering impactful solutions while iterating based on user feedback, and are able to influence and align multiple teams around a shared scientific vision. A day in the life * Analyze, understand, and model customer behavior and the customer experience based on large-scale data. * Build novel online & offline evaluation metrics and methodologies for multimodal personal digital assistants. 
* Fine-tune/post-train LLMs using advanced and innovative techniques like SFT, DPO, Reinforcement Learning (RLHF and RLAIF) for supporting model performance specific to a customer’s location and language. * Quickly experiment and set up experimentation framework for agile model and data analysis or A/B testing. * Contribute through industry-first research to drive innovation forward. * Drive cross-team scientific strategy and influence partner teams on LLM evaluation frameworks, post-training methodologies, and best practices for international speech and language systems. * Lead end-to-end delivery of scientifically complex solutions from research to production, including reusable science components and services that resolve architecture deficiencies across teams. * Serve as a scientific thought leader, communicating solutions clearly to partners, stakeholders, and senior leadership. * Actively mentor junior scientists and contribute to the broader internal and external scientific community through publications and community engagement.
US, MA, N.reading
Amazon is seeking exceptional talent to help develop the next generation of advanced robotics systems that will transform automation at Amazon's scale. We're building revolutionary robotic systems that combine cutting-edge AI, sophisticated control systems, and advanced mechanical design to create adaptable automation solutions capable of working safely alongside humans in dynamic environments. This is a unique opportunity to shape the future of robotics and automation at an unprecedented scale, working with world-class teams pushing the boundaries of what's possible in robotic dexterous manipulation, locomotion, and human-robot interaction. This role presents an opportunity to shape the future of robotics through innovative applications of deep learning and large language models. At Amazon we leverage advanced robotics, machine learning, and artificial intelligence to solve complex operational challenges at an unprecedented scale. Our fleet of robots operates across hundreds of facilities worldwide, working in sophisticated coordination to fulfill our mission of customer excellence. The ideal candidate will contribute to research that bridges the gap between theoretical advancement and practical implementation in robotics. You will be part of a team that's revolutionizing how robots learn, adapt, and interact with their environment. Join us in building the next generation of intelligent robotics systems that will transform the future of automation and human-robot collaboration. Key job responsibilities - Design and implement whole body control methods for balance, locomotion, and dexterous manipulation - Utilize state-of-the-art in methods in learned and model-based control - Create robust and safe behaviors for different terrains and tasks - Implement real-time controllers with stability guarantees - Collaborate effectively with multi-disciplinary teams to co-design hardware and algorithms for loco-manipulation - Mentor junior engineer and scientists
US, NY, New York
About the Role In this role, you will own the science strategy and technical vision for this intelligence layer, leading a team of applied scientists working across GenAI and predictive modeling. You will shape how heterogeneous signals — text, behavioral, network, temporal — come together to power talent applications at Amazon scale, from workforce forecasting to personalized development to compensation strategy. You will identify opportunities where science investment can have material impact on long-term objectives or annual goals and build consensus around needed investments, working comfortably across different modeling paradigms and data modalities to guide principal and senior scientists in their most challenging and strategic decisions while serving as the strategic science advisor to PXT leaders operating at the Director, VP, and SVP levels. As a hands-on leader, you will personally own development and delivery of the most complex science problems at the intersection of multiple ML disciplines, stay current with emergent AI/ML science and engineering trends to influence focus areas in a rapidly evolving landscape, and participate in organizational planning, hiring, mentorship, and leadership development. 
Key job responsibilities • Lead technical initiatives in people science models, driving breakthrough approaches through hands-on research and development in areas like foundation models for predictive modeling, efficient multi-modal LLMs, and zero-shot learning • Design and implement novel ML architectures that push the boundaries of how workforce signals are represented, fused, and predicted at scale • Guide technical direction for research initiatives across the team, ensuring robust performance in production environments serving hundreds of thousands of employees • Mentor and develop senior scientists while maintaining strong individual technical contributions on the most complex cross-domain problems • Collaborate with engineering teams to optimize and scale models for real-world talent applications • Influence technical decisions and implementation strategies across teams, shaping the long-term platform architecture About the team The People eXperience and Technology (PXT) Core Science Team uses science, engineering, and customer-obsessed problem solving to proactively identify mechanisms, process improvements, and products that simultaneously improve Amazon and Amazonians' lives, wellbeing, and value of work. As an interdisciplinary team combining talents from machine learning, statistics, economics, behavioral science, engineering, and product development, the Core Science team develops and delivers measurable solutions through innovation and rapid prototyping to accelerate informed, accurate, and reliable decision-making backed by science and data.
IN, KA, Bengaluru
Have you ever ordered a product on Amazon and when that box with the smile arrived you wondered how it got to you so fast? Have you wondered where it came from and how much it cost Amazon to deliver it to you? If so, the WW Amazon Logistics, Business Analytics team is for you. We manage the delivery of tens of millions of products every week to Amazon’s customers, achieving on-time delivery in a cost-effective manner. We are looking for an enthusiastic, customer obsessed, Applied Scientist with good analytical skills to help manage projects and operations, implement scheduling solutions, improve metrics, and develop scalable processes and tools. The primary role of an Operations Research Scientist within Amazon is to address business challenges through building a compelling case, and using data to influence change across the organization. This individual will be given responsibility on their first day to own those business challenges and the autonomy to think strategically and make data driven decisions. Decisions and tools made in this role will have significant impact to the customer experience, as it will have a major impact on how the final phase of delivery is done at Amazon. Ideal candidates will be a high potential, strategic and analytic graduate with a PhD in (Operations Research, Statistics, Engineering, and Supply Chain) ready for challenging opportunities in the core of our world class operations space. Great candidates have a history of operations research, and the ability to use data and research to make changes. This role requires robust program management skills and research science skills in order to act on research outcomes. This individual will need to be able to work with a team, but also be comfortable making decisions independently, in what is often times an ambiguous environment. 
Responsibilities may include:
- Developing inputs and assumptions, based on preexisting models, to estimate the costs and savings opportunities associated with varying levels of network growth and operations
- Creating metrics to measure business performance, identifying root causes and trends, and prescribing action plans
- Managing multiple projects simultaneously
- Working with technology teams and product managers to develop new tools and systems to support the growth of the business
- Communicating with and supporting various internal stakeholders and external audiences
GB, London
Come build the future of entertainment with us. Are you interested in shaping the future of movies and television? Do you want to define the next generation of how and what Amazon customers are watching? Prime Video is a premium streaming service that offers customers a vast collection of TV shows and movies — all with the ease of finding what they love to watch in one place. We offer customers thousands of popular movies and TV shows, from Amazon Originals and exclusive licensed content to exciting live sports events. We also offer our members the opportunity to subscribe to add-on channels, which they can cancel at any time, and to rent or buy new release movies and TV box sets on the Prime Video Store. Prime Video is a fast-paced, growth business — available in over 200 countries and territories worldwide. The team works in a dynamic environment where innovating on behalf of our customers is at the heart of everything we do. If this sounds exciting to you, please read on. The Insights team is looking for an Applied Scientist for our London office experienced in generative AI and large models. This is a wide-impact role working with development teams across the UK, India, and the US. This greenfield project will deliver features that reduce the operational load for internal Prime Video builders; to do this, you will develop personalized recommendations for their services. You will have strong technical ability, excellent teamwork and communication skills, and a strong motivation to deliver customer value from your research. Our position offers opportunities to grow your technical and non-technical skills and make a global impact immediately.
Key job responsibilities
- Develop machine learning algorithms for high-scale recommendations problems
- Rapidly design, prototype, and test many possible hypotheses in a high-ambiguity environment, making use of both quantitative analysis and business judgement
- Collaborate with software engineers to integrate successful experimental results into Prime Video-wide processes
- Communicate results and insights to both technical and non-technical audiences, including through presentations and written reports

A day in the life
You will lead the design of machine learning models that scale to very large quantities of data across multiple dimensions. You will embody scientific rigor, designing and executing experiments to demonstrate the technical effectiveness and business value of your methods. You will work alongside other scientists and engineering teams to deliver your research into production systems.

About the team
Our team owns Prime Video observability features for development teams. We consume petabytes of data daily, feeding multiple observability features focused on reducing customer impact time.
IN, KA, Bengaluru
The AURORA (Alexa Understanding, Runtime, ORchestration, and Applied sciences) org is looking for an Applied Science Manager with a background in Natural Language Processing, Machine/Deep Learning, and Large Language Models (LLMs). You will work with a team of Applied Scientists to enhance existing features and explore new possibilities enabled by LLMs. You will own high-visibility programs with global impact and interact with a cross-functional team of Science, Product, and Engineering leaders. We are looking for an Applied Science Manager who will play a key role in the next generation of AI-powered conversational assistants.

Core Leadership & Team Management
* Lead and manage applied scientists working on conversational AI capabilities across the full Alexa+ agent execution path
* Build and develop high-performing science teams focused on understanding, reasoning, evaluation, and runtime systems
* Foster cross-functional collaboration between science and engineering teams to deliver both customer-facing (CX) and developer-facing (DX) experiences

Platform & Innovation
* Develop modular, reusable platforms that enable 1P and 3P engineers and scientists to accelerate innovation
* Transform how builders create conversational AI solutions at scale
* Drive evaluation tooling and assessment frameworks to measure conversational experience quality

Cross-Organizational Collaboration
* Partner with teams across the organization to align goals and accelerate delivery
* Navigate organizational changes and simplify interactions between components and teams
* Establish clear communication channels and workflows across reporting lines

Strategic Impact
* Contribute to the team's mission of being the "AI runtime backbone and horizontal intelligence team" for Alexa
* Balance advancing research with delivering robust, scalable production solutions
* Drive both innovation velocity and operational excellence simultaneously

Key job responsibilities
* Lead and manage a team of Applied and Data Scientists responsible for building and enhancing capabilities for Alexa+
* Collaborate with cross-functional teams to build methods to align Amazon’s LLMs with human preferences
* Identify and prioritize research opportunities that have the potential to significantly impact our AI systems
* Mentor and guide team members to achieve their career goals and objectives
* Communicate research findings and progress to senior leadership and stakeholders
* Rapidly experiment and drive productisation to deliver customer impact
* Drive academic partnership with a top-tier Indian university as part of the org’s AI/ML Center initiative
* Participate in and drive science publications in peer-reviewed venues of repute

About the team
Aurora is the AI runtime backbone and horizontal intelligence team that powers Alexa's core infrastructure, AI capabilities, and specialized conversational models. We revolutionize conversational AI through three core pillars: architecting mission-critical AI runtime systems, advancing science solutions that connect key conversational capabilities, and transforming how builders create at scale. We empower 1P and 3P engineers and scientists worldwide with modular, reusable platforms that accelerate innovation, while delivering accurate, responsive, and reliable conversational experiences to millions of end users through operational excellence at scale.