Advances in trustworthy machine learning at Alexa AI

The team’s latest research on privacy-preserving machine learning, federated learning, and bias mitigation.

At Amazon, we take the protection of customer data very seriously. We are also committed to eliminating the biases that can exist in off-the-shelf language models — such as GPT-3 and RoBERTa — that are the basis of most modern natural-language processing. Trained on public texts, these language models are known to reflect the biases implicit in those texts.

These two topics — privacy protection and fairness — are at the core of trustworthy machine learning, an important area of research at Alexa AI. In 2021, we made contributions in the following areas:

  • Privacy-preserving machine learning: Differential privacy provides a rigorous way to quantify the privacy of machine learning models. We investigated vulnerabilities presented in the differential-privacy literature and proposed computationally efficient mechanisms for protecting against them.
  • Federated learning: Federated learning (FL) is a distributed-training technique that keeps customer data on-device. Devices send only model parameter updates to the cloud, not raw data. We studied several FL challenges arising in an industrial setting.
  • Fairness in machine learning: Machine learning (ML) models should perform equally well regardless of who’s using them. But even knowing how to quantify fairness is a challenge. We introduced measures of fairness and methods to mitigate bias in ML models.
To reduce binary-gender disparity in a distilled GPT-2 language model, we introduce counterfactual examples, in which binary genders in real-world training examples are swapped.

Below, we summarize our research in these areas, which will be presented at ACL and ICASSP later this year. We also invite readers to participate in workshops and sessions we are organizing at NAACL 2022 and Interspeech 2022.

1. Privacy-preserving ML

The intuition behind differential privacy (DP) is that access to the outputs of a model should not provide any hint about what inputs were used to train the model. DP quantifies that intuition as a difference (in probabilities) between the outputs of a model trained on a given dataset and the outputs of the same model trained on the same dataset after a single input is removed.
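For reference, this intuition is usually formalized as (ε, δ)-differential privacy; the statement below is the standard textbook definition, not notation specific to our papers:

```latex
% (epsilon, delta)-differential privacy: a randomized mechanism M is
% (epsilon, delta)-DP if, for all neighboring datasets D and D' that
% differ in a single record, and for every set S of possible outputs,
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta
```

Smaller ε and δ mean the two output distributions are harder to tell apart, and hence that the presence or absence of any single training example is better hidden.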

One way to meet a DP privacy guarantee is to add some noise to the model parameters during training in order to obfuscate their relationship to training data. But this can compromise accuracy. The so-called privacy/utility tradeoff appears in every DP application.

Another side effect of adding a DP mechanism is increased training time. Given that training natural-language-understanding (NLU) models with large volumes of data can be prohibitively slow and that industry standards require fast training and deployment — e.g., when new features are being released — we developed a training method that meets DP requirements but remains efficient. We describe the method in a paper we’re presenting at this year’s ICASSP, “An efficient DP-SGD mechanism for large scale NLP models”.

In this work, we study the most popular DP mechanism for deep neural networks, DP-SGD, and build a computationally efficient alternative, eDP-SGD, in which we use a batch-processing scheme that leverages the GPU architecture and automates part of the hyperparameter-tuning process. While both DP-SGD and eDP-SGD provide the same privacy guarantees, we show that the training time for our mechanism is very similar to its non-DP counterpart’s, whereas the original DP-SGD can extend training time by as much as 130-fold.
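For readers unfamiliar with the baseline, the vanilla DP-SGD recipe clips each per-example gradient and adds Gaussian noise before the optimizer step. The PyTorch-style sketch below is only illustrative: the per-example loop is exactly the overhead that batched schemes such as eDP-SGD avoid, and the model, clipping norm, and noise multiplier are placeholder choices, not values from the paper.

```python
import torch

def dp_sgd_step(model, loss_fn, batch_x, batch_y, optimizer,
                clip_norm=1.0, noise_multiplier=1.1):
    """One vanilla DP-SGD step: clip each per-example gradient to an L2
    bound, sum the clipped gradients, add Gaussian noise, then step."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed_grads = [torch.zeros_like(p) for p in params]

    for x, y in zip(batch_x, batch_y):
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        # Rescale this example's gradient so its total L2 norm <= clip_norm.
        total_norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in params))
        scale = min(1.0, (clip_norm / (total_norm + 1e-12)).item())
        for g_sum, p in zip(summed_grads, params):
            g_sum.add_(p.grad, alpha=scale)

    # Noise scale follows the clipped sensitivity; average over the batch.
    for p, g_sum in zip(params, summed_grads):
        noise = torch.normal(0.0, noise_multiplier * clip_norm, size=p.shape)
        p.grad = (g_sum + noise) / len(batch_x)
    optimizer.step()
```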

Since we did our study, researchers have developed methods with stronger theoretical DP guarantees than the ones we impose in our paper, but our approach is consistent with those methods. Overall, this work makes DP more generally accessible and helps us integrate NLU models with DP guarantees into our production systems, where new models are frequently released, and a significant increase in training time is prohibitive.

While DP provides theoretical privacy guarantees, we are also interested in practical guarantees, i.e., measuring the amount of information that could potentially leak from a given model. In addition to the performance and training time of eDP-SGD, we also studied the correlation between theoretical and practical privacy guarantees. We measured practical privacy leakage using the most common method in the field, the success rate of membership inference attacks on a given model. Our experiments provide a general picture of how to optimize the privacy/utility trade-off using DP techniques for NLU models.
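As a concrete illustration of what such a practical measurement can look like, one of the simplest membership inference attacks thresholds the model’s loss on a candidate example and guesses “member” when the loss is low; the attacks used in our experiments may be more sophisticated, and the function below is just a sketch.

```python
import numpy as np

def membership_inference_success(member_losses, nonmember_losses, threshold):
    """Loss-threshold membership inference: examples with loss below the
    threshold are guessed to be training members. The balanced accuracy of
    these guesses is a practical proxy for privacy leakage."""
    member_losses = np.asarray(member_losses)
    nonmember_losses = np.asarray(nonmember_losses)
    tpr = np.mean(member_losses < threshold)       # members correctly flagged
    tnr = np.mean(nonmember_losses >= threshold)   # non-members correctly passed
    return 0.5 * (tpr + tnr)                       # 0.5 = no better than chance
```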

We also expanded the set of mechanisms for protecting NLU models against other types of attacks. In “Canary extraction in natural language understanding models”, which we will present at ACL 2022, we study the vulnerability of text classification models to a certain kind of white-box attack called a model inversion attack (ModIvA), in which a fictional attacker has access to the entire set of model parameters and attempts to retrieve examples used during training. Existing model inversion techniques apply to models with either continuous inputs or continuous outputs. In our work, we adopt a similar approach for text classification tasks, where both inputs and outputs are discrete.

As new model architectures are developed that might display new types of vulnerabilities, we will continue innovating efficient ways of protecting our customers’ privacy.

2. Federated learning

The idea behind federated learning (FL) is that, during the training of an ML model, part of the computation is delegated to customers’ devices, leveraging the processing power of those devices while avoiding the centralization of privacy-sensitive datasets. Each device modifies a common, shared model according to locally stored data, then sends an updated model to a central server that aggregates model updates and sends a new shared model to all the devices. At each round, the central server randomly selects a subset of active devices and requests that they perform updates.

With federated learning, devices send model updates, not data, to a central server.
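In its simplest form, the aggregation described above is federated averaging (FedAvg): the server averages the devices’ returned weights, weighted by how many local examples each device trained on. The sketch below is a minimal illustration; client_update stands in for whatever local training a device performs and is not an API from our systems.

```python
import random

def federated_round(global_weights, devices, client_update, num_selected):
    """One round of federated averaging (FedAvg)."""
    selected = random.sample(devices, num_selected)  # uniform device selection
    updates, sizes = [], []
    for device in selected:
        # Each device trains locally and reports weights + local example count.
        local_weights, num_examples = client_update(global_weights, device)
        updates.append(local_weights)
        sizes.append(num_examples)

    total = float(sum(sizes))
    # The size-weighted average becomes the next shared model.
    return [
        sum(w[i] * (n / total) for w, n in zip(updates, sizes))
        for i in range(len(global_weights))
    ]
```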

In the past year, we have made progress toward more-efficient FL and adapted common FL techniques to the industrial setting. For instance, in “Learnings from federated learning in the real world”, which we will present at ICASSP this year, we explore device selection strategies that differ from the standard uniform selection. In particular, we present the first study of device selection based on device “activity” — i.e., the number of available training samples.

These simple selection strategies are lightweight compared to existing methods, which require heavy computation from all the devices. They are thus more suitable to industrial applications, where millions of devices are involved. We study two different settings: the standard “static” setting, where all the data are available at once, and the more realistic “continual” setting, where customers generate new data over time, and past examples might have to be deleted to save storage space. Our experiments on training a language model with FL show that non-uniform sampling outperforms uniform sampling when applied to real-world data, for both the static and continual settings.
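As a rough sketch of what activity-based selection can look like, the helper below samples devices with probability proportional to their number of locally available training samples; the exact strategies evaluated in the paper may differ.

```python
import numpy as np

def sample_devices_by_activity(device_sample_counts, num_selected, rng=None):
    """Non-uniform device selection: sampling probability is proportional
    to each device's 'activity', i.e., its number of local training samples.

    device_sample_counts: dict mapping device_id -> local sample count.
    """
    rng = rng or np.random.default_rng()
    device_ids = list(device_sample_counts)
    counts = np.array([device_sample_counts[d] for d in device_ids], dtype=float)
    probs = counts / counts.sum()
    chosen = rng.choice(len(device_ids), size=num_selected, replace=False, p=probs)
    return [device_ids[i] for i in chosen]
```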

We also expanded our understanding of FL for natural-language processing (NLP) and, in the process, made FL more accessible to the NLP community. In “FedNLP: A research platform for federated learning in natural language processing”, which will be presented later this year at NAACL, we and our colleagues at the University of Southern California and FedML systematically compare the most popular FL algorithms for four mainstream NLP tasks. We also present different methods to generate dataset partitions that are not independent and identically distributed (IID), as real-world FL methods must be robust against shifts in the distributions of the data used to train ML models.
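A widely used way to simulate such non-IID partitions in FL research is to draw each client’s share of every class from a Dirichlet distribution, so that smaller concentration values produce more skewed clients. The sketch below illustrates that general recipe; it is not necessarily the exact partitioning code in FedNLP.

```python
import numpy as np

def dirichlet_label_partition(labels, num_clients, alpha=0.5, rng=None):
    """Assign example indices to clients with Dirichlet label skew.

    For each class, the fraction of its examples given to each client is
    drawn from Dirichlet(alpha); smaller alpha -> more non-IID clients.
    """
    rng = rng or np.random.default_rng()
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = rng.permutation(np.where(labels == cls)[0])
        shares = rng.dirichlet(alpha * np.ones(num_clients))
        # Cut points proportional to each client's Dirichlet share.
        cuts = (np.cumsum(shares)[:-1] * len(cls_idx)).astype(int)
        for client_id, chunk in enumerate(np.split(cls_idx, cuts)):
            client_indices[client_id].extend(chunk.tolist())
    return client_indices
```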

Our analysis reveals that there is still a large gap between centralized and decentralized training under various settings, and we highlight several directions in which FL for NLP can advance. The paper represents Amazon’s contribution to the open-source framework FedNLP, which is capable of evaluating, analyzing, and developing FL methods for NLP. The codebase contains non-IID partitioning methods, enabling easy experimentation to advance the state of FL research for NLP.

We also designed methods to account for the naturally heterogeneous character of customer-generated data and applied FL to a wide variety of NLP tasks. We are aware that FL still presents many challenges, such as how to do evaluation when access to data is removed, on-device label generation for supervised tasks, and privacy-preserving communication between the server and the different devices. We are actively addressing each of these and plan to leverage our findings to improve FL-based model training and enhance associated capabilities such as analytics and model evaluation.

3. Fairness in ML

Natural-language-processing applications’ increased reliance on large language models trained on intrinsically biased web-scale corpora has amplified the importance of accurate fairness metrics and procedures for building more robust models.

In “On the intrinsic and extrinsic fairness evaluation metrics for contextualized language representations”, which we are presenting at ACL 2022, we compare two families of fairness metrics — namely extrinsic and intrinsic — that are widely used for language models. Intrinsic metrics directly probe into the fairness of language models, while extrinsic metrics evaluate the fairness of a whole system through predictions on downstream tasks.

For example, the contextualized embedding association test (CEAT), an intrinsic metric, measures bias through word embedding distances in semantic vector spaces, and the extrinsic metric HateXPlain measures the bias in a downstream hate speech detection system.
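To make the embedding-distance idea concrete, association tests in the WEAT/CEAT family compare how close two sets of target words (e.g., male vs. female terms) sit to two sets of attribute words (e.g., career vs. family terms) in embedding space. The sketch below computes a standard WEAT-style effect size; CEAT additionally samples many contextualized embeddings per word, which is omitted here.

```python
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association_effect_size(target_x, target_y, attr_a, attr_b):
    """WEAT-style effect size over sets of word embeddings.

    target_x, target_y: embeddings for two target groups (e.g., male/female terms).
    attr_a, attr_b: embeddings for two attribute groups (e.g., career/family words).
    Values near 0 indicate little measured association bias.
    """
    def s(w):  # differential association of one target word
        return (np.mean([cosine(w, a) for a in attr_a])
                - np.mean([cosine(w, b) for b in attr_b]))

    x_scores = np.array([s(x) for x in target_x])
    y_scores = np.array([s(y) for y in target_y])
    pooled_std = np.std(np.concatenate([x_scores, y_scores]), ddof=1)
    return (x_scores.mean() - y_scores.mean()) / pooled_std
```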

Our experiments show that inconsistencies between intrinsic and extrinsic metrics often reflect inconsistencies between the datasets used to evaluate them, and a clear understanding of bias in ML models requires more careful alignment of evaluation data. The results we report in the paper can help guide the NLP community as to how to best conduct fairness evaluations.

We have also designed new measures of fairness that are adapted to language-processing applications. In “Measuring fairness of text classifiers via prediction sensitivity”, which we will present at ACL 2022, we look at sensitivity to input perturbations as a way to measure fairness in ML models. The metric attempts to quantify the extent to which a single prediction depends on an input feature that encodes membership in an underrepresented group.

Our new bias measure, accumulated prediction sensitivity, combines the outputs of two models, a task classifier (TC) and a protected status model (PSM).

We provide a theoretical analysis of our formulation and show a statistically significant difference between our metric’s correlation with human judgments of fairness and that of an existing counterfactual fairness metric.
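The sketch below conveys the flavor of such a gradient-based sensitivity score: the task classifier’s prediction is differentiated with respect to the input, and the resulting sensitivities are weighted by a protected status model’s per-feature scores. It is a simplified illustration, not the exact formulation from the paper, and both model functions are placeholders.

```python
import torch

def accumulated_prediction_sensitivity(task_logits_fn, psm_scores_fn, x):
    """Illustrative gradient-based fairness score combining a task
    classifier (TC) and a protected status model (PSM).

    task_logits_fn: maps an input feature vector x to TC logits.
    psm_scores_fn:  maps x to per-feature protected-status weights.
    """
    x = x.clone().detach().requires_grad_(True)
    logits = task_logits_fn(x)
    predicted = logits.argmax(dim=-1)
    # Gradient of the predicted-class score with respect to the input.
    logits.gather(-1, predicted.unsqueeze(-1)).sum().backward()
    sensitivity = x.grad.abs()
    weights = psm_scores_fn(x.detach())  # how strongly each feature encodes protected status
    return (sensitivity * weights).sum().item()
```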

Finally, we proposed a method to mitigate the biases of large language models during knowledge distillation, in which a smaller, more efficient model is trained to match the language model’s output on a particular task. Because large language models are trained on public texts, they can be biased in multiple ways, including the unfounded association of male or female genders with gender-neutral professions.

Examples of texts generated by language models in response to gendered prompts before and after the application of our distillation method.

In another ACL paper, “Mitigating gender bias in distilled language models via counterfactual role reversal”, we introduce two modifications to the standard distillation mechanisms: data augmentation and teacher prediction perturbation.
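As a rough illustration of the data-augmentation half of the method (the teacher-prediction perturbation is not shown), counterfactual examples can be generated by swapping binary-gendered words using a lexicon of word pairs; the lexicon below is a tiny illustrative subset, not the list used in the paper.

```python
# Tiny illustrative swap lexicon; the real augmentation uses a much larger list.
GENDER_SWAP = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "man": "woman", "woman": "man",
    "father": "mother", "mother": "father",
    "actor": "actress", "actress": "actor",
}

def counterfactual_example(text):
    """Swap binary-gendered words to build a counterfactual training example."""
    return " ".join(GENDER_SWAP.get(tok.lower(), tok) for tok in text.split())

original = "he is a nurse and she is a doctor"
augmented_pair = [original, counterfactual_example(original)]
# -> ["he is a nurse and she is a doctor", "she is a nurse and he is a doctor"]
```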

We use our method to distill a GPT-2 language model for a text-generation task and demonstrate a substantial reduction in gender disparity, with only a minor reduction in utility. Interestingly, we find that reduced disparity in open-ended text generation may not necessarily lead to fairness on other downstream tasks. This finding underscores the importance of evaluating language model fairness along multiple metrics and tasks.

Our work on fairness in ML for NLP applications should help enable models that are more robust against the inherent biases of text datasets. There remain plenty of challenges in this field, but we strive to build models that offer the same experience to any customer, wherever and however they choose to interact with Alexa.
