Amazon senior applied scientist Jack FitzGerald, delivering a keynote talk at the joint Language Intelligence @ Work and SEMANTiCS conference in Vienna, Austria.

Scaling multilingual virtual assistants to 1,000 languages

Self-supervised training, distributed training, and knowledge distillation have delivered remarkable results, but they’re just the tip of the iceberg.

Yesterday at the joint Language Intelligence @ Work and SEMANTiCS conference in Vienna, Austria, Amazon senior applied scientist Jack FitzGerald delivered a keynote talk on multilingual virtual assistants and the path toward a massively multilingual future. This is an edited version of his talk.

The evolution of human-computer interaction paradigms

In the past 50 years, computing technology has progressed from text-based terminal inputs, to graphical user interfaces, to predominantly web-based applications, through the mobile era, and finally into the era of voice user interfaces and ambient computing.

A brief history of computing interfaces.

Each of these paradigms has posed its own challenges with respect to multilingualism, whether the migration from ASCII to Unicode or the proper rendering of characters on a website. However, I would argue that a voice AI system is the most difficult paradigm yet with respect to massive multilingualism.

The first reason is that the input space for voice interface commands is unbounded: the user can phrase each command in hundreds of different ways, all of which are valid. Another reason is that even within a single language, there can be many different dialects and accents.


Most important, the coupling between language and culture is inescapable. Whether it’s the level of formality used, preferred activities, or religious differences, there isn’t a one-size-fits-all solution. Instead, we must adapt the virtual assistant to understand cultural context and say only things that are appropriate for a given locale.

Voice AI systems today

A typical voice AI system includes automatic-speech-recognition models, which convert raw audio into text; natural-language understanding models, which determine the user’s intent and recognize named entities; a central service for arbitration and dialogue management, which routes commands to the proper services or skills; and finally, a text-to-speech model, which issues the output. Additional tasks might include expansion of the underlying knowledge graph and semantic parsing, localization of touch screen content, or local information services.
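To make the data flow concrete, here is a purely illustrative sketch of how those stages chain together; the function names, signatures, and toy return values are hypothetical and are not Alexa’s actual interfaces.

```python
# Purely illustrative sketch of the stages above; names, signatures, and
# toy return values are hypothetical, not Alexa's interfaces.
from dataclasses import dataclass, field

@dataclass
class Interpretation:
    intent: str                                  # e.g., "PlayMusic"
    slots: dict = field(default_factory=dict)    # e.g., {"artist": "Max Richter"}

def automatic_speech_recognition(audio: bytes) -> str:
    return "play something relaxing"             # placeholder transcript

def natural_language_understanding(text: str) -> Interpretation:
    return Interpretation(intent="PlayMusic", slots={"mood": "relaxing"})

def arbitrate(interpretation: Interpretation) -> str:
    return "Playing relaxing music."             # routed to a music skill

def text_to_speech(response: str) -> bytes:
    return response.encode("utf-8")              # stand-in for synthesized audio

def handle_utterance(audio: bytes) -> bytes:
    transcript = automatic_speech_recognition(audio)
    interpretation = natural_language_understanding(transcript)
    response = arbitrate(interpretation)
    return text_to_speech(response)

print(handle_utterance(b"...raw audio bytes..."))
```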

An overview of Alexa’s design.

Let’s look at some of the operational considerations for supporting multiple languages in such models. One is the training data: they must be topically exhaustive, meaning that they cover the full spectrum of possible user utterances, and they must be culturally exhaustive — for instance, covering all of the holidays a user might celebrate. They must also remain up-to-date, and it’s not always easy to add something new to the model without regression on existing functionalities.

A second consideration is in-house testing. Though in many cases one can get away with synthetic or otherwise artificial data for model training, for testing it’s important to have realistic utterances. Those typically need to come from humans, and collecting them can be a major expense. It’s also useful to perform live, interactive testing, which requires people who can speak and understand each language that the system supports.


Finally, it’s important to have the ability to support users and process their feedback. In most cases, this again requires staff who understand each of the supported languages.

Ultimately, human-based processes are not very scalable if our goal is to support thousands of languages. Instead, we must turn to technology to the greatest extent possible.

Multilingual modeling today

One of the leading reasons for the current success of multilingual text models is self-supervision.

In traditional supervised learning, a model would be trained from scratch on the desired task. If we wanted a model that would classify the sentiment of a product review, for example, we would manually annotate a bunch of product reviews, and we would use that dataset to train the model.

Today, however, we make use of transfer learning, in which text models are pretrained on terabytes of text data that don’t require manual annotation. Instead, the training procedure leverages the structure inherent to the text itself.

Self-supervised training objectives.

We’ll call this self-supervised pretraining. With the masked-language-modeling training objective, for instance, the model is fed the input “for [MASK] out loud!”, and it must predict that “[MASK]” should be filled with the word “crying”. Other objectives, such as causal language modeling, span filling, deshuffling, and denoising, can also be used.
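As a quick, hands-on illustration of this objective, the sketch below uses the Hugging Face fill-mask pipeline with a publicly available pretrained model; the model choice is mine for illustration and is not one of the models discussed here.

```python
# Minimal illustration of masked language modeling with a public pretrained model.
# Requires: pip install transformers
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

# The model must predict the token hidden behind [MASK].
for prediction in unmasker("for [MASK] out loud!", top_k=3):
    print(f"{prediction['token_str']:>10}  (score {prediction['score']:.3f})")
```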

Because the datasets required for self-supervised pretraining are unlabeled and monolingual, we can leverage troves of data, such as Common Crawl web scrapes, every Wikipedia page in existence, thousands of books and news articles, and more. Couple these large datasets with highly parallelizable architectures such as transformers, which can be trained on over a thousand GPUs with near linear scaling, and we can build models with tens or hundreds of billions of dense parameters. Such has been the focus for many people in the field for the past few years, including the Alexa Teacher Model team.

One incredible consequence of the transfer learning paradigm is called zero-shot learning. In the context of multilingual modeling, it works like this: the modeler begins by pretraining the model on some set of languages, using self-supervision. As an example, suppose that the modeler trains a model on English, French, and Japanese using every Wikipedia article in those three languages.


The next step is to adapt the model to a particular task using labeled data. Suppose that the modeler has a labeled dataset for intent classification, but only in English. The modeler can go ahead and fine-tune the model on the English data, then run it on the remaining languages.

Despite the fact that the model was never trained to do intent classification with French or Japanese data, it can still classify intents in those languages, by leveraging what it learned about those languages during pretraining. Given that the acquisition of labeled data is often a bottleneck, this property of language models is highly valuable for language expansion. Of course, zero-shot learning is just the extreme end of a continuum: transfer learning helps even out performance when the labeled data in different languages is imbalanced.

Zero-shot learning for multilingual adaptation.
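As a rough sketch of this recipe, one could fine-tune a publicly available multilingual encoder such as XLM-RoBERTa on English intent labels and then run it, unchanged, on French or Japanese utterances. The toy data and hyperparameters below are placeholders, not the setup described in the talk.

```python
# Sketch: fine-tune a multilingual encoder on English intent data, then run it
# unchanged on another language (zero-shot cross-lingual transfer).
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("xlm-roberta-base", num_labels=3)

# Tiny, purely illustrative English training set: (utterance, intent id).
english_data = [
    ("play some jazz", 0),
    ("what's the weather tomorrow", 1),
    ("set a timer for ten minutes", 2),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for utterance, label in english_data:
    batch = tokenizer(utterance, return_tensors="pt")
    loss = model(**batch, labels=torch.tensor([label])).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Zero-shot: classify a French utterance the model never saw labeled data for.
model.eval()
with torch.no_grad():
    logits = model(**tokenizer("mets du jazz", return_tensors="pt")).logits
print("predicted intent id:", logits.argmax(-1).item())
```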

The next step up the data-efficiency ladder is performing tasks without any additional training or fine-tuning, using only a handful of labeled examples or none at all. This is possible through “in-context learning,” which was popularized by the GPT-3 paper.

To perform in-context learning, simply take a pretrained model and feed it the appropriate prompts. Think of a prompt as a hint to the model about the task it should perform. Suppose that we want the model to summarize a passage. We might prefix the passage with the word “Passage” and a colon and follow it with the word “Summary” and a colon. The model would then generate a summary of the passage.


This is the zero-shot in-context learning case, meaning that no fine-tuning is performed and no labeled data are needed. To improve task performance, we can feed a few examples to the model before asking it to perform the task. Though this does require some labeled data, the amount is small, usually only tens of examples.
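The sketch below shows how such prompts might be assembled; the “Passage”/“Summary” framing follows the example above, and the demonstration pairs are placeholders.

```python
def build_prompt(passage, examples=()):
    """Assemble an in-context-learning prompt.

    With no examples this is the zero-shot case; with a few (passage, summary)
    pairs prepended, it becomes few-shot prompting.
    """
    parts = [f"Passage: {p}\nSummary: {s}\n" for p, s in examples]
    parts.append(f"Passage: {passage}\nSummary:")
    return "\n".join(parts)

# Zero-shot: the model sees only the task framing.
print(build_prompt("The committee met on Tuesday to review the budget ..."))

# Few-shot: prepend one or more worked examples before the real passage.
demos = [("Rain fell across the region overnight ...",
          "Heavy overnight rain hit the region.")]
print(build_prompt("The committee met on Tuesday to review the budget ...", demos))
```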

Our Alexa Teacher Model team recently trained and tested a 20-billion-parameter sequence-to-sequence model that was multilingual and showed strong in-context-learning performance. For example, we showed state-of-the-art performance on machine translation with in-context learning. The model can achieve competitive BLEU scores even for some low-resource languages, which is remarkable given that no parallel data was used during pretraining and no labeled data besides a single example was used at any step in the process.

We were particularly proud of the relatively small size of this model, which could compete with much larger models because it was trained on more data. (The Chinchilla model from DeepMind showed a similar result.) Though a large model trained on a smaller dataset and a smaller model trained on a larger dataset may use the same total compute at training time, the smaller model will require less compute and memory during inference, which is a key factor in real applications.

Given that models demonstrate multilingual understanding even without labeled data or parallel data, you may be wondering what’s happening inside of the model. Since the days of word2vec and earlier, we’ve represented characters, words, sentences, documents, and other inputs as vectors of floats, also known as embeddings, hidden states, and representations. Concepts cluster in certain areas of the representational space.


These representations are high-dimensional, and as humans we can visualize at most three dimensions, but you can picture the clustering in two or three dimensions as a reductive approximation. All the languages the model supports would cluster the concept of sitting in a chair in one region of the representational space; the concept of the ocean would inhabit a different cluster; and so forth.

Indeed, Pires et al. have shown that synonymous words across languages cluster together in the mBERT model. When examining 5,000 sentence pairs from the WMT16 dataset, they found that, given a sentence and its embedding in one language, the correct translation from another language is the closest embedding to the source embedding up to 75% of the time.
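A rough way to probe this effect yourself is sketched below, using mean-pooled mBERT embeddings and cosine similarity; the sentences and the pooling choice are mine for illustration and are not the exact setup of Pires et al.

```python
# Sketch: do parallel sentences land near each other in a multilingual model's
# representation space? Mean-pooled mBERT embeddings; illustrative sentences.
# Requires: pip install torch transformers
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

def embed(sentence):
    batch = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state  # (1, seq_len, hidden_size)
    return hidden.mean(dim=1).squeeze(0)           # mean-pool over tokens

english = embed("The ocean is calm today.")
french_match = embed("L'océan est calme aujourd'hui.")
french_other = embed("Le train part à huit heures.")

cos = torch.nn.functional.cosine_similarity
print("parallel pair: ", cos(english, french_match, dim=0).item())
print("unrelated pair:", cos(english, french_other, dim=0).item())
```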

This manner of clustering can also be manipulated by changing the objective function. In their work on speech-to-text modeling, Adams et al., from Johns Hopkins, were seeing undesirable clustering by language, rather than by phonemes, in the representational space. They were able to correct this by adding training objectives around phoneme prediction and language identification.
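In general terms, a fix like that amounts to adding auxiliary terms to the loss, roughly as in the schematic below; the heads and weights are hypothetical and are not the exact formulation of Adams et al.

```python
# Schematic: steer the shared representation space with auxiliary objectives.
# The heads and weights are illustrative, not the formulation from the paper.
import torch.nn.functional as F

def total_loss(shared_features, targets, heads, phoneme_weight=0.3, lang_id_weight=0.1):
    main = F.cross_entropy(heads["main"](shared_features), targets["main"])
    # Auxiliary objective 1: predict phonemes, encouraging phoneme-based clustering.
    phoneme = F.cross_entropy(heads["phoneme"](shared_features), targets["phoneme"])
    # Auxiliary objective 2: identify the language, giving the model an explicit
    # signal for language information beyond the shared features.
    lang_id = F.cross_entropy(heads["lang_id"](shared_features), targets["lang_id"])
    return main + phoneme_weight * phoneme + lang_id_weight * lang_id
```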

The Alexa Teacher Model distillation pipeline

Once we have multilingual models, how do we adapt them to a real system? At the recent KDD conference, we presented a paper describing the Alexa Teacher Model pipeline, consisting of the following steps.

First, a multilingual model with billions of parameters is trained on up to a trillion tokens taken from Common Crawl web scrapes, Wikipedia articles, and more. Second, the models are further trained on in-domain, unlabeled data from a real system. Third, the model is distilled into smaller sizes that can be used in production. The final models can then be fine-tuned using labeled data and deployed.

The Alexa Teacher Model (AlexaTM) pipeline. The Alexa Teacher Model is trained on a large set of GPUs (left), then distilled into smaller variants (center), whose size depends on their uses. The end user adapts a distilled model to its particular use by fine-tuning it on in-domain data (right).
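The distillation step can be sketched, in its generic form, as the standard soft-target objective below; the temperature, mixing weight, and any resemblance to the exact AlexaTM objective are illustrative assumptions, not settings reported in the paper.

```python
# Sketch of knowledge distillation: a small student is trained to match the soft
# predictions of a large teacher. Temperature and mixing weight are illustrative.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    # Soft targets: the student's distribution should match the teacher's.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    # Hard targets: the student should still predict the true labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```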

In tests, we found that our model was more accurate than a publicly available pretrained model fine-tuned on labeled data, and it significantly reduced customer dissatisfaction relative to a model trained by a smaller teacher model (85 million parameters, say, instead of billions). In short, we’ve verified that we can leverage the additional learning capacity of large, multilingual models for production systems requiring low latency and low memory consumption.

Scaling to 1,000 languages

I mentioned the fascinating ability of language models to learn joint representations of multiple languages without labeled or parallel data. This ability is crucial for us to scale to many languages. However, as we scale, we need test data that we can trust so that we can evaluate our progress.


Toward this end, my team at Amazon recently released a new benchmark for multilingual natural-language understanding called MASSIVE, which is composed of one million labeled records spanning 51 languages, 18 domains, 60 intents, and 55 slots. All of the data were created by native speakers of the languages. We also released a GitHub repository with code that can be used as a baseline for creating multilingual NLU models, as well as leaderboards on eval.ai.
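If you’d like to explore the benchmark, it can be loaded with the Hugging Face datasets library roughly as below; the dataset identifier, locale configuration, and field names reflect my understanding of the public release, so check the GitHub repository for the exact details.

```python
# Sketch: load one locale of the MASSIVE benchmark.
# Requires: pip install datasets
# The dataset identifier and field names below reflect my understanding of the
# public release; consult the MASSIVE repository for the exact details.
from datasets import load_dataset

massive_en = load_dataset("AmazonScience/massive", "en-US", split="train")
example = massive_en[0]
print(example["utt"], "->", example["intent"])
```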

Now, you may retort that 51 languages is still a long way from 1,000 languages. This is true, but we purposefully chose our languages to maximize typological diversity while staying within our budget. Our languages span 29 language genera, 14 language families, and 21 distinct scripts or alphabets. The diversity of the chosen languages allows a modeler to test technology that should scale to many more languages within each represented genus, family, and script.

That said, we certainly have some major gaps in language coverage, including across native North and South American languages, African languages, and Australian languages. Yet we are optimistic that our fellow researchers across the field will continue to produce new labeled benchmark datasets for the world’s thousands of low-resource languages.

The 51 languages of MASSIVE, including scripts and genera.

Another difficulty with our current modeling approaches is that they rely on data sources such as web scrapes, encyclopedic articles, and news articles, which are highly skewed toward a small set of languages. Wang, Ruder, and Neubig recently presented some fascinating work leveraging bilingual lexicons — corpora consisting of word-level translations — to improve language model performance for low-resource languages. Lexicons cover a far greater portion of the world’s languages than our typical data sources for language modeling, making this an exciting approach.
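One simple way to put a bilingual lexicon to work is to synthesize pseudo-text in a low-resource language by substituting words in high-resource sentences, as in the toy sketch below; this illustrates the general idea only and is not the specific method of Wang, Ruder, and Neubig.

```python
# Toy sketch: synthesize pseudo low-resource text from high-resource sentences
# by word-level substitution through a bilingual lexicon. The lexicon is made up.
def lexicon_substitute(sentence, lexicon):
    # Replace every word found in the lexicon; keep everything else as-is.
    return " ".join(lexicon.get(word, word) for word in sentence.lower().split())

lexicon = {"water": "wai", "good": "pai", "morning": "ata"}   # invented entries
print(lexicon_substitute("Good morning the water is cold", lexicon))
# -> "pai ata the wai is cold"
```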


Researchers, missionaries, and businesspeople have been creating fundamental linguistic resources for decades, from Bible translations to the Unimorph corpus. The Unimorph datasets are used for the SIGMORPHON shared task, in which a model must predict the correct form of a word given that word’s root and certain morphological transformations, such as part of speech, tense, and person. We must find more ways to leverage such resources when creating massively multilingual voice AI systems.
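To give a flavor of these resources, Unimorph-style data consists of (lemma, inflected form, feature bundle) triples, and the shared task asks a model to produce the form from the lemma and the features; the tab-separated layout assumed below is the common one, though details may vary by release.

```python
# Sketch: read Unimorph-style triples (lemma, inflected form, feature bundle).
# Assumes the common tab-separated layout; details may vary by release.
sample = """run\tran\tV;PST
run\trunning\tV;V.PTCP;PRS
mouse\tmice\tN;PL"""

for line in sample.splitlines():
    lemma, form, features = line.split("\t")
    # The shared task: predict `form` given `lemma` and `features`.
    print(f"{lemma} + {features} -> {form}")
```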

As a final technique for scaling to many more languages, we can consider what we in Alexa call “self-learning.” Some of my Alexa colleagues published a paper showing that we can mine past utterances to improve overall system performance. For example, if a user rephrases a request as part of a multiturn interaction, as shown on the left in the figure below, or if different users provide variations for the same desired goal, as shown on the right, then we can make soft assumptions that the different formulations are synonymous.

All of these cases can be statistically aggregated to form new training sets to update the system, without the need to manually annotate utterances. In a multilingual system, such technology is particularly valuable after the initial launch of a language, both to improve performance generally and to adapt to changes in the lexicon.

Alexa’s self-learning mechanism.
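A highly simplified version of that aggregation might look like the sketch below: if a failed request in a session is quickly followed by a successful rephrase, treat the pair as a candidate correction, and keep only pairs that many users converge on. The log structure and threshold are hypothetical.

```python
# Toy sketch of self-learning from rephrases. The log fields and the
# threshold are hypothetical, not Alexa's actual signals.
from collections import Counter

def mine_rephrase_pairs(sessions, min_count=25):
    candidates = Counter()
    for turns in sessions:  # each session: list of (utterance, succeeded) in order
        for (first, ok_first), (second, ok_second) in zip(turns, turns[1:]):
            if not ok_first and ok_second:
                candidates[(first, second)] += 1
    # Only trust formulations that many users converge on.
    return {pair: n for pair, n in candidates.items() if n >= min_count}

sessions = [[("play frozen soundtrack", False), ("play the frozen soundtrack", True)]] * 30
print(mine_rephrase_pairs(sessions))
```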

The road ahead

I hope that you share my wonder at the current state of the art — the scale of language-model training, the magic of zero-shot learning, and the distillation of knowledge into compact models that can run in latency-sensitive systems. All of this is incredible, but we’ve only scratched the surface of supporting the world’s 7,000 languages.

To move into the next era of massive multilingualism, we must build new and increasingly powerful models that can take advantage of low-cost data, particularly unlabeled monolingual data. We must also build models that can leverage existing and upcoming linguistic resources, such as bilingual lexicons and morphological-transformation databases. And finally, we must expand available language resources across more languages and domains, including more unlabeled monolingual corpora, more parallel resources, and more realistic, labeled, task-specific datasets.

Increased multilingualism is a win for all people everywhere. Each language provides a unique perspective on the world in which we live. A rich plurality of perspectives leads to a deeper understanding of our fellow people and of all creation.

Keep building.
