Auto Machine Translation and Synchronization for "Dive into Deep Learning"

A system built on Amazon Translate reduces the workload of human translators.

Dive into Deep Learning (D2L.ai) is an open-source textbook that makes deep learning accessible to everyone. It features interactive Jupyter notebooks with self-contained code in PyTorch, JAX, TensorFlow, and MXNet, as well as real-world examples, exposition figures, and math. So far, D2L has been adopted by more than 400 universities around the world, such as the University of Cambridge, Stanford University, the Massachusetts Institute of Technology, Carnegie Mellon University, and Tsinghua University.


As a result of the book’s widespread adoption, a community of contributors has formed to work on translations into various languages, including Chinese, Japanese, Korean, Portuguese, Turkish, and Vietnamese. To handle these languages efficiently, we developed the Auto Machine Translation and Synchronization (AMTS) system using Amazon Translate, which aims to reduce the workload of human translators by 80%. AMTS can be applied to any of the target languages, and each language-specific sub-AMTS pipeline has its own features, based on the characteristics of the language and the preferences of its translators.

In this blog post, we discuss how we built the AMTS framework architecture, its sub-pipelines, and the building blocks of each sub-pipeline. We demonstrate and analyze translations between two language pairs: English ↔ Chinese and English ↔ Spanish. Based on these analyses, we recommend best practices for ensuring translation quality and efficiency.

Framework overview

Customers can use Amazon Translate’s Active Custom Translation (ACT) feature to customize translation output on the fly by providing tailored translation examples in the form of parallel data. Parallel data consists of a collection of textual examples in a source language and the desired translations in one or more target languages. During translation, ACT automatically selects the most relevant segments from the parallel data and updates the translation model on the fly based on those segment pairs. This results in translations that better match the style and content of the parallel data.

The AMTS framework consists of multiple sub-pipelines, each of which handles one language translation — English to Chinese, English to Spanish, etc. Multiple translation sub-pipelines can be processed in parallel.

Fundamentally, the sub-pipeline consists of the following steps:

  • Prepare parallel data: The parallel data consists of a list of textual example pairs, in a source language (e.g., English) and a target language (e.g., Chinese). With AMTS, we first prepare the two language datasets and then combine them into one-to-one pairs.
  • Translate through batch jobs: We use the Amazon Translate CreateParallelData API call to import the input file from Amazon Simple Storage Service (Amazon S3) and create a parallel-data resource in Amazon Translate, ready for batch translation jobs. With that parallel-data resource, we customize Amazon Translate and use its asynchronous batch operation to translate a set of documents in the source language in bulk. The translated documents in the target language are stored in Amazon S3.
[Figure: end-to-end flow of parallel data through the AMTS pipeline]

Parallel-data preparation and creation

In the parallel-data preparation step, we build the parallel-data set out of the source documents (sections of the D2L-en book) and translations produced by professional human translators (e.g., the corresponding sections of the D2L-zh book). The software module extracts the text from both documents, ignoring code and image blocks, and pairs the paragraphs up, storing them in a CSV file. Examples of parallel-data pairs are shown below.

English: Nonetheless, language models are of great service even in their limited form. For instance, the phrases “to recognize speech” and “to wreck a nice beach” sound very similar. This can cause ambiguity in speech recognition, which is easily resolved through a language model that rejects the second translation as outlandish. Likewise, in a document summarization algorithm it is worthwhile knowing that “dog bites man” is much more frequent than “man bites dog”, or that “I want to eat grandma” is a rather disturbing statement, whereas “I want to eat, grandma” is much more benign.

Chinese: 尽管如此,语言模型依然是非常有用的。例如,短语“to recognize speech”和“to wreck a nice beach”读音上听起来非常相似。这种相似性会导致语音识别中的歧义,但是这很容易通过语言模型来解决,因为第二句的语义很奇怪。同样,在文档摘要生成算法中,“狗咬人”比“人咬狗”出现的频率要高得多,或者“我想吃奶奶”是一个相当匪夷所思的语句,而“我想吃,奶奶”则要正常得多。

English: Machine translation refers to the automatic translation of a sequence from one language to another. In fact, this field may date back to 1940s soon after digital computers were invented, especially by considering the use of computers for cracking language codes in World War II. For decades, statistical approaches had been dominant in this field before the rise of end-to-end learning using neural networks. The latter is often called neural machine translation to distinguish itself from statistical machine translation that involves statistical analysis in components such as the translation model and the language model.

Chinese: 机器翻译(machine translation)指的是将序列从一种语言自动翻译成另一种语言。事实上,这个研究领域可以追溯到数字计算机发明后不久的20世纪40年代,特别是在第二次世界大战中使用计算机破解语言编码。几十年来,在使用神经网络进行端到端学习的兴起之前,统计学方法在这一领域一直占据主导地位

English: Emphasizing end-to-end learning, this book will focus on neural machine translation methods. Different from our language model problem in the last section, whose corpus is in one single language, machine translation datasets are composed of pairs of text sequences that are in the source language and the target language, respectively. Thus, instead of reusing the preprocessing routine for language modeling, we need a different way to preprocess machine translation datasets. In the following, we show how to load the preprocessed data into mini batches for training.

Chinese: 本书的关注点是神经网络机器翻译方法,强调的是端到端的学习。与 上节中的语料库是单一语言的语言模型问题存在不同,机器翻译的数据集是由源语言和目标语言的文本序列对组成的。因此,我们需要一种完全不同的方法来预处理机器翻译数据集,而不是复用语言模型的预处理程序。下面,我们看一下如何将预处理后的数据加载到小批量中用于训练
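The pairing step itself is simple. The sketch below is an illustration under our assumptions, not the exact AMTS module: it takes English and Chinese paragraphs that have already been extracted (with code and image blocks removed) and aligned one-to-one, and writes them to a two-column CSV whose header row holds the language codes, the layout Amazon Translate expects for CSV parallel data.

import csv

def write_parallel_data(en_paragraphs, zh_paragraphs, out_path):
    """Pair aligned source/target paragraphs and store them in a CSV file."""
    assert len(en_paragraphs) == len(zh_paragraphs), "inputs must be aligned one-to-one"
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["en", "zh"])  # header row: source and target language codes
        for en, zh in zip(en_paragraphs, zh_paragraphs):
            writer.writerow([en.strip(), zh.strip()])

# Example usage with a single pair; in practice the lists hold whole D2L sections.
write_parallel_data(
    ["Machine translation refers to the automatic translation of a sequence from one language to another."],
    ["机器翻译(machine translation)指的是将序列从一种语言自动翻译成另一种语言。"],
    "d2l-parallel-data.csv",
)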

When the parallel data file is created and ready to use, we upload it to a folder in an S3 bucket and use CreateParallelData to kick off a creation job in Amazon Translate. If we only want to update an existing parallel-data resource with new inputs, the UpdateParallelData API call is the right one to make.
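As an illustration, the creation job can be started with a few lines of boto3. The bucket, object key, and description below are placeholders; the resource name matches the one used in the batch-translation example later in this post.

import boto3

translate_client = boto3.client('translate')

response = translate_client.create_parallel_data(
    Name='d2l-parallel-data_v2',
    Description='D2L en-zh paragraph pairs',   # placeholder description
    ParallelDataConfig={
        # Placeholder S3 location of the CSV file prepared in the previous step.
        'S3Uri': 's3://my-d2l-translation-bucket/parallel-data/d2l-parallel-data.csv',
        'Format': 'CSV'
    }
)
print(response['Name'], response['Status'])    # status starts as CREATING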

Once the job is completed, we can find the parallel-data resource in the Amazon Translate management console. The resource can be managed further in the AWS console through the download, update, and delete buttons, as well as through the AWS CLI and the public API.

Asynchronous batch translation with parallel data

After the parallel-data resource is created, the next step in the sub-pipeline is to use the Amazon Translate StartTextTranslationJob API call to initiate an asynchronous batch translation. The sub-pipeline uploads the source files to an Amazon S3 bucket folder.

One batch job can handle translation of multiple source documents, and the output files will be put in another S3 bucket folder. In addition to the input and output data configurations, the source language, target language, and prepared parallel-data resource are also specified as parameters in the API invocation.

src_lang = "en" 
tgt_lang =  "zh"
src_fdr = "input-short-test-en2zh"

pd_name = "d2l-parallel-data_v2"

response = translate_client.start_text_translation_job(
            JobName='D2L1',
            InputDataConfig={
                'S3Uri': 's3://'+S3_BUCKET+'/'+src_fdr+'/',
                'ContentType': 'text/html'
            },
            OutputDataConfig={
                'S3Uri': 's3://'+S3_BUCKET+'/output/',
            },
            DataAccessRoleArn=ROLE_ARN,
            SourceLanguageCode=src_lang,
            TargetLanguageCodes=[tgt_lang, ],
            ParallelDataNames=pd_name
)

Depending on the number of input files, the job takes minutes to hours to complete. We can find the job configurations and statuses, including the output file location, on the Amazon Translate management console.

The translated documents are available in the output S3 folder, with the filename <target language>.<source filename>. Users can download them and perform further evaluation.
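For unattended runs, the job status can also be polled programmatically rather than checked in the console. A minimal sketch, assuming the response object returned by start_text_translation_job above is still in scope:

import time

job_id = response['JobId']

# Poll until the batch job reaches a terminal state; possible statuses include
# SUBMITTED, IN_PROGRESS, COMPLETED, COMPLETED_WITH_ERROR, FAILED, and STOPPED.
while True:
    job = translate_client.describe_text_translation_job(JobId=job_id)
    status = job['TextTranslationJobProperties']['JobStatus']
    if status in ('COMPLETED', 'COMPLETED_WITH_ERROR', 'FAILED', 'STOPPED'):
        break
    time.sleep(60)

# The S3 prefix that holds the translated documents.
print(status, job['TextTranslationJobProperties']['OutputDataConfig']['S3Uri'])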

Using parallel data yields better translations

To evaluate translation performance in each sub-pipeline, we selected five articles from the English version of D2L and translated them into Chinese through the en-zh sub-pipeline. We then calculated the BLEU score of each translated document. The BLEU (BiLingual Evaluation Understudy) score measures the similarity of the AMTS-translated output to a reference translation produced by a human translator. The score lies between 0 and 1; the higher the score, the better the translation quality.
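The scoring itself can be reproduced with any standard BLEU implementation. The sketch below is our illustration (not necessarily the script used for these experiments); it computes a document-level BLEU with NLTK, tokenizing Chinese at the character level to sidestep word segmentation:

from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def document_bleu(candidate_segments, reference_segments, char_level=False):
    """Score a machine-translated document against a single human reference."""
    tokenize = list if char_level else str.split
    hypotheses = [tokenize(seg) for seg in candidate_segments]
    references = [[tokenize(seg)] for seg in reference_segments]  # one reference per segment
    return corpus_bleu(references, hypotheses,
                       smoothing_function=SmoothingFunction().method1)

# Example: character-level scoring of one Chinese segment (value is between 0 and 1).
mt_output = ["机器翻译指的是将序列从一种语言自动翻译成另一种语言。"]
human_ref = ["机器翻译(machine translation)指的是将序列从一种语言自动翻译成另一种语言。"]
print(document_bleu(mt_output, human_ref, char_level=True))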

We then compare the AMTS-generated results with a translation of the same document produced by the traditional method (without parallel data). The traditional method is implemented with the TranslateText API call, whose parameters include the source text and the source and target language codes.

src_lang = "en" 
tgt_lang =  "zh"    
    
 response = translate_client.translate_text(
         Text = text, 
         TerminologyNames = [],
         SourceLanguageCode = src_lang, 
         TargetLanguageCode = tgt_lang
)

The translation results are compared in the following table, for both English-to-Chinese and Chinese-to-English translation. We observe that translation with parallel data improves on the traditional method in nearly every case.

| Article | EN to ZH, without ACT | EN to ZH, with ACT | ZH to EN, without ACT | ZH to EN, with ACT |
| --- | --- | --- | --- | --- |
| approx-training | 0.553 | 0.549 | 0.717 | 0.747 |
| bert-dataset | 0.548 | 0.612 | 0.771 | 0.831 |
| language-models-and-dataset | 0.502 | 0.518 | 0.683 | 0.736 |
| machine-translation-and-dataset | 0.519 | 0.546 | 0.706 | 0.788 |
| sentiment-analysis-and-dataset | 0.558 | 0.631 | 0.725 | 0.828 |
| Average | 0.536 | 0.5712 | 0.7204 | 0.786 |

Fine-tuning the parallel data to improve translation quality

To further improve the translation quality, we construct the parallel-data pairs in a more granular manner. Instead of extracting parallel paragraphs from source and reference documents and pairing them up, we further split each paragraph into multiple sentences and use sentence pairs as training examples.

English: Likewise, in a document summarization algorithm it is worthwhile knowing that “dog bites man” is much more frequent than “man bites dog”, or that “I want to eat grandma” is a rather disturbing statement, whereas “I want to eat, grandma” is much more benign

Chinese: 同样,在文档摘要生成算法中,“狗咬人”比“人咬狗”出现的频率要高得多,或者“我想吃奶奶”是一个相当匪夷所思的语句,而“我想吃,奶奶”则要正常得多

English: For decades, statistical approaches had been dominant in this field before the rise of end-to-end learning using neural networks

Chinese: 几十年来,在使用神经网络进行端到端学习的兴起之前,统计学方法在这一领域一直占据主导地位

English: In the following, we show how to load the preprocessed data into minibatches for training

Chinese: 下面,我们看一下如何将预处理后的数据加载到小批量中用于训练
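How the paragraphs are split matters less than keeping the two sides aligned. The helpers below are a simplified sketch under our assumptions, not the exact AMTS code: they split English on sentence-ending punctuation, split Chinese on 。！？, and zip the results into sentence pairs, dropping any unmatched tail.

import re

def split_sentences(text, lang):
    """Naively split a paragraph into sentences on end-of-sentence punctuation."""
    if lang == 'zh':
        parts = re.split(r'(?<=[。!?！？])', text)
    else:
        parts = re.split(r'(?<=[.!?])\s+', text)
    return [p.strip() for p in parts if p.strip()]

def to_sentence_pairs(en_paragraph, zh_paragraph):
    """Build sentence-level parallel-data pairs from one aligned paragraph pair."""
    en_sents = split_sentences(en_paragraph, 'en')
    zh_sents = split_sentences(zh_paragraph, 'zh')
    return list(zip(en_sents, zh_sents))  # zip drops the tail if the counts differ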

We tested both the paragraph-pair and sentence-pair methods and found that more-granular data (sentence pairs) yields better translation quality than less-granular data (paragraph pairs). The comparison is shown in the table below for English ↔ Chinese translation.

| Article | EN to ZH, ACT with paragraph pairs | EN to ZH, ACT with sentence pairs | ZH to EN, ACT with paragraph pairs | ZH to EN, ACT with sentence pairs |
| --- | --- | --- | --- | --- |
| approx-training | 0.549 | 0.589 | 0.747 | 0.77 |
| bert-dataset | 0.612 | 0.689 | 0.831 | 0.9 |
| language-models-and-dataset | 0.518 | 0.607 | 0.736 | 0.806 |
| machine-translation-and-dataset | 0.546 | 0.599 | 0.788 | 0.89 |
| sentiment-analysis-and-dataset | 0.631 | 0.712 | 0.828 | 0.862 |
| Average | 0.5712 | 0.6392 | 0.786 | 0.8456 |

Extending the use of parallel data to general machine translation

To extend the usability of parallel data to general machine translation, we need to construct parallel-data sets from a large volume of translated documents. To maximize translation accuracy, the parallel datasets should have the same contexts and subjects as the documents to be translated.

We tested this approach in the English ↔ Spanish sub-pipeline. The parallel data pairs were built from English ↔ Spanish articles crawled from the web using the keyword “machine learning”.

We applied this parallel data in translating an English article (abbreviated DLvsML in the results table) into Spanish and compared the results with those of traditional translation, without parallel data. The BLEU scores show that parallel data with the same subject (“machine learning”) does help to improve the performance of general machine translation.

| Article | EN to ES, without ACT | EN to ES, with ACT | ES to EN, without ACT | ES to EN, with ACT |
| --- | --- | --- | --- | --- |
| DLvsML | 0.792 | 0.824 | 0.809 | 0.827 |

The relative fluency of translations from English to Spanish, with and without ACT, can be seen in the table below.

| EN source text | ES reference text (human translation) | ES translation without ACT | ES translation with ACT |
| --- | --- | --- | --- |
| Moves through the learning process by resolving the problem on an end-to-end basis. | Pasa por el proceso de aprendizaje mediante la resolución del problema de un extremo a otro. | Avanza en el proceso de aprendizaje resolviendo el problema de un extremo a otro. | Avanza el proceso de aprendizaje resolviendo el problema de forma integral. |
| Deep learning use cases | Casos de uso del aprendizaje profundo | Casos de uso de aprendizaje profundo | Casos prácticos de aprendizaje profundo |
| Image caption generation | Generación de subtítulos para imágenes | Generación de leyendas de imágenes | Generación de subtítulos de imagen |

Conclusion and best practices

In this post, we introduced the Auto Machine Translation and Synchronization (AMTS) framework and pipelines and their application to English ↔ Chinese and English ↔ Spanish D2L.ai auto-translation. We also discussed best practices for using the Amazon Translate service in the translation pipeline, particularly the advantages of the Active Custom Translation (ACT) feature with parallel data.

  • Leveraging the Amazon Translate service, the AMTS pipeline provides fluent translations. Informal qualitative assessments suggest that the translated texts read naturally and are mostly grammatically correct.
  • In general, the ACT feature with parallel data improves translation quality in the AMTS sub-pipeline. We show that using the ACT feature leads to better performance than using the traditional Amazon Translate real-time translation service.
  • The more granular the parallel data pairs are, the better the translation performance. We recommend constructing the parallel data as pairs of sentences, rather than pairs of paragraphs.

We are continuing to refine the AMTS framework to raise translation quality for additional languages. Your feedback is always welcome.
