Making machine translation more robust, consistent, and stable

Training on pseudo-labeled data limits the consequences of slight input variations and prevents updated models from backsliding on particular tasks.

Like many other machine learning applications, neural machine translation (NMT) benefits from overparameterized deep neural models — models so large that they would seem to risk overfitting but whose performance nonetheless keeps improving as the parameter count grows.

Recently, larger models have delivered impressive improvements in translation quality, but like the models used in other applications, NMT models are brittle: predictions are sensitive to small input changes, and predictions can shift significantly when models are retrained. Users can be negatively affected, especially if they have come to rely on particular outputs for downstream tasks.

Especially jarring are cases where the model suddenly produces worse outputs on identical input segments. While these effects have been studied earlier in classification tasks, where an input is sorted into one of many existing categories, they haven’t been as well explored for generation tasks, where the output is a novel data item or sequence.

Larger neural networks have achieved impressive results in machine translation, but these models are brittle. For example, (a) misspellings can change the output of a machine translation model, and (b) a change in the random seed during training, which can occur for reasons unrelated to the model itself (e.g., changing hardware), can lead to different results.

In a paper we recently presented at the International Conference on Learning Representations (ICLR), we investigated the issue of model robustness, consistency, and stability to updates — a set of properties we call model inertia. We found that the technique of using pseudo-labeled data in model training — i.e., pseudo-label training (PLT) — has the underreported side effect of improving model inertia.


In particular, we looked at bidirectional arcs between low- and high-resourced languages (en ↔ de, en ↔ ru, and en ↔ ja), and PLT improved model inertia across all of them. We also introduced a means of measuring regression — in which an updated model backslides on particular tasks — in generation models and showed that it, too, is reduced by PLT. Having observed these effects, we hypothesize that a distribution simplification effect is at play and may hold more generally for other generation tasks.

Experiments

In our experiments, we examined several different flavors of PLT common in machine translation. In certain applications (e.g., non-autoregressive machine translation), unlabeled data or monolingual data is made into parallel data by translating (pseudo-labeling) the monolingual data. This is typically known as self-training or forward translation. In other contexts (e.g., knowledge distillation), it is common to use a larger model (a teacher model) to pseudo-label the training data and train a smaller model (a student model) on the combination of pseudo-labeled and parallel training data.
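To make the two setups concrete, here is a minimal sketch of how pseudo-labeled corpora are typically assembled in these two flavors of PLT. The helper names (`teacher_translate`, `build_self_training_corpus`, `build_distillation_corpus`) are hypothetical stand-ins for whatever NMT toolkit and teacher model are in use, not our actual training code.

```python
# Sketch of the two pseudo-labeling setups described above (hypothetical helpers).

def build_self_training_corpus(parallel_data, monolingual_sources, teacher_translate):
    """Self-training / forward translation: pseudo-label monolingual source text
    and add the resulting pairs to the human-labeled parallel data."""
    pseudo_labeled = [(src, teacher_translate(src)) for src in monolingual_sources]
    return parallel_data + pseudo_labeled

def build_distillation_corpus(parallel_data, teacher_translate):
    """Knowledge distillation: a larger teacher relabels the parallel sources,
    and the student trains on the original plus pseudo-labeled pairs."""
    distilled = [(src, teacher_translate(src)) for src, _ in parallel_data]
    return parallel_data + distilled
```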

In this work, we studied how pseudo-label training (PLT) influences model inertia — i.e., model consistency, robustness, and stability to updates. We studied (a) how outputs change when inputs are changed; (b) how outputs are changed when random seeds used in training are changed; and (c) the number of negative flips, or regressions in quality, that occur after updates.

First, we tested the impact that adding pseudo-labeled data has on model robustness to minor variations in the inputs. We looked at synthetically generated misspellings, where one character is randomly replaced by another, and at naturally occurring grammatical errors. We then compared the outputs of the machine translation models with and without these variations and measured how consistent (i.e., similar) the outputs are and how robust the models are (i.e., how much quality degrades). We found that training on pseudo-labeled data makes models more consistent and that this was not a function of the amount of training data or the size of the teacher model.
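As an illustration of the perturbation test, the sketch below generates a single-character misspelling and scores consistency as the average similarity between translations of the clean and perturbed inputs. The `translate` and `similarity` callables (e.g., a sentence-level lexical metric such as chrF or BLEU) are assumptions for illustration, not the exact functions used in our experiments.

```python
import random

def misspell(sentence, rng=random.Random(0)):
    """Synthetic noise: replace one randomly chosen character with a random letter."""
    if len(sentence) < 2:
        return sentence
    i = rng.randrange(len(sentence))
    return sentence[:i] + rng.choice("abcdefghijklmnopqrstuvwxyz") + sentence[i + 1:]

def consistency(translate, similarity, sources):
    """Average similarity between translations of clean and perturbed inputs."""
    scores = [similarity(translate(s), translate(misspell(s))) for s in sources]
    return sum(scores) / len(scores)
```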

We studied how PLT produces models that are more stable under incremental updates, which we model as changes in a random seed. We find that even with such minor updates, less than 10% of outputs remain the same. The inclusion of pseudo-labeled data almost doubles the number of outputs that are exact matches (EM) and increases stability (St.), defined as the lexical similarity between outputs, by roughly 20%.

We also considered the scenario in which models are incrementally updated (i.e., no changes to the model architecture, no major changes to the data, etc.) and tested whether models were more stable when we changed random seeds in student or teacher models. We looked at the number of segments that were exact matches (EM) of each other and the stability (St.) of the models, which we defined as the lexical similarity of the outputs under changes in random seed. Surprisingly, we found that upwards of 90% of outputs changed just from a change in random seed. With pseudo-labeled data, the models were roughly 20% more stable, and close to double the number of segments were exact matches.
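The two quantities we report can be computed in a few lines. The sketch below treats stability as corpus-level BLEU between the outputs of two training runs; this is one reasonable choice of lexical similarity, not necessarily the exact metric configuration from the paper.

```python
import sacrebleu  # pip install sacrebleu

def exact_match_rate(outputs_a, outputs_b):
    """Fraction of segments translated identically by two runs (EM)."""
    return sum(a == b for a, b in zip(outputs_a, outputs_b)) / len(outputs_a)

def stability(outputs_a, outputs_b):
    """Lexical similarity between two runs' outputs (St.), here computed as
    corpus BLEU of run A scored against run B."""
    return sacrebleu.corpus_bleu(outputs_a, [outputs_b]).score
```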


Given the large number of output changes, we naturally asked whether the model makes worse translations for specific inputs, i.e., negative flips. Previously, negative flips have been studied in the context of classification, but in machine translation, the concept is more nebulous, since metrics can be noisy at the level of individual sentence segments. Consequently, we used human evaluations of our models to determine whether they had regressed.

Given the limitations of human evaluations, we also looked at a targeted error category that allowed us to measure segment-level regression automatically. In this work, we adopted gender translation accuracy as the targeted error and tested on the WinoMT dataset. We found that PLT methods reduce the number of negative flips both on the targeted metric and on a generic quality metric.
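Concretely, a negative flip on the targeted task can be counted as a segment the old model got right (e.g., it chose the correct gender on WinoMT) but the updated model gets wrong. The sketch below assumes per-segment correctness flags have already been computed; extracting those flags depends on the WinoMT evaluation tooling and is not shown.

```python
def negative_flip_rate(old_correct, new_correct):
    """Fraction of segments where the old model was correct but the new one is not."""
    flips = sum(old and not new for old, new in zip(old_correct, new_correct))
    return flips / len(old_correct)

# Toy example: one regression out of four segments -> 0.25
print(negative_flip_rate([True, True, False, True], [True, False, False, True]))
```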

A hypothesis

Having observed an improvement in the model inertia of models trained on pseudo-labeled data, we set out to investigate the reasons behind it. We hypothesized that the improvement comes from a distribution-simplification effect similar to one seen in non-autoregressive MT. To test this idea, we conducted experiments comparing pseudo-label training with several other techniques well known in MT for producing more robust models: BPE-dropout, back-translation, and n-best sampling.

We then looked at how each of these methods reduced the complexity of the training data, using a metric called conditional entropy. Across the methods we experimented with, we found that model stability is correlated with simpler training distributions as measured by conditional entropy.
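As a rough illustration, conditional entropy can be estimated from word-aligned training pairs: the more deterministically each source word maps to a target word, the lower the entropy and the simpler the data. The estimator below is a simplified stand-in for this kind of measure; the exact definition of C(d) used in the paper may differ in its details.

```python
import math
from collections import Counter

def conditional_entropy(aligned_pairs):
    """Estimate H(target | source) from (source_word, target_word) alignment pairs.
    Lower values indicate a simpler, more deterministic training distribution."""
    joint = Counter(aligned_pairs)
    source_totals = Counter(src for src, _ in aligned_pairs)
    total = sum(joint.values())
    return -sum(
        (count / total) * math.log2(count / source_totals[src])
        for (src, _), count in joint.items()
    )
```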

Across the methods we experimented with, we found that model stability is correlated with simpler training distributions as measured by conditional entropy, C(d).

As we enter an era where ever larger neural networks come into wider use to solve a variety of generation tasks, with the potential to shape the user experience in unimaginable ways, controlling these models to produce more robust, consistent, and stable outputs becomes paramount. We hope that by sharing our results, we can help make progress toward a world where AI evolves gracefully over time.
