Graceful AI

How to make trained systems evolve gracefully.

As machine-learning-based decision systems improve rapidly, we are discovering that it is no longer enough for them to perform well on their own. They should also behave nicely toward their predecessors. When we replace an old trained classifier with a new one, we should expect a smooth transition and a peaceful transfer of decision powers.

Stefano Soatto, vice president of applied science for AWS AI.
Credit: Todd Cheney

At Amazon Web Services (AWS), we are constantly working to improve the performance of our learning-based classification systems. Performance is typically measured by average error on test data that are representative of future use cases. We scientists get very excited when we can reduce the average error, and we hope that customers will be delighted when they replace the existing system with a new and improved one. 

However, it is possible for a new model to significantly improve average performance and yet introduce errors that the old model did not make. Those errors can be rare yet so detrimental as to nullify the benefit of the improved model. In some cases, post-processing pipelines built on top of a model can break. In other cases, users are so accustomed to the behavior of the old system that any introduced error contributes to a perceived “regression” in performance.

When updating an old classifier (red) to a new one (dashed blue line), we correct mistakes (top right, white), but we also introduce new ones (negative flips, bottom left, red). While the average error decreases (from 57% to 42% in this toy example), regression can wreak havoc with downstream processing, nullifying the benefit of the update.
From "Positive-congruent training: Towards regression-free model updates"

You may have experienced this phenomenon when using the search feature in your photo collection. Occasionally, the provider updates the photo management software, presumably improving it. However, if an image that you were able to retrieve previously suddenly goes missing from the search, the natural reaction is surprise: How is this version any better? Give me the old one back!

When the software update occurs, the search feature is usually unavailable for a period of time; the larger your photo collection, the longer the interruption typically lasts. During this time, the system reprocesses old images to create indices and clusters them based on identities. If the model introduces new mistakes, old images may be left out of searches that used to retrieve them.

This prompts the question: Why is it necessary to reprocess old data at all? Can we design and train new learning-based models to be compatible with previous ones, so that reprocessing the entire gallery becomes unnecessary?

These questions generally pertain to the need to train machine-learning-based systems, not in isolation, but in reference to other models. Specifically, we want the new models to be compatible with classifiers or clustering algorithms designed for the old models, and we want them to not introduce new mistakes. 

Compatible updates

Today, requirements beyond accuracy have begun to drive the machine learning process. These demands include explainability, transparency, fairness, and, now, compatibility and regression minimization. We call the ability to meet those demands “graceful AI”. 

We at AWS first faced this challenge when responding to a customer request to reduce the cost of re-indexing data, which can be significant for large photo collections. 

At the time, there was no literature on the topic. We trained a deep-learning model to minimize the average error while using the “classifier head” of an old model — the last few layers of the model, which issue the final classification decision. In other words, we forced the data representation computed by the new model to live in the same space as the old one, so the same clustering or decision rules could be used without the need to re-index old data. 

Without backward-compatible representation, updating the embedding model for a retrieval/search system means that all previously processed gallery features have to be recomputed by the new model (backfilling), as the new embedding cannot be directly compared with the old one. With a backward-compatible representation, direct comparison becomes possible, eliminating the need to backfill.
From "Towards backward-compatible representation learning"
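The payoff of a backward-compatible representation can be sketched in a few lines. In this toy example (the embeddings, image IDs, and similarity choice are illustrative, not the paper's implementation), a gallery indexed once with the old model is searched directly with a query embedded by the new model, with no backfilling step:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Gallery indexed once with the OLD embedding model (never recomputed).
old_gallery = {
    "img_001": [0.9, 0.1, 0.0],
    "img_002": [0.0, 1.0, 0.2],
}

# A query embedded by the NEW model. Backward-compatible training
# constrains new features to live in the old feature space, so they
# can be scored against the old index directly, with no backfilling.
new_query = [0.85, 0.15, 0.05]

best = max(old_gallery, key=lambda k: cosine(new_query, old_gallery[k]))
```

Without the compatibility constraint, the `cosine` comparison across the two feature spaces would be meaningless, and every gallery vector would have to be recomputed before the first search.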

If this approach worked, customers could start using new models immediately, with no re-indexing time or cost, and the old indexed data could be combined with the new. And it did work, as we described in the paper “Towards backward-compatible representation learning”, presented at last year's Conference on Computer Vision and Pattern Recognition (CVPR). It was the first paper in this increasingly important area of investigation in machine learning, around which we are organizing a tutorial at the upcoming International Conference on Computer Vision (ICCV).

For services that require more complex post-processing than clustering, it is paramount to minimize the number of new errors introduced by model updates. In a forthcoming oral presentation at CVPR, our team will present an approach that we call positive-congruent training, or PC training, which aims to train a new classifier without introducing errors relative to the old one. This is a first step towards regression-constrained training, which is necessary to avoid rare but harmful mistakes that you never want to make.

PC training is not just a matter of forcing the new model to mimic the old one — a process known as model distillation. Model distillation mimics the old model, including its errors; we want the new model to stay close to the old one only when the old one gets it right.
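The distinction can be made concrete with a toy masking rule (a hypothetical helper, not the paper's actual loss): the imitation signal is applied only on samples the old model classified correctly, so the new model never learns to copy a mistake.

```python
def pc_distill_mask(old_preds, labels):
    """1.0 where the old model was right (imitate it), 0.0 where it erred."""
    return [1.0 if p == y else 0.0 for p, y in zip(old_preds, labels)]

labels    = [0, 1, 1, 2]
old_preds = [0, 1, 2, 2]   # the old model erred on the third sample

mask = pc_distill_mask(old_preds, labels)
# mask == [1.0, 1.0, 0.0, 1.0]: the new model is free to disagree with
# the old model on sample 3, where imitation would copy a mistake.
```

Plain distillation corresponds to a mask of all ones; the zero entries are what distinguish the positive-congruent idea.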

Even when the average error is at its minimum, it is still possible to reduce what we call the “negative flip rate” (NFR): the fraction of test samples that the old model classified correctly but the new model gets wrong. This can be done by trading errors while keeping the average error rate constant (unless the average error rate is precisely zero, which is almost never the case in the real world). Minimizing the NFR is therefore a criterion separate from the standard error rate, and PC training represents a new branch of research in machine learning.

It is possible for a new model to significantly improve average performance and yet introduce errors that the old model did not make. Those errors can be rare yet so detrimental as to nullify the benefit of the improved model.
Stefano Soatto

Machine-learning-based systems will continue to evolve, and eventually we will do away with the artificial separation of training (when the model parameters are learned from a fixed training dataset) and inference (when new data is presented to elicit a decision or action). As we take steps toward such “lifelong learning”, it is important for new models developed in the meantime to play nicely with existing ones. 

We have sown the first seeds of work in this area, but much remains to be done. As models are repeatedly updated, a growing set of compatibility constraints will ultimately weigh on overall performance, much as backward compatibility with all previous versions makes some software unwieldy. 

We are pleased that some of our models at AWS AI Applications are already backward-compatible, which means that customers will be able to upgrade to new models without having to change their processing pipelines or re-index old data. In 2021, any transfer of decision power should occur without drama. 

Modified models

Another version of the incompatibility problem arises when one wishes to deploy the same system on different devices with diverse resource constraints. One might, for instance, have a large and powerful model running in the cloud and smaller versions of it running on edge devices such as smartphones.

We’ve found that, to ensure compatibility, it’s not enough for the smaller models to approximate the accuracy of the large model; they also need to approximate its architecture. Also at the upcoming CVPR, we will present a paper on “heterogeneous visual search”, which shows how to enforce this type of compatibility across platforms.

Finally, all of the above would be easier if deep neural networks were linear systems, and training consisted of minimizing a convex loss function. As we all know, this is not the case. The niche literature on linearizing deep neural networks has mostly focused on analyzing those networks’ behavior; their performance has been far below that of the full nonlinear, nonconvex originals. 

However, we have recently shown that, if linearization is done right, by modifying the loss function, the model, and the optimization, we can train linear models that perform just as well as their nonlinear counterparts. “LQF: Linear quadratic fine-tuning”, also to be presented at CVPR, describes modifying the architecture of a ResNet backbone by replacing ReLU with leaky ReLU, modifying the loss function from cross-entropy to least-squares, and modifying the optimization by preconditioning using Kronecker factorization.
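Two of those three ingredients can be sketched with toy scalars (this is an illustration of the general idea, not the paper's ResNet implementation): leaky ReLU keeps the activation close to linear rather than hard-clipping negative inputs, and a least-squares loss on one-hot targets makes the training objective quadratic.

```python
def leaky_relu(x, alpha=0.01):
    """Nearly linear activation: small slope alpha instead of zero for x < 0."""
    return x if x > 0.0 else alpha * x

def least_squares_loss(logits, one_hot):
    """Quadratic loss on one-hot targets, replacing cross-entropy."""
    return sum((z - t) ** 2 for z, t in zip(logits, one_hot))

# Negative inputs are scaled, not zeroed, so the map stays close to linear.
y = leaky_relu(-2.0)        # -0.02 rather than 0.0

# Quadratic objective for a 2-class example with target class 0.
loss = least_squares_loss([0.9, 0.2], [1.0, 0.0])
```

With a (nearly) linear model and a quadratic loss, the fine-tuning problem approaches a convex least-squares problem, which is what makes the preconditioned optimization in the paper effective.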

We are excited to continue exploring how these and other developments can lead to more transparent, more interpretable, and more “gracious” AI systems.
