Graceful AI

How to make trained systems evolve gracefully.

As machine-learning-based decision systems improve rapidly, we are discovering that it is no longer enough for them to perform well on their own. They should also behave nicely toward their predecessors. When we replace an old trained classifier with a new one, we should expect a smooth transition and a peaceful transfer of decision powers.

[Photo: Stefano Soatto, vice president of applied science for AWS AI. Credit: Todd Cheney]

At Amazon Web Services (AWS), we are constantly working to improve the performance of our learning-based classification systems. Performance is typically measured by average error on test data that are representative of future use cases. We scientists get very excited when we can reduce the average error, and we hope that customers will be delighted when they replace the existing system with a new and improved one. 

However, it is possible for a new model to significantly improve average performance and yet introduce errors that the old model did not make. Those errors can be rare yet so detrimental as to nullify the benefit of the improved model. In some cases, post-processing pipelines built on top of a model can break. In other cases, users are so accustomed to the behavior of the old system that any introduced error contributes to a perceived “regression” in performance.

[Figure: When updating an old classifier (red) to a new one (dashed blue line), we correct mistakes (top right, white), but we also introduce new ones (negative flips, bottom left, red). While on average the errors decrease (from 57% to 42% in this toy example), regression can wreak havoc with downstream processing, nullifying the benefit of the update. From "Positive-congruent training: Towards regression-free model updates"]

You may have experienced this phenomenon when using the search feature in your photo collection. Occasionally, the provider updates the photo management software, presumably improving it. However, if an image that you were able to retrieve previously suddenly goes missing from the search, the natural reaction is surprise: How is this version any better? Give me the old one back!

When the software update occurs, the search feature is usually unavailable for a period of time; the larger your photo collection, the longer the interruption typically lasts. During this time, the system reprocesses old images to create indices and clusters them based on identities. If the model introduces new mistakes, old images may be left out of searches that used to retrieve them.

This prompts the question: why is it necessary to reprocess old data at all? Can we design and train new learning-based models to be compatible with previous ones, so that reprocessing the entire gallery becomes unnecessary?

These questions generally pertain to the need to train machine-learning-based systems, not in isolation, but in reference to other models. Specifically, we want the new models to be compatible with classifiers or clustering algorithms designed for the old models, and we want them to not introduce new mistakes. 

Compatible updates

Today, requirements beyond accuracy have begun to drive the machine learning process. These demands include explainability, transparency, fairness, and, now, compatibility and regression minimization. We call the ability to meet those demands “graceful AI”. 

We at AWS first faced this challenge when responding to a customer request to reduce the cost of re-indexing data, which can be significant for large photo collections. 

At the time, there was no literature on the topic. We trained a deep-learning model to minimize the average error while using the “classifier head” of an old model — the last few layers of the model, which issue the final classification decision. In other words, we forced the data representation computed by the new model to live in the same space as the old one, so the same clustering or decision rules could be used without the need to re-index old data. 
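
To make the idea concrete, the sketch below shows, in PyTorch, one way such a training step could be written. It is only an illustration of the principle, not the exact recipe from our paper; new_backbone, new_head, and old_head are placeholder names for the new feature extractor, its own classifier head, and the frozen classifier head of the deployed model.

import torch.nn.functional as F

def compatible_training_step(new_backbone, new_head, old_head, images, labels, optimizer):
    # One step of joint training: the new backbone must satisfy its own head
    # *and* the frozen classifier head of the old model, so that new embeddings
    # remain directly comparable with gallery features the old model has indexed.
    for p in old_head.parameters():           # the old head stays fixed
        p.requires_grad_(False)
    feats = new_backbone(images)              # embeddings from the new model
    loss_new = F.cross_entropy(new_head(feats), labels)      # standard objective
    loss_compat = F.cross_entropy(old_head(feats), labels)   # backward-compatibility term
    loss = loss_new + loss_compat
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Because the old head never changes, features produced by the new backbone land in the same space as those already stored in the gallery, so they can be compared directly without re-indexing.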

[Figure: Without a backward-compatible representation, updating the embedding model for a retrieval/search system means that all previously processed gallery features have to be recomputed by the new model (backfilling), as the new embedding cannot be directly compared with the old one. With a backward-compatible representation, direct comparison becomes possible, eliminating the need to backfill. From "Towards backward-compatible representation learning"]

If this approach worked, customers could start using new models immediately, with no re-indexing time or cost, and the old indexed data could be combined with the new. And it did work, as we described in the paper “Towards backward-compatible representation learning”, presented at last year's Conference on Computer Vision and Pattern Recognition (CVPR). It was the first paper in this increasingly important area of investigation in machine learning, around which we are organizing a tutorial at the upcoming International Conference on Computer Vision (ICCV).

For services that require more complex post-processing than clustering, it is paramount to minimize the number of new errors introduced by model updates. In a forthcoming oral presentation at CVPR, our team will present an approach that we call positive-congruent training, or PC training, which aims to train a new classifier without introducing errors relative to the old one. This is a first step toward regression-constrained training, and it is needed to avoid the rare but harmful mistakes that one never wants to make.

PC training is not just a matter of forcing the new model to mimic the old one, a process known as model distillation. Distillation imitates the old model, errors included; we want the new model to stay close to the old one only where the old one gets it right.
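
One simple way to encode this asymmetry, assuming the old model's outputs are available during training, is to add a distillation term that is active only on samples the old model already classifies correctly. The loss below is a hypothetical illustration of that idea, not the exact objective from our paper.

import torch.nn.functional as F

def pc_style_loss(new_logits, old_logits, labels, beta=1.0):
    # Usual cross-entropy plus a distillation term applied only where the old
    # model is already correct, so the new model imitates the old one only on
    # its successes, never on its mistakes.
    ce = F.cross_entropy(new_logits, labels)
    old_correct = (old_logits.argmax(dim=1) == labels).float()       # mask of old-model successes
    per_sample_kl = F.kl_div(F.log_softmax(new_logits, dim=1),
                             F.softmax(old_logits, dim=1),
                             reduction='none').sum(dim=1)            # per-sample KL to the old model
    distill = (old_correct * per_sample_kl).sum() / old_correct.sum().clamp(min=1)
    return ce + beta * distill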

Even when the average error is reduced to a minimum, it is still possible to reduce what we call the “negative flip rate” (NFR): the fraction of samples that the old model classified correctly but the new model gets wrong. This can be done by trading errors while keeping the average error rate constant (unless that rate is precisely zero, which is almost never the case in the real world). Minimizing the NFR is therefore a criterion separate from the standard error rate, and PC training represents a new branch of research in machine learning.
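
The NFR itself is straightforward to compute from the two models' predictions on a common test set; the sketch below assumes the predicted and ground-truth labels are one-dimensional PyTorch tensors.

def negative_flip_rate(old_preds, new_preds, labels):
    # Fraction of test samples that the old model classified correctly but the
    # new model gets wrong (a "negative flip").
    old_correct = (old_preds == labels)
    new_wrong = (new_preds != labels)
    return (old_correct & new_wrong).float().mean().item()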


Machine-learning-based systems will continue to evolve, and eventually we will do away with the artificial separation of training (when the model parameters are learned from a fixed training dataset) and inference (when new data is presented to elicit a decision or action). As we take steps toward such “lifelong learning”, it is important for new models developed in the meantime to play nicely with existing ones. 

We have sown the first seeds of work in this area, but much remains to be done. As models are repeatedly updated, a growing set of compatibility constraints will ultimately weigh negatively on overall performance, much as backward compatibility with all previous versions makes some software so unwieldy. 

We are pleased that some of our models at AWS AI Applications are already backward-compatible, which means that customers will be able to upgrade to new models without having to change their processing pipelines or re-index old data. In 2021, any transfer of decision power should occur without drama. 

Modified models

Another version of the incompatibility problem arises when one wishes to deploy the same system on different devices with diverse resource constraints. One might, for instance, have a large and powerful model running in the cloud and smaller versions of it running on edge devices such as smartphones.

We’ve found that, to ensure compatibility, it’s not enough for the smaller models to approximate the accuracy of the large model; they also need to approximate its architecture. Also at the upcoming CVPR, we will present a paper on “heterogeneous visual search”, which shows how to enforce this type of compatibility across platforms.

Finally, all of the above would be easier if deep neural networks were linear systems, and training consisted of minimizing a convex loss function. As we all know, this is not the case. The niche literature on linearizing deep neural networks has mostly focused on analyzing those networks’ behavior; their performance has been far below that of the full nonlinear, nonconvex originals. 

However, we have recently shown that, if linearization is done right, by modifying the loss function, the model, and the optimization, we can train linear models that perform just as well as their nonlinear counterparts. “LQF: Linear quadratic fine-tuning”, also to be presented at CVPR, describes modifying the architecture of a ResNet backbone by replacing ReLU with leaky ReLU, changing the loss function from cross-entropy to least squares, and preconditioning the optimization using Kronecker factorization.
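
As a rough sketch of the first two modifications (the Kronecker-factored preconditioning is omitted, and this is not the authors' released code), assuming a recent version of torchvision:

import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

def make_lqf_style_backbone(num_classes, negative_slope=0.01):
    # Swap every ReLU in a standard ResNet-50 for a leaky ReLU.
    model = models.resnet50(weights=None)
    for module in list(model.modules()):
        for child_name, child in module.named_children():
            if isinstance(child, nn.ReLU):
                setattr(module, child_name, nn.LeakyReLU(negative_slope, inplace=True))
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

def least_squares_loss(logits, labels, num_classes):
    # Replace cross-entropy with a quadratic (least-squares) loss on one-hot
    # targets, which turns fine-tuning of a linearized model into a
    # linear-quadratic problem.
    targets = F.one_hot(labels, num_classes).float()
    return ((logits - targets) ** 2).mean()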

We are excited to continue exploring how these and other developments can lead to more transparent, more interpretable, and more “graceful” AI systems.
