Learning computational tasks from single examples

New “meta-learning” approach improves on the state of the art in “one-shot” learning.

In the past decade, deep-learning systems have proven remarkably successful at many artificial-intelligence tasks, but their applications tend to be narrow. A computer vision system trained to recognize cats and dogs, for instance, would need significant retraining to start recognizing sharks and sea turtles.

Meta-learning is a paradigm intended to turn machine learning systems into generalists. A meta-learning model is trained on a range of related tasks, but it learns not only how to perform those tasks but also how to learn to perform them. The idea is that it could then be adapted to new tasks with only a handful of labeled training examples, drastically reducing the need for labor-intensive data annotation.

At the (virtual) International Conference on Learning Representations, we will present an approach that improves performance on meta-learning tasks without increasing the data annotation requirements. The key idea is to adapt the meta-learning procedure so that it can leverage small sets of unlabeled data, in addition to the traditional labeled examples.

Meta-learning
In meta-learning, a machine learning model learns how to learn. During meta-training, the model is trained on a group of related tasks — using data from “support sets” — and tested using data from “query sets”. But the query sets are labeled, so the model can assess how effectively it's learning. During meta-testing, the model is again trained on a group of support sets, but it's evaluated on its ability to classify unlabeled query data.

The intuition is that even without labels, these extra data still contain a lot of useful information. Suppose, for instance, that a meta-learning system trained on images of terrestrial animals (such as cats and dogs) is being adapted to recognize aquatic animals. Unlabeled images of aquatic animals (i.e., images that don’t indicate whether an animal is a shark or a sea turtle) still tell the model something about the learning task, such as the lighting conditions and background colors typical of underwater photos.

In experiments, we compared models trained through our approach to 16 different baselines on an object recognition meta-learning task. We found that our approach improved performance on one-shot learning, or learning a new object classification task from only a single labeled example, by 11% to 16%, depending on the architectures of the underlying neural networks.

Meta-learning

In conventional machine learning, a model is fed a body of labeled data and learns to correlate data features with the labels. Then it’s fed a separate body of test data and evaluated on how well it predicts the labels for that data. For evaluation purposes, the system designers have access to the test-data labels, but the model itself doesn’t.

Meta-learning adds another layer of complexity. During meta-training — the analogue of conventional training — the model learns to perform a range of related tasks. Each task has its own sets of training data and test data, and the model sees both. That is, part of its meta-training is learning how particular ways of responding to training data tend to affect its performance on test data.

During meta-testing, it is again trained on a range of tasks. These are related to but not identical to the tasks it saw during meta-training — recognizing aquatic animals, for instance, as opposed to terrestrial animals. Again, for each task, the model sees both training data and test data. But whereas, during meta-training, the test data were labeled, during meta-testing, the labels are unknown and must be predicted.

The terminology can get a bit confusing, so meta-learning researchers typically refer to the meta-learning “training” sets as support sets and the meta-learning “test” sets as query sets. During meta-training, the learning algorithm has access to the labels for both the support sets and the query sets, and it uses them to produce a global model. During meta-testing, it has access only to the labels for the support sets, which it uses to adapt the global model to each of the new tasks.
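
To make the terminology concrete, here is a minimal sketch, in Python with NumPy, of how an "episode" (one task's support and query sets) might be sampled for an n-way, k-shot problem. The class and function names, field shapes, and defaults are our own illustrative choices, not code from our system.

```python
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class Episode:
    """One meta-learning task: a labeled support set plus a query set.

    During meta-training, query_y is available to the learner;
    during meta-testing it is withheld and must be predicted.
    """
    support_x: np.ndarray          # shape (n_way * k_shot, feature_dim)
    support_y: np.ndarray          # labels for the support set
    query_x: np.ndarray            # query inputs (unlabeled at meta-test time)
    query_y: Optional[np.ndarray]  # query labels; None at meta-test time

def sample_episode(data_by_class, n_way=5, k_shot=1, q_queries=15, rng=None):
    """Sample an n-way, k-shot episode from a dict {class_id: array of examples}."""
    rng = rng or np.random.default_rng()
    classes = rng.choice(list(data_by_class), size=n_way, replace=False)
    sup_x, sup_y, qry_x, qry_y = [], [], [], []
    for label, cls in enumerate(classes):
        # Each class must contain at least k_shot + q_queries examples.
        idx = rng.permutation(len(data_by_class[cls]))[: k_shot + q_queries]
        examples = data_by_class[cls][idx]
        sup_x.append(examples[:k_shot]);  sup_y += [label] * k_shot
        qry_x.append(examples[k_shot:]);  qry_y += [label] * q_queries
    return Episode(np.concatenate(sup_x), np.array(sup_y),
                   np.concatenate(qry_x), np.array(qry_y))
```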

Our approach has two key innovations. First, during meta-training, we do not learn a single global model. Instead, we train an auxiliary neural network to produce a local model for each task, based on the corresponding support set. Second and more important, during meta-training we also train a second auxiliary network to leverage the unlabeled data of the query sets. Then, during meta-testing, we can use the query sets to fine-tune the local models, improving performance.
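
As a rough illustration of the two auxiliary networks, the PyTorch-style sketch below defines an initialization network that maps a support set to task-specific parameters and a second network that predicts a gradient from unlabeled query features. The module names, architectures, and dimensions are hypothetical placeholders for exposition, not the exact design used in the paper; both networks would themselves be learned during meta-training.

```python
import torch
import torch.nn as nn

class InitNetwork(nn.Module):
    """First auxiliary network: maps support-set features to an initial
    task-specific parameter vector theta_0 (hypothetical shapes)."""
    def __init__(self, feat_dim, param_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                 nn.Linear(256, param_dim))
    def forward(self, support_features):
        # Pool over the support set so the output does not depend on set size.
        return self.net(support_features.mean(dim=0))

class SyntheticGradientNetwork(nn.Module):
    """Second auxiliary network: predicts a gradient for the local model's
    parameters from *unlabeled* query features."""
    def __init__(self, feat_dim, param_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                 nn.Linear(256, param_dim))
    def forward(self, query_features):
        return self.net(query_features.mean(dim=0))
```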

Leveraging unlabeled data

A machine learning model is governed by a set of parameters, and meta-training optimizes those parameters for a particular family of tasks — such as recognizing animals. During meta-testing or operational deployment, the model uses a handful of labeled examples to adapt those parameters to a new task.

A particular set of parameter values defines a point in a multidimensional space, and adaptation to a new task can be thought of as searching the space for the point representing the optimal new settings.

Meta-learning parameter space
In traditional meta-learning (left), the result of training is a model (φ) that can be adapted to a new set of related tasks (1 – 4). Adaptation involves searching for the optimal settings (θ1 – θ4) of the model parameters, based on a small set of labeled data (dl1 – dl4). Our system (right), by contrast, uses the labeled data and the available unlabeled data (x1 – x4) to better approximate those settings.

A traditional meta-learning system might begin its search at the point defined by the global model (φ in the figure above); this is the initialization step. Then, using the labeled data of the support set, it would work its way toward the settings that correspond to the new task; this is the adaptation step.
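
The adaptation step of such a conventional meta-learner might look like the following PyTorch sketch, which starts from the global parameters φ and takes a few gradient steps on the labeled support set (a MAML-style inner loop). The function signature, the callables model_fn and loss_fn, and the hyperparameters are assumptions made for illustration.

```python
import torch

def adapt_with_support(phi, support_x, support_y, model_fn, loss_fn,
                       lr=0.01, steps=5):
    """Conventional adaptation sketch: start from the global parameters phi
    and take a few gradient steps on the labeled support set.
    model_fn(x, params) and loss_fn(preds, labels) are user-supplied callables."""
    # Copy the global parameters so the initialization stays untouched.
    theta = [p.detach().clone().requires_grad_(True) for p in phi]
    for _ in range(steps):
        loss = loss_fn(model_fn(support_x, theta), support_y)
        grads = torch.autograd.grad(loss, theta)
        # Move each parameter a small step down its gradient.
        theta = [(p - lr * g).detach().requires_grad_(True)
                 for p, g in zip(theta, grads)]
    return theta
```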

With our approach, by contrast, the initialization network selects a starting search location on the basis of the data in the support set (θ01(dl1) – θ04(dl4) in the figure above). Then it works its way toward the optimal settings using the unlabeled data of the query set (x1 – x4, above). More precisely, the second auxiliary neural network estimates the gradient implied by the query set data.

In the same way that the parameter settings of a machine learning model can be interpreted as a point in a representational space, so can the combination of parameter settings and the resulting error rate on a particular data set. The multidimensional graph that results is like a topographic map, with depressions that represent low error rates and peaks that represent high error rates. In this context, machine learning is a matter of identifying the slope of a depression — a gradient — and moving down it, toward a region of low error.
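
As a toy illustration of descending such a surface, the snippet below runs plain gradient descent on a two-parameter quadratic "error map" whose minimum is known in advance. It is not part of our method; it only shows the mechanics of following a gradient downhill.

```python
import numpy as np

# Toy error surface: a quadratic bowl whose minimum sits at (3, -2).
def error(params):
    return (params[0] - 3.0) ** 2 + (params[1] + 2.0) ** 2

def gradient(params):
    return np.array([2.0 * (params[0] - 3.0), 2.0 * (params[1] + 2.0)])

params = np.zeros(2)                   # start somewhere on the surface
for step in range(100):
    params -= 0.1 * gradient(params)   # move downhill along the slope

print(params, error(params))           # approaches (3, -2), near-zero error
```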

This is how many machine learning systems learn, but typically, they have the advantage of knowing, from training data labels, what the true error rate is for a given set of system parameters. In our case, because we’re relying on the unlabeled query set data, we can only guess at the true gradients.

That’s where the second auxiliary neural network comes in: it infers gradients from query set data. The system as a whole then uses the inferred gradients to fine-tune the initial parameter settings supplied by the first neural network.
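
Putting the pieces together at meta-test time, a simplified fine-tuning loop might look like the sketch below: the local model's parameters (flattened into a single vector for simplicity) are nudged along gradients predicted by the second auxiliary network from unlabeled query features. The helper names feature_fn and grad_net are hypothetical stand-ins, consistent with the earlier sketch, rather than our actual interfaces.

```python
import torch

def finetune_with_queries(theta_init, query_x, feature_fn, grad_net,
                          lr=0.01, steps=5):
    """Meta-test fine-tuning sketch: refine a flattened task-specific parameter
    vector using gradients *predicted* from unlabeled query data."""
    theta = theta_init.clone()
    with torch.no_grad():
        for _ in range(steps):
            query_features = feature_fn(query_x)   # embed the unlabeled queries
            g_hat = grad_net(query_features)       # inferred ("synthetic") gradient
            theta = theta - lr * g_hat             # descend along the inferred slope
    return theta
```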

The approach can be explained and justified through connections to two topics in theoretical machine learning, namely, empirical Bayes and the information bottleneck. These theoretical developments are beyond the scope of this blog post, but the interested reader can consult the full paper.

The associated software code has also been open-sourced as part of the Xfer repository.

Although our system beat all 16 baselines on the task of one-shot learning, there were several baseline systems that outperformed it on five-shot learning, or learning with five examples per new task. The approaches used by those baselines are complementary to our approach, and we believe that combining approaches could yield even lower error rates. Going forward, that’s one of several extensions of this work that we will be pursuing.
