Michele Donini, left, a senior applied scientist with Amazon Web Services, and Luca Oneto, associate professor of computer engineering at University of Genoa, review different approaches that can make data-driven predictions fairer for underrepresented groups.
Credit: Glynis Condon

Working toward fairer machine learning

Exploring and analyzing techniques, grounded in empirical risk minimization theory, that make ML algorithms capable of learning fairer models.

Editor’s note: Michele Donini is a senior applied scientist with Amazon Web Services (AWS). He and his co-author, Luca Oneto, associate professor of computer engineering at University of Genoa, have written about how different approaches can make data-driven predictions fairer for underrepresented groups. Oneto also won a 2019 Machine Learning Research award for his work on algorithmic fairness. In this article, Donini and Oneto explore the research they and other collaborators have published related to designing machine learning (ML) models from a human-centered perspective, and building responsible AI.

What is fairness?

Fairness can be defined in many different ways, and many different formal notions exist, such as demographic parity, equal opportunity, and equalized odds.

Graphic contrasting an unfair outcome (left) with a fair outcome (right). Algorithmic fairness is a topic of great importance, with impact on many applications. The issue requires much further research; even the definition of what “being fair” means for an ML model is still an open research question.

Nevertheless, the basic and common idea behind notions of fairness is that the learned ML model should behave equivalently, or at least similarly, no matter whether it is applied to one subgroup of the population (e.g., males) or to another one (e.g., females).

For example, demographic parity, which arguably is the most common notion of fairness, implies that the probability of a certain output of an ML model (e.g., deciding to make a loan) should not depend on the value of specific demographic attributes (e.g., gender, race, or age).
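
To make this notion concrete, here is a minimal sketch (illustrative only; the function and variable names are our own) that measures the demographic-parity gap of a binary classifier's decisions:

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute difference in positive-decision rates between two subgroups.

    y_pred   : array of 0/1 model decisions (e.g., 1 = loan approved)
    sensitive: array of 0/1 subgroup membership (the demographic attribute)
    """
    rate_a = y_pred[sensitive == 0].mean()  # P(decision = 1 | subgroup A)
    rate_b = y_pred[sensitive == 1].mean()  # P(decision = 1 | subgroup B)
    return abs(rate_a - rate_b)

# Toy example: a model that approves subgroup B far more often than subgroup A
# violates demographic parity.
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 1])
sensitive = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, sensitive))  # 0.75 -> a large gap
```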

Moving toward fairer models

Broadly speaking, we can group current literature on algorithmic fairness into three main approaches:

  • The first approach consists of pre-processing the data to remove historical biases and then feeding this data to classical ML models.
  • The second approach consists of post-processing an already learned ML model. This approach is useful when very complex ML models need to be made fairer without touching their inner structure, or when re-training them is unfeasible due to computational cost or time requirements (a minimal sketch of this approach follows below).
  • The third approach, called in-processing, consists of enforcing fairness notions by imposing specific statistical constraints during the learning phase of the model. This is the most natural approach, but so far, it has required ad hoc solutions tailored to specific tasks and data sets.
Broadly speaking, current literature on algorithmic fairness falls into three main approaches: pre-processing data; post-processing an already learned ML model; and in-processing, which consists of enforcing fairness notions by imposing specific statistical constraints during the learning phase of the model.
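
As a concrete illustration of the second, post-processing family (a hedged sketch, not taken from any of the papers discussed below): an already trained scoring model can be pushed toward demographic parity by choosing a separate decision threshold for each subgroup so that the positive-decision rates match.

```python
import numpy as np

def group_thresholds(scores, sensitive, target_rate=0.3):
    """Choose a per-subgroup threshold so that each subgroup receives
    (approximately) the same rate of positive decisions, without retraining
    the underlying model."""
    thresholds = {}
    for g in np.unique(sensitive):
        # The (1 - target_rate) quantile of this subgroup's scores approves
        # roughly target_rate of its members.
        thresholds[g] = np.quantile(scores[sensitive == g], 1 - target_rate)
    return thresholds

def fair_decisions(scores, sensitive, thresholds):
    return np.array([int(s > thresholds[g]) for s, g in zip(scores, sensitive)])
```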

We decided to explore and analyze possible techniques to make ML algorithms capable of learning fairer models.

We started from the base concepts of statistical learning theory — a mathematical framework for describing machine learning — and, in particular, from empirical risk minimization theory. The core concept of empirical risk minimization is that a model’s performance on the finite sample of data available for training and evaluation only approximates its performance on real-world data drawn from the underlying probability distribution.

Empirical-risk-minimization theory provides a way to estimate the “true risk” of a model from its “empirical risk”, which can be computed from the available data. We extended this concept to the true and empirical fairness risk of ML models.
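
As a rough illustration of these two quantities (our own simplified notation, not the estimators used in the papers), the empirical risk is an average loss over the available sample, and one possible empirical fairness risk is the gap between group-conditional error rates:

```python
import numpy as np

def empirical_risk(y_true, y_pred):
    # Average 0/1 loss on the available sample: an estimate of the true risk.
    return np.mean(y_true != y_pred)

def empirical_fairness_risk(y_true, y_pred, sensitive):
    # One possible empirical fairness risk: the gap between the error rates
    # measured separately on each subgroup.
    risk_a = np.mean(y_true[sensitive == 0] != y_pred[sensitive == 0])
    risk_b = np.mean(y_true[sensitive == 1] != y_pred[sensitive == 1])
    return abs(risk_a - risk_b)
```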

Below is a summary of three papers we’ve published related to these topics.

Empirical risk minimization under fairness constraints

This paper presents a new in-processing method, meaning that we incorporate a fairness constraint into the learning problem. We derive theoretical guarantees on both the accuracy and fairness of the resulting models, and we show how to apply our method to a large family of machine learning algorithms, including linear models and support vector machines for classification (a widely used supervised-learning method).

We observe that, in practice, we can meet our fairness constraint simply by requiring that a scalar product between two vectors remains small (an orthogonality constraint between the vector of the weights describing our model and the vector describing the discrimination between the different subgroups). We further observe that, for linear models, this requirement translates into a simple pre-processing method. Experiments indicate that our approach is empirically effective and performs favorably against state-of-the-art approaches.
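
A hedged sketch of that observation for linear models (a simplified variant of the idea, not the exact construction in the paper): take u to be the difference between the subgroups' mean feature vectors and project the features onto the subspace orthogonal to u; a linear model trained and applied on the projected data then cannot exploit that direction.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fair_preprocess(X, u):
    """Remove from each feature vector its component along u, the direction
    describing the discrimination between subgroups. Predictions of a linear
    model on the projected data cannot depend on that direction."""
    return X - np.outer(X @ u, u)

# Hypothetical usage with synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
sensitive = rng.integers(0, 2, size=200)
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

u = X[sensitive == 0].mean(axis=0) - X[sensitive == 1].mean(axis=0)
u /= np.linalg.norm(u) + 1e-12

clf = LogisticRegression().fit(fair_preprocess(X, u), y)
# At inference time, apply the same projection (same u) before predicting.
```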

Fair regression with Wasserstein barycenters

In this paper, we consider the case in which the ML model learns a regression function (as opposed to a classification task). We propose a post-processing method for transforming a real-valued regression function — the ML model — into one that satisfies the demographic-parity constraint (i.e., for regression, the distribution of predicted values should be virtually the same for different subgroups). In particular, the new regression function is as good an approximation of the original as is possible while still satisfying the constraint, making it an optimal fair predictor.

In “Fair regression with Wasserstein barycenters”, we consider the case in which the ML model learns a regression function and propose a post-processing method for transforming a real-valued regression function — the ML model — into one that satisfies the demographic-parity constraint.

We assume that the sensitive attribute — the demographic attribute that should not bias outcome — is available to the ML model at inference time and not only during training. We establish a connection between learning a fair model for regression and optimal transport theory, which describes how to measure distances among probability distributions. On that basis, we derive a closed-form expression for the optimal fair predictor.

Specifically, under the unfair regression function, different subgroups have different probability distributions of predicted values; the function’s outputs depend on the sensitive attribute. The difference between subgroups’ distributions can be measured using the Wasserstein distance. We show that the distribution of the optimal fair predictor is a mean of the different subgroups’ distributions, computed with respect to the Wasserstein distance. This mean is known as the Wasserstein barycenter.

This result offers an intuitive interpretation of optimal fair prediction and suggests a simple post-processing algorithm to achieve fairness. We establish fairness-risk guarantees for this procedure. Numerical experiments indicate that our method is very effective in learning fair models, with a relative increase in error rate that is smaller than the relative gain in fairness.
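
A minimal numpy sketch of that post-processing intuition (our simplification, using empirical CDFs and quantiles rather than the paper's exact estimator): each subgroup's predictions are pushed through the subgroup's own CDF and then through a weighted mixture of all subgroups' quantile functions, so the subgroups end up with approximately the same output distribution.

```python
import numpy as np

def barycenter_postprocess(scores, sensitive):
    """Map each subgroup's score distribution onto an approximate Wasserstein
    barycenter of all subgroups' score distributions.

    scores   : raw outputs of an already trained regression model
    sensitive: binary subgroup membership, known at inference time
    """
    groups = [0, 1]
    weights = [np.mean(sensitive == g) for g in groups]
    fair = np.empty_like(scores, dtype=float)
    for g in groups:
        mask = sensitive == g
        # Empirical CDF of this subgroup's scores, evaluated at its own scores.
        ranks = np.searchsorted(np.sort(scores[mask]), scores[mask], side="right") / mask.sum()
        # Weighted average of the subgroups' quantile functions at those ranks.
        fair[mask] = sum(
            w * np.quantile(scores[sensitive == h], np.clip(ranks, 0, 1))
            for h, w in zip(groups, weights)
        )
    return fair
```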

"Exploiting MMD and Sinkhorn divergences for fair and transferable representation learning

Where the first paper described a general learning method, and the second a regression method, this paper concerns deep learning. We show how to improve demographic parity in the multitask-learning setting, in which a deep-learning model learns a single representation of the input data that is useful for multiple tasks. We derive theoretical guarantees on the learned model, establishing that the representation will still reduce bias even when transferred to novel tasks.

We propose a learning algorithm that imposes constraints based on two different ways of measuring distances between probability distributions: maximum mean discrepancy (MMD) and Sinkhorn divergence. Keeping these distances small ensures that we represent inputs in a similar way when they differ only in the sensitive attribute. We present experiments on three real-world datasets, showing that the proposed method outperforms state-of-the-art approaches by a significant margin.
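
To give a feel for the kind of constraint involved (a generic RBF-kernel MMD penalty written in PyTorch, not the authors' exact training objective), one can penalize the discrepancy between the representations computed for the two subgroups and add it, suitably weighted, to the task loss:

```python
import torch

def mmd_penalty(z_a, z_b, bandwidth=1.0):
    """Biased estimate of the squared MMD (RBF kernel) between two batches of
    learned representations, z_a of shape (n, d) and z_b of shape (m, d).
    Driving this toward zero encourages the representation distributions of
    the two subgroups to match."""
    def rbf(x, y):
        d2 = torch.cdist(x, y) ** 2
        return torch.exp(-d2 / (2 * bandwidth ** 2))

    return rbf(z_a, z_a).mean() + rbf(z_b, z_b).mean() - 2 * rbf(z_a, z_b).mean()

# Hypothetical use inside a training loop:
# loss = task_loss + fairness_weight * mmd_penalty(encoder(x[s == 0]), encoder(x[s == 1]))
```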

Algorithmic fairness is a topic of great importance, with impact on many applications. In our work, we have attempted to take a small step forward, but the issue requires much further research; even the definition of what “being fair” means for an ML model is still an open research question.

It’s also becoming clearer that we need to keep humans in the loop during the lifecycle of ML models, to evaluate whether the models are acting as we would like them to. In this sense, it is important to note that many other research subjects – such as the explainability, interpretability, and privacy of ML models – are deeply connected to algorithmic fairness. They can work in synergy, with the common goal of increasing the trustworthiness of ML models.
