Computing on private data

Both secure multiparty computation and differential privacy protect the privacy of data used in computation, but each has advantages in different contexts.

Many of today’s most innovative computation-based products and solutions are fueled by data. Where those data are private, it is essential to protect them and to prevent the release of information about data subjects, owners, or users to the wrong parties. How can we perform useful computations on sensitive data while preserving privacy?


We will revisit two well-studied approaches to this challenge: secure multiparty computation (MPC) and differential privacy (DP). MPC and DP were invented to address different real-world problems and to achieve different technical goals. However, because they are both aimed at using private information without fully revealing it, they are often confused. To help draw a distinction between the two approaches, we will discuss the power and limitations of both and give typical scenarios in which each can be highly effective.

We are interested in scenarios in which multiple individuals (sometimes, society as a whole) can derive substantial utility from a computation on private data but, in order to preserve privacy, cannot simply share all of their data with each other or with an external party.

Secure multiparty computation

MPC methods allow a group of parties to collectively perform a computation that involves all of their private data while revealing only the result of the computation. More formally, an MPC protocol enables n parties, each of whom possesses a private dataset, to compute a function of the union of their datasets in such a way that the only information revealed by the computation is the output of the function. Common situations in which MPC can be used to protect private interests include

  • auctions: the winning bid amount should be made public, but no information about the losing bids should be revealed;
  • voting: the number of votes cast for each option should be made public but not the vote cast by any one individual;
  • machine learning inference: secure two-party computation enables a client to submit a query to a server that holds a proprietary model and receive a response, keeping the query private from the server and the model private from the client.

Note that the number n of participants can be quite small (e.g., two in the case of machine learning inference), moderate in size, or very large; the latter two size ranges both occur naturally in auctions and votes. Similarly, the participants may be known to each other (as they would be, for example, in a departmental faculty vote) or not (as, for example, in an online auction). MPC protocols mathematically guarantee the secrecy of input values but do not attempt to hide the identities of the participants; if anonymous participation is desired, it can be achieved by combining MPC with an anonymous-communication protocol.

Although MPC may seem like magic, it is implementable and even practical using cryptographic and distributed-computing techniques. For example, suppose that Alice, Bob, Carlos, and David are four engineers who want to compare their annual raises. Alice selects four random numbers that sum to her raise. She keeps one number to herself and gives each of the other three to one of the other engineers. Bob, Carlos, and David do the same with their own raises.

Figure: Secure multiparty computation. Four engineers wish to compute their average raise without revealing any one engineer's raise to the others. Each selects four numbers that sum to his or her raise and sends three of them to the other engineers. Each engineer then sums the four numbers he or she holds (one kept private and three received from the others). The sum of all four engineers' sums equals the sum of all four raises.

After everyone has distributed the random numbers, each engineer adds up the numbers he or she is holding and sends the sum to the others. Each engineer adds up these four sums privately (i.e., on his or her local machine) and divides by four to get the average raise. Now they can all compare their raises to the team average.


Note that, because Alice (like Bob, Carlos, and David) kept one of her four numbers to herself, no one else learned her actual raise. The sum she announced didn't correspond to anyone's raise; an announced sum can even be negative, because all that matters is that each engineer's four chosen numbers add up to his or her raise. The signs and magnitudes of the individual numbers are irrelevant.

Summing all of the engineers’ sums results in the same value as summing the raises directly, namely $12,686. If all of the engineers follow this protocol faithfully, dividing this value by four yields the team average raise of $3,171.50, which allows each person to compare his or her raise against the team average (locally and hence privately) without revealing any salary information.
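The additive-sharing protocol above can be sketched in a few lines of code. This is a toy illustration only (a deployed MPC protocol would communicate over secure channels and typically work over a finite field); the individual raise figures below are invented, chosen so that they total $12,686 as in the example:

```python
import random

def make_shares(value, n):
    """Split `value` into n additive shares that sum to it.
    Individual shares may be negative or exceed the value."""
    shares = [random.randint(-10_000, 10_000) for _ in range(n - 1)]
    shares.append(value - sum(shares))
    return shares

# Hypothetical raises totaling $12,686.
raises = {"Alice": 3000, "Bob": 2500, "Carlos": 4186, "David": 3000}
n = len(raises)

# Each engineer splits his or her raise into n shares, keeping one
# and sending one of the others to each colleague.
all_shares = {name: make_shares(r, n) for name, r in raises.items()}

# Engineer j sums the shares he or she holds (one from each engineer,
# including him- or herself) and announces only that sum.
announced = [sum(all_shares[name][j] for name in raises) for j in range(n)]

# Everyone computes the average locally from the announced sums.
average = sum(announced) / n
print(average)  # 3171.5
```

No engineer ever sees another engineer's raise; only the per-engineer sums of shares are announced, and those sums together reveal nothing beyond the total.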

A highly readable introduction to MPC that emphasizes practical protocols, some of which have been deployed in real-world scenarios, can be found in a monograph by Evans, Kolesnikov, and Rosulek. Examples of real-world applications that have been deployed include analysis of gender-based wage gaps in Boston-area companies, aggregate adoption of cybersecurity measures, and Covid exposure notification. Readers may also wish to read our previous blog post on this and related topics.

Differential privacy

Differential privacy (DP) is a body of statistical and algorithmic techniques for releasing an aggregate function of a dataset without revealing the mapping between data contributors and data items. As in MPC, we have n parties, each of whom possesses a data item. Either the parties themselves or, more often, an external agent wishes to compute an aggregate function of the parties’ input data.


If this computation is performed in a differentially private manner, then no information that could be inferred from the output about the ith input, xi, can be associated with the individual party Pi. Typically, the number n of participants is very large, the participants are not known to each other, and the goal is to compute a statistical property of the set {x1, …, xn} while protecting the privacy of individual data contributors {P1, …, Pn}.

In slightly more detail, we say that a randomized algorithm M preserves differential privacy with respect to an aggregation function f if it satisfies two properties. First, for every set of input values, the output of M closely approximates the value of f. Second, for every distinct pair (xi, xi′) of possible values for the ith individual input, the distribution of M(x1, …, xi, …, xn) is approximately equivalent to the distribution of M(x1, …, xi′, …, xn). The maximum “distance” between the two distributions is characterized by a parameter, ϵ, called the privacy parameter, and M is called an ϵ-differentially private algorithm.

Note that the output of a differentially private algorithm is a random variable drawn from a distribution on the range of the function f. That is because DP computation requires randomization; in particular, it works by “adding noise.” All known DP techniques introduce a salient trade-off between the privacy parameter and the utility of the output of the computation. Smaller values of ϵ produce better privacy guarantees, but they require more noise and hence produce less-accurate outputs; larger values of ϵ yield worse privacy bounds, but they require less noise and hence deliver better accuracy.

For example, consider a poll, the goal of which is to predict who is going to win an election. The pollster and respondents are willing to sacrifice some accuracy in order to improve privacy. Suppose respondents P1, …, Pn have predictions x1, …, xn, respectively, where each xi is either 0 or 1. The poll is supposed to output a good estimate of p, which we use to denote the fraction of the parties who predict 1. The DP framework allows us to compute an accurate estimate and simultaneously to preserve each respondent’s “plausible deniability” about his or her true prediction by requiring each respondent to add noise before sending a response to the pollster.


We now provide a few more details of the polling example. Consider the algorithm m that takes as input a bit xi and flips a fair coin. If the coin comes up tails, then m outputs xi; otherwise m flips another fair coin and outputs 1 if heads and 0 if tails. This m is known as the randomized response mechanism; when the pollster asks Pi for a prediction, Pi responds with m(xi). Simple statistical calculation shows that, in the set of answers that the pollster receives from the respondents, the expected fraction that are 1’s is

Pr[First coin is tails] ⋅ p + Pr[First coin is heads] ⋅ Pr[Second coin is heads] = p/2 + 1/4.

Thus, the expected number of 1’s received is n(p/2 + 1/4). Let N = m(x1) + ⋅⋅⋅ + m(xn) denote the actual number of 1’s received; we approximate p by M(x1, …, xn) = 2N/n − 1/2. In fact, this approximation algorithm, M, is differentially private. Accuracy follows from the statistical calculation, and privacy follows from the “plausible deniability” provided by the fact that m outputs 1 with probability at least 1/4 regardless of the value of xi.
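A short simulation makes the randomized-response mechanism and the estimator concrete. The electorate below is invented (60% of 10,000 respondents predict candidate 1); the mechanism m and the estimator follow the description above:

```python
import random

def m(x):
    """Randomized response: flip a fair coin; on tails answer truthfully,
    on heads answer with a second fair coin flip."""
    if random.random() < 0.5:                   # first coin: tails
        return x
    return 1 if random.random() < 0.5 else 0    # second coin

def estimate_p(responses):
    """Invert E[m(x)] = p/2 + 1/4: estimate p by 2N/n - 1/2."""
    n = len(responses)
    N = sum(responses)
    return 2 * N / n - 0.5

random.seed(0)
# Hypothetical predictions: true fraction p = 0.6.
true_predictions = [1] * 6000 + [0] * 4000
responses = [m(x) for x in true_predictions]
print(estimate_p(responses))   # close to the true fraction 0.6
```

Each individual response is too noisy to incriminate its sender, yet the aggregate estimate concentrates tightly around p as n grows.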

Differential privacy has dominated the study of privacy-preserving statistical computation since it was introduced in 2006 and is widely regarded as a fundamental breakthrough in both theory and practice. An excellent overview of algorithmic techniques in DP can be found in a monograph by Dwork and Roth. DP has been applied in many real-world applications, most notably the 2020 US Census.

The power and limitations of MPC and DP

We now review some of the strengths and weaknesses of these two approaches and highlight some key differences between them.

Secure multiparty computation

MPC has been extensively studied for more than 40 years, and there are powerful, general results showing that it can be done for all functions f using a variety of cryptographic and coding-theoretic techniques, system models, and adversary models.

Despite the existence of fully general, secure protocols, MPC has seen limited real-world deployment. One obstacle is protocol complexity — particularly the communication complexity of the most powerful, general solutions. Much current work on MPC addresses this issue.


More-fundamental questions that must be answered before MPC can be applied in a given scenario include the nature of the function f being computed and the information environment in which the computation is taking place. In order to explain this point, we first note that the set of participants in the MPC computation is not necessarily the same as the set of parties that receive the result of the computation. The two sets may be identical, one may be a proper subset of the other, they may have some (but not all) elements in common, or they may be entirely disjoint.

Although a secure MPC protocol (provably!) reveals nothing to the recipients about the private inputs except what can be inferred from the result, even that may be too much. For example, if the result is the number of votes for and votes against a proposition in a referendum, and the referendum passes unanimously, then the recipients learn exactly how each participant voted. The referendum authority can avoid revealing private information by using a different f, e.g., one that is “YES” if the number of votes for the proposition is at least half the number of participants and “NO” if it is less than half.

This simple example demonstrates a pervasive trade-off in privacy-preserving computation: participants can compute a function that is more informative if they are willing to reveal private information to the recipients in edge cases; they can achieve more privacy in edge cases if they are willing to compute a less informative function.

In addition to specifying the function f carefully, users of MPC must evaluate the information environment in which MPC is to be deployed and, in particular, must avoid the catastrophic loss of privacy that can occur when the recipients combine the result of the computation with auxiliary information. For example, consider the scenario in which the participants are all of the companies in a given commercial sector and metropolitan area, and they wish to use MPC to compute the total dollar loss that they (collectively) experienced in a given year that was attributable to data breaches; in this example, the recipients of the result are the companies themselves.


Suppose further that, during that year, one of the companies suffered a severe breach that was covered in the local media, which identified the company by name and reported an approximate dollar figure for the loss that the company suffered as a result of the breach. If that approximate figure is very close to the total loss imposed by data breaches on all the companies that year, then the participants can conclude that all but one of them were barely affected by data breaches that year.

Note that this potentially sensitive information is not leaked by the MPC protocol, which reveals nothing but the aggregate amount lost (i.e., the value of the function f). Rather, it is inferred by combining the result of the computation with information that was already available to the participants before the computation was done. The same risk that input privacy will be destroyed when results are combined with auxiliary information is posed by any computational method that reveals the exact value of the function f.

Differential privacy

The DP framework provides some elegant, simple mechanisms that can be applied to any function f whose output is a vector of real numbers. Essentially, one can independently perturb or “noise up” each component of f(x) by an appropriately defined random value. The amount of noise that must be added in order to hide the contribution (or, indeed, the participation) of any single data subject is determined by the privacy parameter and the maximum amount by which a single input can change the output of f. We explain one such mechanism in slightly more mathematical detail in the following paragraph.

One can apply the Laplace mechanism with privacy parameter ϵ to a function f, whose outputs are k-tuples of real numbers, by returning the value f(x1, …, xn) + (Y1, …, Yk) on input (x1, …, xn), where the Yi are independent random variables drawn from the Laplace distribution with parameter Δ(f)/ϵ. Here Δ(f) denotes the ℓ1 sensitivity of the function f, which captures the magnitude by which a single individual’s data can change the output of f in the worst case. The technical definition of the Laplace distribution is beyond the scope of this article, but for our purposes, its important property is that the Yi can be sampled efficiently.
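As a sketch of the Laplace mechanism for a counting query with sensitivity 1 (standard-library code only; production systems should use vetted DP libraries, since naive floating-point noise sampling can leak information):

```python
import math
import random

def sample_laplace(scale):
    """Sample Laplace(0, scale) by inverting the CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def laplace_mechanism(f_output, sensitivity, epsilon):
    """Perturb each component of f's output with Laplace(Δ(f)/ε) noise."""
    scale = sensitivity / epsilon
    return [y + sample_laplace(scale) for y in f_output]

random.seed(1)
# Counting query: one individual changes the count by at most 1, so Δ(f) = 1.
exact = [1234.0]
noisy = laplace_mechanism(exact, sensitivity=1.0, epsilon=0.5)
# The noisy count typically lands within a few multiples of Δ(f)/ε = 2 of 1234.
```

Halving ϵ here would double the noise scale: stronger privacy, lower accuracy, exactly the trade-off described above.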


Crucially, DP protects data contributors against privacy loss caused by post-processing computational results or by combining results with auxiliary information. The scenario in which privacy loss occurred when the output of an MPC protocol was combined with information from an existing news story could not occur in a DP application; moreover, no harm could be done by combining the result of a DP computation with auxiliary information in a future news story.

DP techniques also benefit from powerful composition theorems that allow separate differentially private algorithms to be combined in one application. In particular, the independent use of an ϵ1-differentially private algorithm and an ϵ2-differentially private algorithm, when taken together, is (ϵ1 + ϵ2)-differentially private.

One limitation on the applicability of DP is the need to add noise — something that may not be tolerable in some application scenarios. More fundamentally, the ℓ1 sensitivity of a function f, which yields an upper bound on the amount of noise that must be added to the output in order to achieve a given privacy parameter ϵ, also yields a lower bound. If the output of f is strongly influenced by the presence of a single outlier in the input, then it is impossible to achieve strong privacy and high accuracy simultaneously.

For example, consider the simple case in which f is the sum of all of the private inputs, and each input is an arbitrary positive integer. It is easy to see that the ℓ1 sensitivity is unbounded in this case; to hide the contribution or the participation of an individual whose data item strongly dominates those of all other individuals would require enough noise to render the output meaningless. If one can restrict all of the private inputs to a small interval [a,b], however, then the Laplace mechanism can provide meaningful privacy and accuracy.
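A minimal sketch of this bounded-sum approach follows. The input data are invented, and the clamping interval [a, b] must be chosen independently of the private data:

```python
import math
import random

def sample_laplace(scale):
    """Sample Laplace(0, scale) by inverting the CDF."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def dp_bounded_sum(values, a, b, epsilon):
    """Clamp each input to [a, b], so that Δ(f) = b - a under
    replacement of one input, then add Laplace(Δ(f)/ε) noise."""
    clamped = [min(max(v, a), b) for v in values]
    return sum(clamped) + sample_laplace((b - a) / epsilon)

random.seed(2)
# 1,000 hypothetical inputs restricted to the interval [2000, 4000].
inputs = [random.randint(2000, 4000) for _ in range(1000)]
noisy_sum = dp_bounded_sum(inputs, a=2000, b=4000, epsilon=1.0)
# Noise of scale 2,000 is small relative to a sum of roughly 3,000,000.
```

Without the clamping step the sensitivity (and hence the required noise) would be unbounded, which is precisely the outlier problem described above.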

DP was originally designed to compute statistical aggregates while preserving the privacy of individual data subjects; in particular, it was designed with real-valued functions in mind. Since then, researchers have developed DP techniques for non-numerical computations. For example, the exponential mechanism can be used to solve selection problems, in which both input and output are of arbitrary type.


In specifying a selection problem, one must define a scoring function that maps input-output pairs to real numbers. For each input x, a solution y is better than a solution y′ if the score of (x,y) is greater than that of (x,y′). The exponential mechanism generally works well (i.e., achieves good privacy and good accuracy simultaneously) for selection problems (e.g., approval voting) that can be defined by scoring functions of low sensitivity but not for those (e.g., set intersection) in which the scoring function must have high sensitivity. In fact, there is no differentially private algorithm that works well for set intersection; by contrast, MPC for set intersection is a mature and practical technology that has seen real-world deployment.
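As a sketch of the exponential mechanism applied to a simple approval-voting selection (the candidate names and approval counts below are invented; one voter changes any candidate's approval count by at most 1, so the scoring function has sensitivity 1):

```python
import math
import random

def exponential_mechanism(candidates, score, epsilon, sensitivity):
    """Sample a candidate with probability proportional to
    exp(ε·score / (2·Δ)); high-scoring outputs are exponentially likelier."""
    weights = [math.exp(epsilon * score(c) / (2 * sensitivity))
               for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]

random.seed(3)
# Hypothetical approval counts; a candidate's score is its count.
approvals = {"A": 80, "B": 75, "C": 20}
winner = exponential_mechanism(
    list(approvals), lambda c: approvals[c], epsilon=0.5, sensitivity=1.0)
# The front-runner "A" is selected with high probability, but not certainty.
```

Because the scoring function has low sensitivity, the mechanism almost always selects a near-optimal candidate while still giving every voter plausible deniability.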


In conclusion, both secure multiparty computation and differential privacy can be used to perform computations on sensitive data while preserving the privacy of those data. Important differences between the two bodies of techniques include

  • The nature of the privacy guarantee: Use of MPC to compute a function y = f(x1, x2, ..., xn) guarantees that the recipients of the result learn the output y and nothing more. For example, if there are exactly two input vectors that are mapped to y by f, the recipients of the output y gain no information about which of the two was the actual input to the MPC computation, regardless of the number of components in which these two input vectors differ or the magnitude of the differences. On the other hand, for any third input vector that does not map to y, the recipients learn with certainty that the real input to the MPC computation was not this third vector, even if it differs from one of the first two in only one component and only by a very small amount. By contrast, computing f with a DP algorithm guarantees that, for any two input vectors that differ in only one component, the (randomized!) results of the computation are approximately indistinguishable, regardless of whether the exact values of f on these two input vectors are equal, nearly equal, or extremely different. Straightforward use of composition yields a privacy guarantee for inputs that differ in c components at the expense of increasing the privacy parameter by a factor of c.
  • Typical use cases: DP techniques are most often used to compute aggregate properties of very large datasets, and typically, the identities of data contributors are not known. None of these conditions is typical of MPC use cases.
  • Exact vs. noisy answers: MPC can be used to compute exact answers for all functions f. DP requires the addition of noise. This is not a problem in many statistical computations, but even small amounts of noise may not be acceptable in some application scenarios. Moreover, if f is extremely sensitive to outliers in the input data, the amount of noise needed to achieve meaningful privacy may preclude meaningful accuracy.
  • Auxiliary information: Combining the result of a DP computation with auxiliary information cannot result in privacy loss. By contrast, any computational method (including MPC) that returns the exact value y of a function f runs the risk that a recipient of y might be able to infer something about the input data that is not implied by y alone, if y is combined with auxiliary information.

Finally, we would like to point out that, in some applications, it is possible to get the benefits of both MPC and DP. If the goal is to compute f, and g is a differentially private approximation of f that achieves good privacy and accuracy simultaneously, then one natural way to proceed is to use MPC to compute g. We expect to see both MPC and DP used to enhance data privacy in Amazon’s products and services.

ES, M, Madrid
At Amazon, we are committed to being the Earth’s most customer-centric company. The International Technology group (InTech) owns the enhancement and delivery of Amazon’s cutting-edge engineering to all the varied customers and cultures of the world. We do this through a combination of partnerships with other Amazon technical teams and our own innovative new projects. You will be joining the Tools and Machine learning (Tamale) team. As part of InTech, Tamale strives to solve complex catalog quality problems using challenging machine learning and data analysis solutions. You will be exposed to cutting edge big data and machine learning technologies, along to all Amazon catalog technology stack, and you'll be part of a key effort to improve our customers experience by tackling and preventing defects in items in Amazon's catalog. We are looking for a passionate, talented, and inventive Scientist with a strong machine learning background to help build industry-leading machine learning solutions. We strongly value your hard work and obsession to solve complex problems on behalf of Amazon customers. Key job responsibilities We look for applied scientists who possess a wide variety of skills. As the successful applicant for this role, you will with work closely with your business partners to identify opportunities for innovation. You will apply machine learning solutions to automate manual processes, to scale existing systems and to improve catalog data quality, to name just a few. You will work with business leaders, scientists, and product managers to translate business and functional requirements into concrete deliverables, including the design, development, testing, and deployment of highly scalable distributed services. You will be part of team of 5 scientists and 13 engineers working on solving data quality issues at scale. You will be able to influence the scientific roadmap of the team, setting the standards for scientific excellence. 
You will be working with state-of-the-art models, including image to text, LLMs and GenAI. Your work will improve the experience of millions of daily customers using Amazon in Europe and in other regions. You will have the chance to have great customer impact and continue growing in one of the most innovative companies in the world. You will learn a huge amount - and have a lot of fun - in the process! This position will be based in Madrid, Spain We are open to hiring candidates to work out of one of the following locations: Madrid, M, ESP
US, WA, Redmond
Project Kuiper is an initiative to increase global broadband access through a constellation of 3,236 satellites in low Earth orbit (LEO). Its mission is to bring fast, affordable broadband to unserved and underserved communities around the world. Project Kuiper will help close the digital divide by delivering fast, affordable broadband to a wide range of customers, including consumers, businesses, government agencies, and other organizations operating in places without reliable connectivity. As an Applied Scientist on the team you will responsible for building out and maintaining the algorithms and software services behind one of the world’s largest satellite constellations. You will be responsible for developing algorithms and applications that provide mission critical information derived from past and predicted satellite orbits to other systems and organizations rapidly, reliably, and at scale. You will be focused on contributing to the design and analysis of software systems responsible across a broad range of areas required for automated management of the Kuiper constellation. You will apply knowledge of mathematical modeling, optimization algorithms, astrodynamics, state estimation, space systems, and software engineering across a wide variety of problems to enable space operations at an unprecedented scale. You will develop features for systems to interface with internal and external teams, predict and plan communication opportunities, manage satellite orbits determination and prediction systems, develop analysis and infrastructure to monitor and support systems performance. Your work will interface with various subsystems within Project Kuiper and Amazon, as well as with external organizations, to enable engineers to safely and efficiently manage the satellite constellation. The ideal candidate will be detail oriented, strong organizational skills, able to work independently, juggle multiple tasks at once, and maintain professionalism under pressure. 
You should have proven knowledge of mathematical modeling and optimization along with strong software engineering skills. You should be able to independently understand customer requirements, and use data-driven approaches to identify possible solutions, select the best approach, and deliver high-quality applications. Export Control Requirement: Due to applicable export control laws and regulations, candidates must be a U.S. citizen or national, U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum. About the team The Constellation Management & Space Safety team maintains and builds the software services responsible for maintaining situational awareness of Kuiper satellites through their entire lifecycle in space. We coordinate with internal and external organizations to maintain the nominal operational state of the constellation. We build automated systems that use satellite telemetry and other relevant data to predict future orbits, plan maneuvers to avoid high risk close approaches with other objects in space, keep satellites in the desired locations, and exchange data with external organizations. We provide visibility information that is used to predict and establish communication channels for Kuiper satellites. We are open to hiring candidates to work out of one of the following locations: Redmond, WA, USA
US, WA, Seattle
Join us in the evolution of Amazon’s Seller business! The Selling Partner Recruitment and Success organization is the growth and development engine for our Store. Partnering with business, product, and engineering, we catalyze SP growth with comprehensive and accurate data, unique insights, and actionable recommendations and collaborate with WW SP facing teams to drive adoption and create feedback loops. We strongly believe that any motivated SP should be able to grow their businesses and reach their full potential by using our scaled, automated, and self-service tools. We aim to accelerate the growth of Sellers by providing tools and insights that enable them to make better and faster decisions at each step of selection management. To accomplish this, we offer intelligent insights that are both detailed and actionable, allowing Sellers to introduce new products and engage with customers effectively. We leverage extensive structured and unstructured data to generate science-based insights about their business. Furthermore, we provide personalized recommendations tailored to individual Sellers' business objectives in a user-friendly format. These insights and recommendations are integrated into our products, including Amazon Brand Analytics (ABA), Product Opportunity Explorer (OX), and Manage Your Growth (MYG). We are looking for a talented and passionate Sr. Research Scientist to lead our research endeavors and develop world-class statistical and machine learning models. The successful candidate will work closely with Product Managers (PM), User Experience (UX) designers, engineering teams, and Seller Growth Consulting teams to provide actionable insights that drive improvements in Seller businesses. Key job responsibilities You set the standard for scientific excellence and make decisions that affect the way we build and integrate algorithms. Your solutions are exemplary in terms of algorithm design, clarity, model structure, efficiency, and extensibility. 
You tackle intrinsically hard problems, acquiring expertise as needed. You decompose complex problems into straightforward solutions. About the team The Seller Growth science team aims to provide data and science solutions to drive Seller growth and create better Seller experiences. We structure our science domain with three key themes and two horizontal components. We discover the opportunity space by identifying opportunities with unrealized potential, then generate actionable analytics to identify high value actions (HVAs) that unlock the opportunity space, and finally, empower Sellers with personalized Growth Plans and differentiated treatment that help them realize their potential. We are open to hiring candidates to work out of one of the following locations: Seattle, WA, USA
US, CA, San Diego
Join our Private Brand Intelligence (PBI) organization in building Data Science driven solutions at scale to delight our customers with products across our leading private brands such as Amazon Basics, Amazon Essentials, and by Amazon. PBI applies Generative AI, Machine Learning, Statistics, and Economics to derive actionable insights about the complex economy of Amazon’s retail business. We also develop ML/Econ models and algorithms to drive strategic business decisions and improve operations. We are an interdisciplinary team of Scientists, Economists, Engineers, and Product Managers incubating and building day one solutions using cutting-edge technology, to solve some of the toughest business problems at Amazon. You will work with business leaders and economists to translate business and functional requirements into concrete deliverables, including the design, development, testing, and deployment of highly scalable distributed solutions. You will invent and implement scalable ML and econometric models while building tools to help our customers gain and apply insights. This is a unique, high visibility opportunity for someone who wants to have business impact, dive deep into large-scale science problems, enable measurable actions on the Consumer economy, and work closely with scientists and economists. If you are interested in Machine Learning, Generative AI, and large-scale intelligent solutions then this is the role you have been looking for. We are a Day 1 team, with a charter to be disruptive through the use of Data Science and Machine Learning. You will start on green field projects working with engineers to bring our models to life as well as data products that other teams can benefit from. We are an inclusive team, and are looking for Data Scientists that aren't averse to learning and building and or optimizing ML models alongside our engineers, product managers and business partners. 
As a Data Scientist on the team, you will drive improvements to our business, collaborating with scientists, economists, engineers and highly-engaged stakeholders to deliver actionable insights continuously. We’re truly an agile shop: we work closely with users, deliver features with high frequency, can pivot on a dime when needed, and are passionate about solving customer pain points. We are looking for data science leaders who share our vision for continuously improving the customer experience, who are motivated by challenging business problems and who love thinking big. Key job responsibilities * You will take the lead on large projects that span multiple teams. The problems you solve will be ambiguous, requiring both technical and domain expertise. You will deliver significant benefits to business with minimal assistance. * You need to understand challenges in your teams’ business area, the applicability of relevant data science disciplines, and interactions among systems. You will influence your team’s technical and business strategy by making insightful contributions to team priorities and approach. * You will make solutions simpler. You will optimize connected systems using their dynamics. You will improve the consistency and integration between your team’s solutions and the work being done by related teams. * You will improve the work done by others, either via a collaborative effort or by increasing their scientific knowledge, using specialized tools or advanced techniques. You will lead and actively participate in scientific reviews for your team and others. * You are able to communicate your ideas effectively to achieve the right outcome for your team and customer including when to make appropriate trade-offs. You harmonize discordant views and lead the resolution of contentious issues. * You actively participate in the hiring process and improve the skills and knowledge of others via mentoring. 
We are open to hiring candidates to work out of one of the following locations: San Diego, CA, USA | Seattle, WA, USA
US, WA, Seattle
Are you interested in working with top talents in Optimization, Operations Research and Supply Chain to help Amazon to efficiently match our Devices with worldwide customers? We have challenging problems and need your innovative solutions to make tremendous financial impacts! The Amazon Demand Science Optimization organization is looking for an Applied Scientist with background in Operations Research, Optimization, Supply Chain and/or Simulation to support science efforts to integrate across inventory management functionalities. Our team is responsible for science models (both deterministic and stochastic) that power world-wide inventory allocation for Amazon Devices business that includes Echo, Kindle, Fire Tablets, Amazon TVs, Amazon Fire TV sticks, Ring, and other smart home devices. We formulate and solve challenging large-scale financially-based optimization problems which ingest demand forecasts and produce optimal procurement, production, distribution, and inventory management plans. In addition, we also work closely with the demand forecasting, material procurement, production planning, finance, and logistics teams to co-optimize the inventory management and supply chain for Amazon Devices given operational constraints. Key job responsibilities The successful candidate will be a self-starter, comfortable with ambiguity, with strong attention to detail, and an ability to work in a fast-paced and ever-changing environment and a desire to help shape the overall business. 
Responsibilities include: - Design and develop advanced mathematical, simulation, and optimization models and apply them to define strategic and tactical needs and drive appropriate business and technical solutions in the areas of inventory management and distribution, network flow, supply chain optimization, and demand planning - Apply mathematical optimization techniques (linear, quadratic, SOCP, robust, stochastic, dynamic, mixed-integer programming, network flows, nonlinear, nonconvex programming) and algorithms to design optimal or near optimal solution methodologies to be used by in-house decision support tools and software - Research, prototype and experiment with these models by using modeling languages such as Python; participate in the production level deployment - Create, enhance, and maintain technical documentation, and present to other Scientists, Product, and Engineering teams - Lead project plans from a scientific perspective by managing product features, technical risks, milestones and launch plans - Influence the organization's long-term roadmap and resourcing, onboard new technologies onto Science team's toolbox, mentor other Scientists About the team We are open to hiring candidates to work out of one of the following locations: Seattle, WA, USA