Computing on private data

Both secure multiparty computation and differential privacy protect the privacy of data used in computation, but each has advantages in different contexts.

Many of today’s most innovative computation-based products and solutions are fueled by data. Where those data are private, it is essential to protect them and to prevent the release of information about data subjects, owners, or users to the wrong parties. How can we perform useful computations on sensitive data while preserving privacy?


We will revisit two well-studied approaches to this challenge: secure multiparty computation (MPC) and differential privacy (DP). MPC and DP were invented to address different real-world problems and to achieve different technical goals. However, because they are both aimed at using private information without fully revealing it, they are often confused. To help draw a distinction between the two approaches, we will discuss the power and limitations of both and give typical scenarios in which each can be highly effective.

We are interested in scenarios in which multiple individuals (sometimes, society as a whole) can derive substantial utility from a computation on private data but, in order to preserve privacy, cannot simply share all of their data with each other or with an external party.

Secure multiparty computation

MPC methods allow a group of parties to collectively perform a computation that involves all of their private data while revealing only the result of the computation. More formally, an MPC protocol enables n parties, each of whom possesses a private dataset, to compute a function of the union of their datasets in such a way that the only information revealed by the computation is the output of the function. Common situations in which MPC can be used to protect private interests include

  • auctions: the winning bid amount should be made public, but no information about the losing bids should be revealed;
  • voting: the number of votes cast for each option should be made public but not the vote cast by any one individual;
  • machine learning inference: secure two-party computation enables a client to submit a query to a server that holds a proprietary model and receive a response, keeping the query private from the server and the model private from the client.

Note that the number n of participants can be quite small (e.g., two in the case of machine learning inference), moderate in size, or very large; the latter two size ranges both occur naturally in auctions and votes. Similarly, the participants may be known to each other (as they would be, for example, in a departmental faculty vote) or not (as, for example, in an online auction). MPC protocols mathematically guarantee the secrecy of input values but do not attempt to hide the identities of the participants; if anonymous participation is desired, it can be achieved by combining MPC with an anonymous-communication protocol.

Although MPC may seem like magic, it is implementable and even practical using cryptographic and distributed-computing techniques. For example, suppose that Alice, Bob, Carlos, and David are four engineers who want to compare their annual raises. Alice selects four random numbers that sum to her raise. She keeps one number to herself and gives each of the other three to one of the other engineers. Bob, Carlos, and David do the same with their own raises.

Figure: Four engineers wish to compute their average raise without revealing any one engineer's raise to the others. Each selects four numbers that sum to his or her raise and sends three of them to the other engineers. Each engineer then sums the four numbers he or she holds (one kept private and three received from the others); the sum of all four engineers' sums equals the sum of all four raises.

After everyone has distributed the random numbers, each engineer adds up the numbers he or she is holding and sends the sum to the others. Each engineer adds up these four sums privately (i.e., on his or her local machine) and divides by four to get the average raise. Now they can all compare their raises to the team average.

 
|                | Amount  | Alice's share | Bob's share | Carlos's share | David's share | Sum of sums |
| Alice's raise  | 3800    | -1000*        | 2500        | 900            | 1400          |             |
| Bob's raise    | 2514    | 700           | 400*        | 650            | 764           |             |
| Carlos's raise | 2982    | 750           | -100        | 832*           | 1500          |             |
| David's raise  | 3390    | 1500          | 900         | -3000          | 3990*         |             |
| Sum            | 12686   | 1950          | 3700        | -618           | 7654          | 12686       |
| Average        | 3171.5  |               |             |                |               | 3171.5      |

(* = the share each engineer keeps to himself or herself)

Note that, because Alice (like Bob, Carlos, and David) kept one share of her raise to herself (the asterisked diagonal entries in the table), no one else learned her actual raise. When she summed the numbers she was holding, the result didn’t correspond to anyone’s raise; in fact, Carlos’s sum was negative. All that matters is that each engineer’s four shares add up to his or her raise, so the signs and magnitudes of the individual shares are irrelevant.

Summing all of the engineers’ sums results in the same value as summing the raises directly, namely $12,686. If all of the engineers follow this protocol faithfully, dividing this value by four yields the team average raise of $3,171.50, which allows each person to compare his or her raise against the team average (locally and hence privately) without revealing any salary information.
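For readers who want to experiment, here is a minimal Python sketch of this additive-secret-sharing protocol (the function names are ours, and the raise amounts are taken from the table above). It uses Python's random module for brevity; a real MPC deployment would use a cryptographically secure source of randomness and would typically work with shares reduced modulo a large prime.

```python
import random

def make_shares(value, n):
    """Split `value` into n additive shares: n - 1 random integers plus a
    balancing share, so that the shares sum exactly to `value`."""
    shares = [random.randint(-5000, 5000) for _ in range(n - 1)]
    shares.append(value - sum(shares))
    return shares

# Each engineer's private raise (the amounts from the table above).
raises = {"Alice": 3800, "Bob": 2514, "Carlos": 2982, "David": 3390}
names = list(raises)
n = len(names)

# Step 1: each engineer splits his or her raise into n shares, keeps one,
# and sends one of the remaining shares to each of the other engineers.
shares = {sender: make_shares(amount, n) for sender, amount in raises.items()}

# held[i] is the list of shares the i-th engineer ends up holding:
# one share of every engineer's raise, including his or her own.
held = {names[i]: [shares[sender][i] for sender in names] for i in range(n)}

# Step 2: each engineer announces only the sum of the shares he or she holds.
announced = {name: sum(vals) for name, vals in held.items()}

# Step 3: everyone adds the announced sums and divides by n.
total = sum(announced.values())
print("Sum of raises:", total)       # 12686
print("Average raise:", total / n)   # 3171.5
assert total == sum(raises.values())
```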

A highly readable introduction to MPC that emphasizes practical protocols, some of which have been deployed in real-world scenarios, can be found in a monograph by Evans, Kolesnikov, and Rosulek. Examples of real-world applications that have been deployed include analysis of gender-based wage gaps in Boston-area companies, aggregate adoption of cybersecurity measures, and Covid exposure notification. Readers may also wish to read our previous blog post on this and related topics.

Differential privacy

Differential privacy (DP) is a body of statistical and algorithmic techniques for releasing an aggregate function of a dataset without revealing the mapping between data contributors and data items. As in MPC, we have n parties, each of whom possesses a data item. Either the parties themselves or, more often, an external agent wishes to compute an aggregate function of the parties’ input data.


If this computation is performed in a differentially private manner, then no information that could be inferred from the output about the ith input, xi, can be associated with the individual party Pi. Typically, the number n of participants is very large, the participants are not known to each other, and the goal is to compute a statistical property of the set {x1, …, xn} while protecting the privacy of individual data contributors {P1, …, Pn}.

In slightly more detail, we say that a randomized algorithm M preserves differential privacy with respect to an aggregation function f if it satisfies two properties. First, for every set of input values, the output of M closely approximates the value of f. Second, for every distinct pair (xi, xi′) of possible values for the ith individual input, the distribution of M(x1, …, xi, …, xn) is approximately equivalent to the distribution of M(x1, …, xi′, …, xn). The maximum “distance” between the two distributions is characterized by a parameter, ϵ, called the privacy parameter, and M is called an ϵ-differentially private algorithm.
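For readers who want the condition spelled out, the standard way to formalize this approximate equivalence is the following: for every set S of possible outputs, Pr[M(x1, …, xi, …, xn) ∈ S] ≤ e^ϵ ⋅ Pr[M(x1, …, xi′, …, xn) ∈ S], and symmetrically with xi and xi′ exchanged.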

Note that the output of a differentially private algorithm is a random variable drawn from a distribution on the range of the function f. That is because DP computation requires randomization; in particular, it works by “adding noise.” All known DP techniques introduce a salient trade-off between the privacy parameter and the utility of the output of the computation. Smaller values of ϵ produce better privacy guarantees, but they require more noise and hence produce less-accurate outputs; larger values of ϵ yield worse privacy bounds, but they require less noise and hence deliver better accuracy.

For example, consider a poll, the goal of which is to predict who is going to win an election. The pollster and respondents are willing to sacrifice some accuracy in order to improve privacy. Suppose respondents P1, …, Pn have predictions x1, …, xn, respectively, where each xi is either 0 or 1. The poll is supposed to output a good estimate of p, which we use to denote the fraction of the parties who predict 1. The DP framework allows us to compute an accurate estimate and simultaneously to preserve each respondent’s “plausible deniability” about his or her true prediction by requiring each respondent to add noise before sending a response to the pollster.


We now provide a few more details of the polling example. Consider the algorithm m that takes as input a bit xi and flips a fair coin. If the coin comes up tails, then m outputs xi; otherwise m flips another fair coin and outputs 1 if heads and 0 if tails. This m is known as the randomized response mechanism; when the pollster asks Pi for a prediction, Pi responds with m(xi). Simple statistical calculation shows that, in the set of answers that the pollster receives from the respondents, the expected fraction that are 1’s is

Pr[First coin is tails] ⋅ p + Pr[First coin is heads] ⋅ Pr[Second coin is heads] = p/2 + 1/4.

Thus, the expected number of 1’s received is n(p/2 + 1/4). Let N = m(x1) + ⋅⋅⋅ + m(xn) denote the actual number of 1’s received; we approximate p by M(x1, …, xn) = 2N/n − 1/2. In fact, this approximation algorithm, M, is differentially private. Accuracy follows from the statistical calculation, and privacy follows from the “plausible deniability” provided by the fact that each individual response m(xi) is 1 with probability at least 1/4 (and 0 with probability at least 1/4) regardless of the value of xi.
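A short simulation makes the accuracy claim concrete. The Python sketch below is ours; the true-prediction probability of 0.6 is just an illustrative choice. It implements the randomized-response mechanism m and the estimator M = 2N/n − 1/2 and compares the estimate with the true fraction p.

```python
import random

def m(x_i):
    """Randomized response: with probability 1/2 report the true bit,
    otherwise report the outcome of a second fair coin flip."""
    if random.random() < 0.5:                    # first coin: answer truthfully
        return x_i
    return 1 if random.random() < 0.5 else 0     # second coin: random answer

def estimate_p(responses):
    """The estimator M: E[response] = p/2 + 1/4, so p is estimated by 2N/n - 1/2."""
    n = len(responses)
    N = sum(responses)
    return 2 * N / n - 0.5

random.seed(0)
n = 100_000
# Illustrative ground truth: each respondent predicts 1 with probability 0.6.
true_bits = [1 if random.random() < 0.6 else 0 for _ in range(n)]
p_true = sum(true_bits) / n

responses = [m(x) for x in true_bits]
print(f"true fraction p: {p_true:.3f}")
print(f"estimate:        {estimate_p(responses):.3f}")
```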

Differential privacy has dominated the study of privacy-preserving statistical computation since it was introduced in 2006 and is widely regarded as a fundamental breakthrough in both theory and practice. An excellent overview of algorithmic techniques in DP can be found in a monograph by Dwork and Roth. DP has been applied in many real-world applications, most notably the 2020 US Census.

The power and limitations of MPC and DP

We now review some of the strengths and weaknesses of these two approaches and highlight some key differences between them.

Secure multiparty computation

MPC has been extensively studied for more than 40 years, and there are powerful, general results showing that it can be done for all functions f using a variety of cryptographic and coding-theoretic techniques, system models, and adversary models.

Despite the existence of fully general, secure protocols, MPC has seen limited real-world deployment. One obstacle is protocol complexity — particularly the communication complexity of the most powerful, general solutions. Much current work on MPC addresses this issue.


More-fundamental questions that must be answered before MPC can be applied in a given scenario include the nature of the function f being computed and the information environment in which the computation is taking place. In order to explain this point, we first note that the set of participants in the MPC computation is not necessarily the same as the set of parties that receive the result of the computation. The two sets may be identical, one may be a proper subset of the other, they may have some (but not all) elements in common, or they may be entirely disjoint.

Although a secure MPC protocol (provably!) reveals nothing to the recipients about the private inputs except what can be inferred from the result, even that may be too much. For example, if the result is the number of votes for and votes against a proposition in a referendum, and the referendum passes unanimously, then the recipients learn exactly how each participant voted. The referendum authority can avoid revealing private information by using a different f, e.g., one that is “YES” if the number of votes for the proposition is at least half the number of participants and “NO” if it is less than half.

This simple example demonstrates a pervasive trade-off in privacy-preserving computation: participants can compute a function that is more informative if they are willing to reveal private information to the recipients in edge cases; they can achieve more privacy in edge cases if they are willing to compute a less informative function.

In addition to specifying the function f carefully, users of MPC must evaluate the information environment in which MPC is to be deployed and, in particular, must avoid the catastrophic loss of privacy that can occur when the recipients combine the result of the computation with auxiliary information. For example, consider the scenario in which the participants are all of the companies in a given commercial sector and metropolitan area, and they wish to use MPC to compute the total dollar loss that they (collectively) experienced in a given year that was attributable to data breaches; in this example, the recipients of the result are the companies themselves.


Suppose further that, during that year, one of the companies suffered a severe breach that was covered in the local media, which identified the company by name and reported an approximate dollar figure for the loss that the company suffered as a result of the breach. If that approximate figure is very close to the total loss imposed by data breaches on all the companies that year, then the participants can conclude that all but one of them were barely affected by data breaches that year.

Note that this potentially sensitive information is not leaked by the MPC protocol, which reveals nothing but the aggregate amount lost (i.e., the value of the function f). Rather, it is inferred by combining the result of the computation with information that was already available to the participants before the computation was done. The same risk that input privacy will be destroyed when results are combined with auxiliary information is posed by any computational method that reveals the exact value of the function f.

Differential privacy

The DP framework provides some elegant, simple mechanisms that can be applied to any function f whose output is a vector of real numbers. Essentially, one can independently perturb or “noise up” each component of f(x) by an appropriately defined random value. The amount of noise that must be added in order to hide the contribution (or, indeed, the participation) of any single data subject is determined by the privacy parameter and the maximum amount by which a single input can change the output of f. We explain one such mechanism in slightly more mathematical detail in the following paragraph.

One can apply the Laplace mechanism with privacy parameter ϵ to a function f, whose outputs are k-tuples of real numbers, by returning the value f(x1, …, xn) + (Y1, …, Yk) on input (x1, …, xn), where the Yi are independent random variables drawn from the Laplace distribution with parameter Δ(f)/ϵ. Here Δ(f) denotes the ℓ1 sensitivity of the function f, which captures the magnitude by which a single individual’s data can change the output of f in the worst case. The technical definition of the Laplace distribution is beyond the scope of this article, but for our purposes, its important property is that the Yi can be sampled efficiently.
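As an illustrative sketch (ours, not a production implementation), the Laplace mechanism for a counting query, whose ℓ1 sensitivity is 1, can be written with NumPy's Laplace sampler as follows:

```python
import numpy as np

def laplace_mechanism(exact_value, sensitivity, epsilon, rng=None):
    """Return an epsilon-differentially private version of `exact_value`.

    `exact_value` may be a scalar or an array (a k-tuple of reals);
    `sensitivity` is the l1 sensitivity Delta(f) of the underlying function f.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon,
                        size=np.shape(exact_value))
    return exact_value + noise

# Example: privately release a count. Adding or removing one individual
# changes a count by at most 1, so Delta(f) = 1.
data = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1])
print(float(laplace_mechanism(data.sum(), sensitivity=1.0, epsilon=0.5)))
```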


Crucially, DP protects data contributors against privacy loss caused by post-processing computational results or by combining results with auxiliary information. The scenario in which privacy loss occurred when the output of an MPC protocol was combined with information from an existing news story could not occur in a DP application; moreover, no harm could be done by combining the result of a DP computation with auxiliary information in a future news story.

DP techniques also benefit from powerful composition theorems that allow separate differentially private algorithms to be combined in one application. In particular, the independent use of an ϵ1-differentially private algorithm and an ϵ2-differentially private algorithm, when taken together, is (ϵ1 + ϵ2)-differentially private.
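A minimal sketch of sequential composition (ours; the data and the budget split are illustrative): a total privacy budget of ϵ = 1.0 is split across two counting queries on the same dataset, and the combined release is (ϵ1 + ϵ2)-differentially private.

```python
import numpy as np

rng = np.random.default_rng()

def noisy_count(count, epsilon):
    """Laplace mechanism for a counting query (l1 sensitivity 1)."""
    return count + rng.laplace(scale=1.0 / epsilon)

ages = np.array([34, 29, 41, 57, 23, 38, 45, 31])   # illustrative data

# Split a total budget of 1.0 across two queries on the same dataset.
eps_1, eps_2 = 0.4, 0.6

over_40 = noisy_count((ages > 40).sum(), eps_1)    # eps_1-differentially private
under_30 = noisy_count((ages < 30).sum(), eps_2)   # eps_2-differentially private

# By sequential composition, releasing both answers together is
# (eps_1 + eps_2) = 1.0-differentially private.
print(round(float(over_40), 2), round(float(under_30), 2))
```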

One limitation on the applicability of DP is the need to add noise — something that may not be tolerable in some application scenarios. More fundamentally, the ℓ1 sensitivity of a function f, which yields an upper bound on the amount of noise that must be added to the output in order to achieve a given privacy parameter ϵ, also yields a lower bound. If the output of f is strongly influenced by the presence of a single outlier in the input, then it is impossible to achieve strong privacy and high accuracy simultaneously.

For example, consider the simple case in which f is the sum of all of the private inputs, and each input is an arbitrary positive integer. It is easy to see that the ℓ1 sensitivity is unbounded in this case; to hide the contribution or the participation of an individual whose data item strongly dominates those of all other individuals would require enough noise to render the output meaningless. If one can restrict all of the private inputs to a small interval [a,b], however, then the Laplace mechanism can provide meaningful privacy and accuracy.
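A minimal sketch of this bounded-input approach (ours; the clipping bounds and data are illustrative): each input is clipped to [a, b], and Laplace noise is calibrated to the resulting sensitivity b − a.

```python
import numpy as np

rng = np.random.default_rng()

def private_clipped_sum(values, lower, upper, epsilon):
    """Clip each input to [lower, upper], then release a Laplace-noised sum.

    After clipping, one individual's data can change the sum by at most
    (upper - lower), so that quantity is the l1 sensitivity of the clipped sum.
    """
    clipped = np.clip(values, lower, upper)
    noise = rng.laplace(scale=(upper - lower) / epsilon)
    return clipped.sum() + noise

donations = np.array([12, 5, 30, 8, 25_000])   # one extreme outlier
print(round(float(private_clipped_sum(donations, lower=0, upper=100, epsilon=0.5)), 2))
```

Note that clipping changes the function being computed: the outlier contributes only 100 to the clipped sum. That is exactly the trade-off described above: a bounded-sensitivity, somewhat biased answer instead of an exact answer that cannot be released with meaningful privacy.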

DP was originally designed to compute statistical aggregates while preserving the privacy of individual data subjects; in particular, it was designed with real-valued functions in mind. Since then, researchers have developed DP techniques for non-numerical computations. For example, the exponential mechanism can be used to solve selection problems, in which both input and output are of arbitrary type.


In specifying a selection problem, one must define a scoring function that maps input-output pairs to real numbers. For each input x, a solution y is better than a solution y′ if the score of (x,y) is greater than that of (x,y′). The exponential mechanism generally works well (i.e., achieves good privacy and good accuracy simultaneously) for selection problems (e.g., approval voting) that can be defined by scoring functions of low sensitivity but not for those (e.g., set intersection) in which the scoring function must have high sensitivity. In fact, there is no differentially private algorithm that works well for set intersection; by contrast, MPC for set intersection is a mature and practical technology that has seen real-world deployment.
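As a rough illustration (ours; the candidates and approval counts are made up), the exponential mechanism for an approval-voting-style selection samples a candidate with probability proportional to exp(ϵ ⋅ score / (2Δ)), where Δ is the sensitivity of the scoring function; for approval counts Δ = 1, since one voter changes any count by at most 1.

```python
import numpy as np

def exponential_mechanism(candidates, scores, sensitivity, epsilon, rng=None):
    """Select one candidate with probability proportional to
    exp(epsilon * score / (2 * sensitivity))."""
    rng = np.random.default_rng() if rng is None else rng
    scores = np.asarray(scores, dtype=float)
    # Subtracting the maximum score before exponentiating improves numerical
    # stability and does not change the selection probabilities.
    weights = np.exp(epsilon * (scores - scores.max()) / (2 * sensitivity))
    probs = weights / weights.sum()
    return rng.choice(candidates, p=probs)

options = ["A", "B", "C", "D"]
approvals = [120, 95, 118, 40]   # one voter changes any count by at most 1
print(exponential_mechanism(options, approvals, sensitivity=1.0, epsilon=0.5))
```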

Conclusion

In conclusion, both secure multiparty computation and differential privacy can be used to perform computations on sensitive data while preserving the privacy of those data. Important differences between the two approaches include

  • The nature of the privacy guarantee: Use of MPC to compute a function y = f(x1, x2, ..., xn) guarantees that the recipients of the result learn the output y and nothing more. For example, if there are exactly two input vectors that are mapped to y by f, the recipients of the output y gain no information about which of the two was the actual input to the MPC computation, regardless of the number of components in which these two input vectors differ or the magnitude of the differences. On the other hand, for any third input vector that does not map to y, the recipients learn with certainty that the real input to the MPC computation was not this third vector, even if it differs from one of the first two in only one component and only by a very small amount. By contrast, computing f with a DP algorithm guarantees that, for any two input vectors that differ in only one component, the (randomized!) results of the computation are approximately indistinguishable, regardless of whether the exact values of f on these two input vectors are equal, nearly equal, or extremely different. Straightforward use of composition yields a privacy guarantee for inputs that differ in c components at the expense of increasing the privacy parameter by a factor of c.
  • Typical use cases: DP techniques are most often used to compute aggregate properties of very large datasets, and typically, the identities of data contributors are not known. None of these conditions is typical of MPC use cases.
  • Exact vs. noisy answers: MPC can be used to compute exact answers for all functions f. DP requires the addition of noise. This is not a problem in many statistical computations, but even small amounts of noise may not be acceptable in some application scenarios. Moreover, if f is extremely sensitive to outliers in the input data, the amount of noise needed to achieve meaningful privacy may preclude meaningful accuracy.
  • Auxiliary information: Combining the result of a DP computation with auxiliary information cannot result in privacy loss. By contrast, any computational method (including MPC) that returns the exact value y of a function f runs the risk that a recipient of y might be able to infer something about the input data that is not implied by y alone, if y is combined with auxiliary information.

Finally, we would like to point out that, in some applications, it is possible to get the benefits of both MPC and DP. If the goal is to compute f, and g is a differentially private approximation of f that achieves good privacy and accuracy simultaneously, then one natural way to proceed is to use MPC to compute g. We expect to see both MPC and DP used to enhance data privacy in Amazon’s products and services.
