Computing on private data

Both secure multiparty computation and differential privacy protect the privacy of data used in computation, but each has advantages in different contexts.

Many of today’s most innovative computation-based products and solutions are fueled by data. Where those data are private, it is essential to protect them and to prevent the release of information about data subjects, owners, or users to the wrong parties. How can we perform useful computations on sensitive data while preserving privacy?

We will revisit two well-studied approaches to this challenge: secure multiparty computation (MPC) and differential privacy (DP). MPC and DP were invented to address different real-world problems and to achieve different technical goals. However, because they are both aimed at using private information without fully revealing it, they are often confused. To help draw a distinction between the two approaches, we will discuss the power and limitations of both and give typical scenarios in which each can be highly effective.

We are interested in scenarios in which multiple individuals (sometimes, society as a whole) can derive substantial utility from a computation on private data but, in order to preserve privacy, cannot simply share all of their data with each other or with an external party.

Secure multiparty computation

MPC methods allow a group of parties to collectively perform a computation that involves all of their private data while revealing only the result of the computation. More formally, an MPC protocol enables n parties, each of whom possesses a private dataset, to compute a function of the union of their datasets in such a way that the only information revealed by the computation is the output of the function. Common situations in which MPC can be used to protect private interests include

  • auctions: the winning bid amount should be made public, but no information about the losing bids should be revealed;
  • voting: the number of votes cast for each option should be made public but not the vote cast by any one individual;
  • machine learning inference: secure two-party computation enables a client to submit a query to a server that holds a proprietary model and receive a response, keeping the query private from the server and the model private from the client.

Note that the number n of participants can be quite small (e.g., two in the case of machine learning inference), moderate in size, or very large; the latter two size ranges both occur naturally in auctions and votes. Similarly, the participants may be known to each other (as they would be, for example, in a departmental faculty vote) or not (as, for example, in an online auction). MPC protocols mathematically guarantee the secrecy of input values but do not attempt to hide the identities of the participants; if anonymous participation is desired, it can be achieved by combining MPC with an anonymous-communication protocol.

Although MPC may seem like magic, it is implementable and even practical using cryptographic and distributed-computing techniques. For example, suppose that Alice, Bob, Carlos, and David are four engineers who want to compare their annual raises. Alice selects four random numbers that sum to her raise. She keeps one number to herself and gives each of the other three to one of the other engineers. Bob, Carlos, and David do the same with their own raises.

Four engineers wish to compute their average raise without revealing any one engineer's raise to the others. Each selects four numbers that sum to his or her raise and sends three of them to the other engineers. Each engineer then sums his or her four numbers (one kept private and three received from the others). The sum of all four engineers' sums equals the sum of all four raises.

After everyone has distributed the random numbers, each engineer adds up the numbers he or she is holding and sends the sum to the others. Each engineer adds up these four sums privately (i.e., on his or her local machine) and divides by four to get the average raise. Now they can all compare their raises to the team average.

 
|                | Amount  | Alice's share | Bob's share | Carlos's share | David's share | Sum of sums |
|----------------|---------|---------------|-------------|----------------|---------------|-------------|
| Alice's raise  | 3800    | **-1000**     | 2500        | 900            | 1400          |             |
| Bob's raise    | 2514    | 700           | **400**     | 650            | 764           |             |
| Carlos's raise | 2982    | 750           | -100        | **832**        | 1500          |             |
| David's raise  | 3390    | 1500          | 900         | -3000          | **3990**      |             |
| Sum            | 12686   | 1950          | 3700        | -618           | 7654          | 12686       |
| Average        | 3171.5  |               |             |                |               | 3171.5      |

Note that, because Alice (like Bob, Carlos, and David) kept part of her raise private (the bold numbers), no one else learned her actual raise. When she summed the numbers she was holding, the sum didn't correspond to anyone's raise. In fact, Carlos's sum was negative, because all that matters is that each engineer's four chosen numbers add up to his or her raise; the signs and magnitudes of the individual numbers are irrelevant.

Summing all of the engineers’ sums results in the same value as summing the raises directly, namely $12,686. If all of the engineers follow this protocol faithfully, dividing this value by four yields the team average raise of $3,171.50, which allows each person to compare his or her raise against the team average (locally and hence privately) without revealing any salary information.
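
For concreteness, here is a minimal Python sketch of the additive-secret-sharing arithmetic described above, using the same raise amounts as the table. It is a toy over the integers with an arbitrary share range; real MPC protocols typically share values over a finite field and must also address malicious behavior.

```python
import random

def make_shares(secret, n):
    """Split an integer secret into n additive shares that sum to the secret.
    (Toy version over the integers; real protocols share over a finite field.)"""
    shares = [random.randint(-5000, 5000) for _ in range(n - 1)]
    shares.append(secret - sum(shares))
    return shares

# Raise amounts from the example above.
raises = {"Alice": 3800, "Bob": 2514, "Carlos": 2982, "David": 3390}
names = list(raises)
n = len(names)

# Step 1: each engineer splits his or her raise into n shares,
# keeping one and giving one to each of the other engineers.
shares = {name: make_shares(amount, n) for name, amount in raises.items()}

# holdings[name] = the n shares that engineer ends up holding (one from each engineer, self included).
holdings = {names[i]: [shares[sender][i] for sender in names] for i in range(n)}

# Step 2: each engineer announces only the sum of the shares he or she holds.
announced = {name: sum(held) for name, held in holdings.items()}

# Step 3: anyone can add up the announced sums and divide by n.
total = sum(announced.values())
print("Average raise:", total / n)      # 3171.5
assert total == sum(raises.values())    # the sum of sums equals the sum of the raises
```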

A highly readable introduction to MPC that emphasizes practical protocols, some of which have been deployed in real-world scenarios, can be found in a monograph by Evans, Kolesnikov, and Rosulek. Examples of real-world applications that have been deployed include analysis of gender-based wage gaps in Boston-area companies, aggregate adoption of cybersecurity measures, and Covid exposure notification. Readers may also wish to read our previous blog post on this and related topics.

Differential privacy

Differential privacy (DP) is a body of statistical and algorithmic techniques for releasing an aggregate function of a dataset without revealing the mapping between data contributors and data items. As in MPC, we have n parties, each of whom possesses a data item. Either the parties themselves or, more often, an external agent wishes to compute an aggregate function of the parties’ input data.

If this computation is performed in a differentially private manner, then no information that could be inferred from the output about the ith input, xi, can be associated with the individual party Pi. Typically, the number n of participants is very large, the participants are not known to each other, and the goal is to compute a statistical property of the set {x1, …, xn} while protecting the privacy of individual data contributors {P1, …, Pn}.

In slightly more detail, we say that a randomized algorithm M preserves differential privacy with respect to an aggregation function f if it satisfies two properties. First, for every set of input values, the output of M closely approximates the value of f. Second, for every distinct pair (xi, xi') of possible values for the ith individual input, the distribution of M(x1, …, xi,…, xn) is approximately equivalent to the distribution of M(x1, …, xi′, …, xn). The maximum “distance” between the two distributions is characterized by a parameter, ϵ, called the privacy parameter, and M is called an ϵ-differentially private algorithm.
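
The "approximately equivalent" requirement is typically formalized as a multiplicative bound on output probabilities; a standard way to write it (our restatement, consistent with the description above) is:

```latex
% epsilon-differential privacy: for every index i, every pair of values x_i, x_i',
% and every set S of possible outputs,
\Pr\bigl[M(x_1,\dots,x_i,\dots,x_n)\in S\bigr]
  \;\le\; e^{\epsilon}\,\Pr\bigl[M(x_1,\dots,x_i',\dots,x_n)\in S\bigr].
```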

Note that the output of a differentially private algorithm is a random variable drawn from a distribution on the range of the function f. That is because DP computation requires randomization; in particular, it works by “adding noise.” All known DP techniques introduce a salient trade-off between the privacy parameter and the utility of the output of the computation. Smaller values of ϵ produce better privacy guarantees, but they require more noise and hence produce less-accurate outputs; larger values of ϵ yield worse privacy bounds, but they require less noise and hence deliver better accuracy.

For example, consider a poll, the goal of which is to predict who is going to win an election. The pollster and respondents are willing to sacrifice some accuracy in order to improve privacy. Suppose respondents P1, …, Pn have predictions x1, …, xn, respectively, where each xi is either 0 or 1. The poll is supposed to output a good estimate of p, which we use to denote the fraction of the parties who predict 1. The DP framework allows us to compute an accurate estimate and simultaneously to preserve each respondent’s “plausible deniability” about his or her true prediction by requiring each respondent to add noise before sending a response to the pollster.

We now provide a few more details of the polling example. Consider the algorithm m that takes as input a bit xi and flips a fair coin. If the coin comes up tails, then m outputs xi; otherwise m flips another fair coin and outputs 1 if heads and 0 if tails. This m is known as the randomized response mechanism; when the pollster asks Pi for a prediction, Pi responds with m(xi). Simple statistical calculation shows that, in the set of answers that the pollster receives from the respondents, the expected fraction that are 1’s is

Pr[First coin is tails] ⋅ p + Pr[First coin is heads] ⋅ Pr[Second coin is heads] = p/2 + 1/4.

Thus, the expected number of 1’s received is n(p/2 + 1/4). Let N = m(x1) + ⋅⋅⋅ + m(xn) denote the actual number of 1’s received; we approximate p by M(x1, …, xn) = 2N/n − 1/2. In fact, this approximation algorithm, M, is differentially private. Accuracy follows from the statistical calculation, and privacy follows from the “plausible deniability” provided by the fact that m outputs 1 with probability at least 1/4 regardless of the value of xi.
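
Here is a short Python simulation of the randomized-response mechanism m and the estimator M. The poll size and the 70% underlying prediction rate are made up for illustration.

```python
import random

def randomized_response(x):
    """Mechanism m: report the true bit x with probability 1/2,
    otherwise report a fresh fair coin flip."""
    if random.random() < 0.5:                   # first coin: tails -> answer truthfully
        return x
    return 1 if random.random() < 0.5 else 0    # second coin: heads -> 1, tails -> 0

def estimate_fraction(responses):
    """Estimator M: undo the bias of randomized response to approximate p."""
    n = len(responses)
    N = sum(responses)                          # number of 1's received
    return 2 * N / n - 0.5

# Simulated poll: 70% of respondents truly predict 1 (illustrative value).
true_predictions = [1 if random.random() < 0.7 else 0 for _ in range(100_000)]
noisy_responses = [randomized_response(x) for x in true_predictions]
print(estimate_fraction(noisy_responses))       # close to 0.7 on average
```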

Differential privacy has dominated the study of privacy-preserving statistical computation since it was introduced in 2006 and is widely regarded as a fundamental breakthrough in both theory and practice. An excellent overview of algorithmic techniques in DP can be found in a monograph by Dwork and Roth. DP has been applied in many real-world applications, most notably the 2020 US Census.

The power and limitations of MPC and DP

We now review some of the strengths and weaknesses of these two approaches and highlight some key differences between them.

Secure multiparty computation

MPC has been extensively studied for more than 40 years, and there are powerful, general results showing that it can be done for all functions f using a variety of cryptographic and coding-theoretic techniques, system models, and adversary models.

Despite the existence of fully general, secure protocols, MPC has seen limited real-world deployment. One obstacle is protocol complexity — particularly the communication complexity of the most powerful, general solutions. Much current work on MPC addresses this issue.

More-fundamental questions that must be answered before MPC can be applied in a given scenario include the nature of the function f being computed and the information environment in which the computation is taking place. In order to explain this point, we first note that the set of participants in the MPC computation is not necessarily the same as the set of parties that receive the result of the computation. The two sets may be identical, one may be a proper subset of the other, they may have some (but not all) elements in common, or they may be entirely disjoint.

Although a secure MPC protocol (provably!) reveals nothing to the recipients about the private inputs except what can be inferred from the result, even that may be too much. For example, if the result is the number of votes for and votes against a proposition in a referendum, and the referendum passes unanimously, then the recipients learn exactly how each participant voted. The referendum authority can avoid revealing private information by using a different f, e.g., one that is “YES” if the number of votes for the proposition is at least half the number of participants and “NO” if it is less than half.

This simple example demonstrates a pervasive trade-off in privacy-preserving computation: participants can compute a function that is more informative if they are willing to reveal private information to the recipients in edge cases; they can achieve more privacy in edge cases if they are willing to compute a less informative function.

In addition to specifying the function f carefully, users of MPC must evaluate the information environment in which MPC is to be deployed and, in particular, must avoid the catastrophic loss of privacy that can occur when the recipients combine the result of the computation with auxiliary information. For example, consider the scenario in which the participants are all of the companies in a given commercial sector and metropolitan area, and they wish to use MPC to compute the total dollar loss that they (collectively) experienced in a given year that was attributable to data breaches; in this example, the recipients of the result are the companies themselves.

Suppose further that, during that year, one of the companies suffered a severe breach that was covered in the local media, which identified the company by name and reported an approximate dollar figure for the loss that the company suffered as a result of the breach. If that approximate figure is very close to the total loss imposed by data breaches on all the companies that year, then the participants can conclude that all but one of them were barely affected by data breaches that year.

Note that this potentially sensitive information is not leaked by the MPC protocol, which reveals nothing but the aggregate amount lost (i.e., the value of the function f). Rather, it is inferred by combining the result of the computation with information that was already available to the participants before the computation was done. The same risk that input privacy will be destroyed when results are combined with auxiliary information is posed by any computational method that reveals the exact value of the function f.

Differential privacy

The DP framework provides some elegant, simple mechanisms that can be applied to any function f whose output is a vector of real numbers. Essentially, one can independently perturb or “noise up” each component of f(x) by an appropriately defined random value. The amount of noise that must be added in order to hide the contribution (or, indeed, the participation) of any single data subject is determined by the privacy parameter and the maximum amount by which a single input can change the output of f. We explain one such mechanism in slightly more mathematical detail in the following paragraph.

One can apply the Laplace mechanism with privacy parameter ϵ to a function f, whose outputs are k-tuples of real numbers, by returning the value f(x1, …, xn) + (Y1, …, Yk) on input (x1, …, xn), where the Yi are independent random variables drawn from the Laplace distribution with parameter Δ(f)/ϵ. Here Δ(f) denotes the ℓ1 sensitivity of the function f, which captures the magnitude by which a single individual’s data can change the output of f in the worst case. The technical definition of the Laplace distribution is beyond the scope of this article, but for our purposes, its important property is that the Yi can be sampled efficiently.
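
As a sketch of the Laplace mechanism in Python (the vector of counts and the parameter choices below are hypothetical, and numpy's sampler stands in for "the Yi can be sampled efficiently"):

```python
import numpy as np

def laplace_mechanism(f_value, sensitivity, epsilon, rng=None):
    """Return f(x) plus independent Laplace(sensitivity/epsilon) noise in each coordinate."""
    if rng is None:
        rng = np.random.default_rng()
    f_value = np.asarray(f_value, dtype=float)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=f_value.shape)
    return f_value + noise

# Toy example: a vector of counts. Replacing one person's data item can lower one
# count by 1 and raise another by 1, so we take the l1 sensitivity to be 2.
exact_counts = [120, 43, 77, 9]
print(laplace_mechanism(exact_counts, sensitivity=2.0, epsilon=0.5))
```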

Crucially, DP protects data contributors against privacy loss caused by post-processing computational results or by combining results with auxiliary information. The scenario in which privacy loss occurred when the output of an MPC protocol was combined with information from an existing news story could not occur in a DP application; moreover, no harm could be done by combining the result of a DP computation with auxiliary information in a future news story.

DP techniques also benefit from powerful composition theorems that allow separate differentially private algorithms to be combined in one application. In particular, the independent use of an ϵ1-differentially private algorithm and an ϵ2-differentially private algorithm, when taken together, is (ϵ1 + ϵ2)-differentially private.
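
Written out (a standard statement of basic composition, assuming M1 and M2 use independent randomness on the same input x):

```latex
% If M_1 is eps_1-DP and M_2 is eps_2-DP, then releasing both outputs is (eps_1 + eps_2)-DP:
M(x) = \bigl(M_1(x),\, M_2(x)\bigr)
\quad\text{is}\quad
(\epsilon_1 + \epsilon_2)\text{-differentially private.}
```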

One limitation on the applicability of DP is the need to add noise — something that may not be tolerable in some application scenarios. More fundamentally, the ℓ1 sensitivity of a function f, which yields an upper bound on the amount of noise that must be added to the output in order to achieve a given privacy parameter ϵ, also yields a lower bound. If the output of f is strongly influenced by the presence of a single outlier in the input, then it is impossible to achieve strong privacy and high accuracy simultaneously.

For example, consider the simple case in which f is the sum of all of the private inputs, and each input is an arbitrary positive integer. It is easy to see that the ℓ1 sensitivity is unbounded in this case; to hide the contribution or the participation of an individual whose data item strongly dominates those of all other individuals would require enough noise to render the output meaningless. If one can restrict all of the private inputs to a small interval [a,b], however, then the Laplace mechanism can provide meaningful privacy and accuracy.
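
A sketch of this clipping idea, under the assumption that inputs are forced into [a, b] before summing (the input values, interval, and ϵ below are illustrative):

```python
import numpy as np

def dp_sum(values, lower, upper, epsilon, rng=None):
    """Differentially private sum of inputs clipped to the interval [lower, upper].
    After clipping, replacing one individual's value changes the sum by at most
    (upper - lower), which serves as the l1 sensitivity."""
    if rng is None:
        rng = np.random.default_rng()
    clipped = np.clip(values, lower, upper)
    sensitivity = upper - lower
    return clipped.sum() + rng.laplace(scale=sensitivity / epsilon)

# Hypothetical inputs: most are small, one is a huge outlier.
inputs = np.array([3, 7, 2, 5, 4, 1_000_000])
print(dp_sum(inputs, lower=0, upper=10, epsilon=1.0))
# Clipping bounds the outlier's influence (and the noise), at the cost of biasing the sum.
```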

DP was originally designed to compute statistical aggregates while preserving the privacy of individual data subjects; in particular, it was designed with real-valued functions in mind. Since then, researchers have developed DP techniques for non-numerical computations. For example, the exponential mechanism can be used to solve selection problems, in which both input and output are of arbitrary type.

In specifying a selection problem, one must define a scoring function that maps input-output pairs to real numbers. For each input x, a solution y is better than a solution y′ if the score of (x,y) is greater than that of (x,y′). The exponential mechanism generally works well (i.e., achieves good privacy and good accuracy simultaneously) for selection problems (e.g., approval voting) that can be defined by scoring functions of low sensitivity but not for those (e.g., set intersection) in which the scoring function must have high sensitivity. In fact, there is no differentially private algorithm that works well for set intersection; by contrast, MPC for set intersection is a mature and practical technology that has seen real-world deployment.
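
Here is a hedged Python sketch of the exponential mechanism applied to a toy approval vote. The ballots, options, and scoring function are invented for illustration; the 2·Δ factor in the exponent is the standard calibration.

```python
import numpy as np

def exponential_mechanism(candidates, score, x, epsilon, sensitivity, rng=None):
    """Select one candidate output y with probability proportional to
    exp(epsilon * score(x, y) / (2 * sensitivity)), where `sensitivity` bounds how much
    one individual's data can change score(x, y) for any fixed y."""
    if rng is None:
        rng = np.random.default_rng()
    scores = np.array([score(x, y) for y in candidates], dtype=float)
    # Subtract the max before exponentiating for numerical stability.
    weights = np.exp(epsilon * (scores - scores.max()) / (2 * sensitivity))
    probs = weights / weights.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

def approvals(ballots, option):
    """Score of an option = number of ballots approving it (one voter changes this by at most 1)."""
    return sum(option in ballot for ballot in ballots)

# Toy approval vote: each ballot approves a subset of the options.
ballots = [{"A", "B"}, {"B"}, {"B", "C"}, {"A"}]
options = ["A", "B", "C"]
print(exponential_mechanism(options, approvals, ballots, epsilon=1.0, sensitivity=1))
```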

Conclusion

In conclusion, both secure multiparty computation and differential privacy can be used to perform computations on sensitive data while preserving the privacy of those data. Important differences between the two bodies of techniques include

  • The nature of the privacy guarantee: Use of MPC to compute a function y = f(x1, x2, ..., xn) guarantees that the recipients of the result learn the output y and nothing more. For example, if there are exactly two input vectors that are mapped to y by f, the recipients of the output y gain no information about which of the two was the actual input to the MPC computation, regardless of the number of components in which these two input vectors differ or the magnitude of the differences. On the other hand, for any third input vector that does not map to y, the recipients learn with certainty that the real input to the MPC computation was not this third vector, even if it differs from one of the first two in only one component and only by a very small amount. By contrast, computing f with a DP algorithm guarantees that, for any two input vectors that differ in only one component, the (randomized!) results of the computation are approximately indistinguishable, regardless of whether the exact values of f on these two input vectors are equal, nearly equal, or extremely different. Straightforward use of composition yields a privacy guarantee for inputs that differ in c components at the expense of increasing the privacy parameter by a factor of c.
  • Typical use cases: DP techniques are most often used to compute aggregate properties of very large datasets, and typically, the identities of data contributors are not known. None of these conditions is typical of MPC use cases.
  • Exact vs. noisy answers: MPC can be used to compute exact answers for all functions f. DP requires the addition of noise. This is not a problem in many statistical computations, but even small amounts of noise may not be acceptable in some application scenarios. Moreover, if f is extremely sensitive to outliers in the input data, the amount of noise needed to achieve meaningful privacy may preclude meaningful accuracy.
  • Auxiliary information: Combining the result of a DP computation with auxiliary information cannot result in privacy loss. By contrast, any computational method (including MPC) that returns the exact value y of a function f runs the risk that a recipient of y might be able to infer something about the input data that is not implied by y alone, if y is combined with auxiliary information.

Finally, we would like to point out that, in some applications, it is possible to get the benefits of both MPC and DP. If the goal is to compute f, and g is a differentially private approximation of f that achieves good privacy and accuracy simultaneously, then one natural way to proceed is to use MPC to compute g. We expect to see both MPC and DP used to enhance data privacy in Amazon’s products and services.
