How we built Cedar with automated reasoning and differential testing

The new development process behind Amazon Web Services’ Cedar authorization-policy language.

Cedar is a new authorization-policy language used by the Amazon Verified Permissions and AWS Verified Access managed services, and we recently released it publicly. Using Cedar, developers can write policies that specify fine-grained permissions for their applications. The applications then authorize access requests by calling Cedar’s authorization engine. Because Cedar policies are separate from application code, they can be independently authored, updated, analyzed, and audited. 

We want to assure developers that Cedar’s authorization decisions will be correct. To provide that assurance, we follow a two-part process we call verification-guided development when we’re working on Cedar. First, we use automated reasoning to prove important correctness properties about formal models of Cedar’s components. Second, we use differential random testing to show that the models match the production code. In this blog post we present an overview of verification-guided development for Cedar.

A primer on Cedar

Cedar is a language for writing and enforcing authorization policies for custom applications. Cedar policies are expressed in a syntax resembling natural language. They define who (the principal) can do what (the action) on which target (the resource) under what conditions (when).

To see how Cedar works, consider a simple application, TinyTodo, designed for managing task lists. TinyTodo uses Cedar to control who can do what. Here is one of TinyTodo’s policies:

// policy 1
permit(principal, action, resource)
when {
	resource has owner && resource.owner == principal
};

This policy states that any principal (a TinyTodo User) can perform any action on any resource (a TinyTodo List) as long as the resource’s creator, defined by its owner attribute, matches the requesting principal. Here’s another TinyTodo Cedar policy:

// policy 2
permit (
	principal,
	action == Action::"GetList",
	resource
)
when {
	principal in resource.editors || principal in resource.readers
};

This policy states that any principal can read the contents of a task list (Action::"GetList") if that principal is in either the list’s readers group or its editors group. Here is a third policy:

// policy 3
forbid (
	principal in Team::"interns",
	action == Action::"CreateList",
	resource == Application::"TinyTodo"
);

This policy states that any principal who is an intern (in Team::"interns") is forbidden from creating a new task list (Action::"CreateList") using TinyTodo (Application::"TinyTodo").

When the application needs to enforce access, as when a user of TinyTodo issues a command, it only needs to make a corresponding request to the Cedar authorization engine. The authorization engine evaluates the request in light of the Cedar policies and relevant application data. If it returns decision Allow, TinyTodo can proceed with the command. If it returns decision Deny, TinyTodo can report that the command is not permitted.

How do we build Cedar to be trustworthy?

Our work on Cedar uses a process we call verification-guided development to ensure that Cedar’s authorization engine makes the correct decisions. The process has two parts. First, we model Cedar’s authorization engine and validator in Dafny, a verification-aware programming language. With Dafny, you can both write code and specify properties describing what that code is meant to do under all circumstances. Using Dafny’s built-in automated-reasoning capabilities, we have proved that the code satisfies a variety of safety and security properties.

Second, we use differential random testing (DRT) to confirm that Cedar’s production implementation, written in Rust, matches the Dafny model’s behavior. We generate millions of diverse inputs and feed them to both the Dafny model and the production code. If both versions always produce the same output, we have a high degree of confidence that the implementation matches the model.

[Figure: Building Cedar using automated reasoning and differential testing.]

Proving properties about Cedar authorization

Cedar’s authorization algorithm was designed to be secure by default, as exemplified by the following two properties:

  • explicit permit — permission is granted only by individual permit policies and is not gained by error or default;
  • forbid overrides permit — any applicable forbid policy always denies access, even if there is a permit policy that allows it.

With these properties, sets of policies are easier to understand. Policy authors know that permit policies are the only way access is granted, and forbid policies decline access regardless of whether it is explicitly permitted.

Given an authorization request, the Cedar authorization engine takes each Cedar policy and evaluates it after substituting the application request parameters into the principal, action, and resource variables. For example, for the request principal=User::"Alice", action=Action::"GetList", and resource=List::"AliceList", substituting for the variables in policy 1 would produce the expression List::"AliceList" has owner && List::"AliceList".owner == User::"Alice". If this expression evaluates to true, we say the request satisfies the policy. The authorization engine collects the satisfied forbid and permit policies into distinct sets and then makes its decision.
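
To make the Dafny snippets that follow easier to read, here is a minimal sketch of the response types they build on. The names mirror the snippets below, but this is an illustration rather than the exact Cedar Dafny model:

// Illustrative sketch, simplified from the Cedar Dafny model
type PolicyID = string // hypothetical; stands in for the model's policy identifiers
datatype Decision = Allow | Deny
// A response pairs the decision with the set of policies that determined it.
// In the snippets below, forbids() and permits() compute the sets of
// satisfied forbid and permit policies, respectively.
datatype Response = Response(decision: Decision, policies: set<PolicyID>)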

We model the authorization engine as a Dafny function and use Dafny’s automated-reasoning capabilities to state and prove the explicit-permit and forbid-overrides-permit properties. To see how this helps uncover mistakes, let’s consider a buggy version of the authorization engine:

function method isAuthorized(): Response { // BUGGY VERSION
	var f := forbids();
	var p := permits();
	if f != {} then
		Response(Deny, f)
	else
		Response(Allow, p)
}

The logic states that if any forbid policy is applicable (set f is not the empty set {}), the result should be Deny, thus overriding any applicable permit policies (in set p). Otherwise, the result is Allow. While this logic correctly reflects the desired forbid-overrides-permit property, it does not correctly capture explicit permit. Just because there are no applicable forbid policies doesn’t mean there are any applicable permit policies. We can see this by specifying and attempting to prove explicit permit in Dafny:

// A request is explicitly permitted when a permit policy is satisfied
predicate IsExplicitlyPermitted(request: Request, store: Store) {
	exists p ::
		p in store.policies.policies.Keys &&
		store.policies.policies[p].effect == Permit &&
		Authorizer(request, store).satisfied(p)
}
lemma AllowedIfExplicitlyPermitted(request: Request, store: Store)
ensures // A request is allowed if it is explicitly permitted
	(Authorizer(request, store).isAuthorized().decision == Allow) ==>
	IsExplicitlyPermitted(request, store)
{ ... }

A Dafny predicate is a function that takes arguments and returns a logical condition, and a Dafny lemma is a property to be proved. The IsExplicitlyPermitted predicate defines the condition that there is an applicable permit policy for the given request. The AllowedIfExplicitlyPermitted lemma states that a decision of Allow necessarily means the request was explicitly permitted. This lemma does not hold for the isAuthorized definition above; Dafny complains that “A postcondition might not hold on this return path” and points to the ensures clause.

Here is the corrected code:

function method isAuthorized(): Response {
	var f := forbids();
	var p := permits();
	if f == {} && p != {} then
		Response(Allow, p)
	else
		Response(Deny, f)
}

Now a response is Allow only if there are no applicable forbid policies and there is at least one applicable permit policy. With this change, Dafny automatically proves AllowedIfExplicitlyPermitted. It also proves forbid overrides permit (that lemma is not shown here).
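
As an illustration, a forbid-overrides-permit lemma in the same style as AllowedIfExplicitlyPermitted might look like the following sketch; the names here are ours and may differ from the actual Cedar Dafny code:

// Sketch in the style of the lemma above; names are illustrative, not the exact Cedar lemma
predicate IsExplicitlyForbidden(request: Request, store: Store) {
	exists f ::
		f in store.policies.policies.Keys &&
		store.policies.policies[f].effect == Forbid &&
		Authorizer(request, store).satisfied(f)
}
lemma ForbidOverridesPermit(request: Request, store: Store)
ensures // A request is denied whenever any forbid policy is satisfied
	IsExplicitlyForbidden(request, store) ==>
	(Authorizer(request, store).isAuthorized().decision == Deny)
{ ... }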

We have used the Cedar Dafny models to prove a variety of properties. Our most significant proof is that the Cedar validator, which confirms that Cedar policies are consistent with the application’s data model, is sound: if the validator accepts a policy, evaluating the policy should never result in certain classes of error. When carrying out this proof in Dafny, we found a number of subtle bugs in the validator’s design that we were able to correct.
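
To give a flavor of how such a soundness claim is stated, here is a sketch of a lemma along those lines; the names, signatures, and error classes are illustrative rather than the ones in the Cedar Dafny code:

// Illustrative sketch of a validator-soundness statement; all names are hypothetical
lemma ValidationIsSound(policy: Policy, schema: Schema, request: Request, store: Store)
requires Validator(schema).accepts(policy) // the validator accepted the policy
requires consistentWith(request, store, schema) // the request and data conform to the schema
ensures // evaluating the validated policy never produces a type error
	!Evaluator(request, store).interpret(policy).TypeError?
{ ... }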

We note that Dafny models are useful not just for automated reasoning but for manual reasoning, too. The Dafny code is much easier to read than the Rust implementation. As one measure of this, at the time of this writing the Dafny model for the authorizer has about one-sixth as many lines of code as the production code. Both Cedar users and tool implementers can refer to the Dafny models to quickly understand precise details about how Cedar works.

Differential random testing

Once we have proved properties about the Cedar Dafny model, we also want evidence that those properties hold for the production code. We get that evidence by using DRT to show that the model and the production code behave the same. Using the cargo fuzz random-testing framework, we generate millions of inputs (access requests, accompanying data, and policies) and send them to both the Dafny model engine and the Rust production engine. If the two versions agree on the decision, all is well. If they disagree, we have found a bug.

The main challenge in using DRT effectively is ensuring adequate code coverage by generating useful, diverse inputs. Randomly generated policies are unlikely to mention the same groups and attributes that appear in randomly generated requests and data. As a result, pure random generation misses a lot of core evaluation logic and over-indexes on error-handling code. To resolve this, we wrote several input generators, including ones that take care to produce policies, data, and requests that are consistent with one another, while also exercising Cedar’s key language constructs. As of this writing, we run DRT for six hours nightly, executing on the order of 100 million tests in total.

DRT has been an important tool in our toolkit during Cedar’s development, uncovering corner cases where the model and the production code disagreed. For example, the Dafny model exposed a bug in how a Rust package we use for IP-address operations parses IP addresses. Because the bug is in an external package, we worked around it in our own code while we wait for the upstream fix. We also found subtle bugs in the Cedar policy parser, in how the authorizer handles missing application data, and in how namespace prefixes on application data (e.g., TinyTodo::List::"AliceList") are interpreted.

Learn more

In this post we have discussed the verification-guided development process we have followed for the Cedar authorization policy language. In this process, we model Cedar language components in the Dafny programming language and use Dafny’s automated-reasoning capabilities to prove properties about them. We check that the Cedar production code matches the Dafny model through differential random testing. This process has revealed several interesting bugs during development and has given us greater confidence that Cedar’s authorization engine makes correct decisions.

To learn more, check out the Cedar Dafny models and differential-testing code on GitHub. You can also read more about Dafny on the Dafny website and about Cedar on the Cedar website.
