Custom policy checks help democratize automated reasoning

New IAM Access Analyzer feature uses automated reasoning to ensure that access policies written in the IAM policy language don’t grant unintended access.

To control access to resources in the Amazon Web Services (AWS) Cloud, customers can author AWS Identity and Access Management (IAM) policies. The IAM policy language is expressive, allowing you to create fine-grained policies that control who can perform what actions on which resources. This control can be used to enforce the principle of least privilege, granting only the permissions required to perform a task.

But how can you verify that your IAM policies meet your security requirements? At AWS’s 2023 re:Invent conference, we announced the launch of IAM Access Analyzer custom policy checks, which help you benchmark policies against your security standards. Custom policy checks abstract away the task of converting policy statements into mathematical formulas, so customers can enjoy the benefits of automated reasoning without expertise in formal logic.

Figure: The role of IAM Access Analyzer custom policy checks in the development pipeline.

The IAM Access Analyzer CheckNoNewAccess API checks that you do not inadvertently add permissions to a policy when you update it. With the CheckAccessNotGranted API, you can specify critical permissions that developers should not grant in their IAM policies.

We built custom policy checks on an internal AWS service called Zelkova, which uses automated reasoning to analyze IAM policies. Previously, we used Zelkova to build preventative and detective managed controls, such as Amazon S3 Block Public Access and IAM Access Analyzer public and cross-account findings. Now, with the release of custom policy checks, you can set a security standard and prevent policies that do not meet this standard from being deployed.

How does Zelkova work?

Zelkova models the semantics of the IAM policy language by translating policies into precise mathematical expressions. It then uses automated reasoning engines called satisfiability modulo theories (SMT) solvers to check properties of the policies. Satisfiability (SAT) solvers check whether it is possible to assign true or false values to Boolean variables so that a set of constraints is satisfied; SMT generalizes SAT with additional theories, such as strings, integers, real numbers, and functions. The benefit of using SMT to analyze policies is that it is comprehensive. Unlike tools that simulate or evaluate a policy for a given request or a small set of requests, Zelkova can check properties of a policy for all possible requests.
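
To make the idea concrete, here is a minimal, illustrative SMT query written against the Python bindings of the open-source Z3 solver. This is not the solver or the encoding that Zelkova uses; it only shows how constraints over Booleans and richer types are posed and checked.

# Illustrative only: a tiny SMT query using the open-source Z3 solver.
from z3 import Bool, Int, Or, Not, Solver

p = Bool("p")          # a Boolean variable, as in plain SAT
x = Int("x")           # SMT adds richer sorts, such as integers

s = Solver()
s.add(Or(p, x > 10))   # constraint: p is true or x is greater than 10
s.add(Not(p))          # constraint: p is false
print(s.check())       # sat: a satisfying assignment exists
print(s.model())       # for example, p = False with some x greater than 10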

Consider the following Amazon S3 bucket policy:

{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": "*",
         "Action": ["s3:PutObject"],
         "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET"
      }
   ]
}

Zelkova translates this policy into the following formula:

(Action = “s3:PutObject”) 
∧ (Resource = “arn:aws:s3:::DOC-EXAMPLE-BUCKET”)

In this formula, "∧" is the mathematical symbol for “and”. Action and Resource are variables that represent values from any possible request. The formula is true only when a request is allowed by the policy. This precise mathematical representation of a policy is useful because it allows us to answer questions about the policy exhaustively. For example, we can ask if the policy allows public access, and we receive the answer that it does.
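
As an illustration only (Zelkova's real encoding of the IAM policy language is far richer), this simplified formula can be handed to an off-the-shelf SMT solver such as Z3 and queried for public access. The idea is to ask whether some allowed request can come from a principal outside our control; the concrete outside account below is hypothetical.

# Sketch of the public-access question using Z3's Python bindings.
# This mirrors only the simplified formula above, not Zelkova's real encoding.
from z3 import String, StringVal, And, Solver

action = String("Action")
resource = String("Resource")
principal = String("Principal")

# The policy as a formula: Principal is unconstrained because of "Principal": "*".
policy = And(action == StringVal("s3:PutObject"),
             resource == StringVal("arn:aws:s3:::DOC-EXAMPLE-BUCKET"))

# Is a request from some arbitrary outside principal allowed?
s = Solver()
s.add(policy, principal == StringVal("111122223333"))  # hypothetical outside account
print(s.check())   # sat: the request is allowed, so the policy grants public access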

For simple policies such as the preceding one, we could perform manual reviews to determine whether they allow public access: the "Principal": "*" in the policy’s statement means that anyone (the public) is allowed access. But manual review is error-prone and does not scale.

Alternatively, we could write simple syntactic checks for patterns such as "Principal": "*". However, these syntactic checks can miss the subtleties of policies and the interactions between different parts of a policy. Consider the following modification of the preceding policy, which adds a Deny statement with "NotPrincipal": "123456789012"; the policy still has the pattern "Principal": "*", but it no longer allows public access:

{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": "*",
         "Action": ["s3:PutObject"],
         "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET"
      },
      {
         "Effect": "Deny",
         "NotPrincipal": "123456789012",
         "Action": "*",
         "Resource": "*"
      }
   ]
}

With the mathematical representation of policy semantics in Zelkova, we can answer questions about access privileges precisely.

Answering questions with Zelkova

As an example, let’s consider a relatively simple question. With IAM policies, you can grant cross-account access to resources you want to share. For sensitive resources, you’d like to check that cross-account access is not possible.

Suppose we wanted to check whether the preceding policies allow anyone outside my account, 123456789012, to access my S3 bucket. Just as we translated the policy into a mathematical formula, we can translate the question we want to ask (or property we want to check) into a mathematical formula. To check whether all allowed accesses are from my account, we can translate the property to the following formula:

(Principal = 123456789012)

To show that the property holds for the policy, we can now try to prove that only requests with (Principal = 123456789012) are allowed by the policy. A common trick used in mathematics is to flip the question around: instead of trying to prove directly that the property holds, we search for requests that violate it, that is, requests that satisfy (Principal ≠ 123456789012). If no such counterexample exists, the property holds. To find such a counterexample, we look for assignments to the variables Principal, Action, and Resource such that the following is true:

(Action = “s3:PutObject”)
∧ (Resource = “arn:aws:s3:::DOC-EXAMPLE-BUCKET”)
∧ (Principal ≠ 123456789012)

Zelkova translates the policy and property into the preceding mathematical formula, and it efficiently searches for counterexamples using SMT solvers. For the preceding formula, the SMT solver can produce a counterexample showing that such access is indeed allowed by the policy (for example, with Principal = 111122223333).
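
A sketch of that counterexample search, again using Z3 purely for illustration, could look like the following.

# Search for a request that the policy allows but that violates the property.
from z3 import String, StringVal, Solver

action = String("Action")
resource = String("Resource")
principal = String("Principal")

s = Solver()
s.add(action == StringVal("s3:PutObject"))
s.add(resource == StringVal("arn:aws:s3:::DOC-EXAMPLE-BUCKET"))
s.add(principal != StringVal("123456789012"))   # the negated property
print(s.check())   # sat: a counterexample (a cross-account request) exists
print(s.model())   # some Principal value other than 123456789012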

For the modified policy with the Deny statement, the SMT solver can instead prove that the resulting formula has no solution, meaning that the policy allows no access from outside my account, 123456789012:

(Action = “s3:PutObject”) 
∧ (Resource = “arn:aws:s3:::DOC-EXAMPLE-BUCKET”) 
∧ (Principal = 123456789012) ∧ (Principal ≠ 123456789012)

The Deny statement in the policy with "NotPrincipal": "123456789012" is translated to the constraint (Principal = 123456789012). By inspecting the preceding formula, we can see that it can’t be satisfied: the constraints on Principal from the policy and from the property are contradictory. An SMT solver can prove this and more complicated formulas by exhaustively ruling out solutions.
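
Continuing the illustrative Z3 sketch, adding the constraint contributed by the Deny statement makes the formula unsatisfiable, which is exactly the proof that no cross-account access is possible.

# With the Deny statement's constraint added, no counterexample can exist.
from z3 import String, StringVal, Solver

action = String("Action")
resource = String("Resource")
principal = String("Principal")

s = Solver()
s.add(action == StringVal("s3:PutObject"))
s.add(resource == StringVal("arn:aws:s3:::DOC-EXAMPLE-BUCKET"))
s.add(principal == StringVal("123456789012"))   # from the Deny statement
s.add(principal != StringVal("123456789012"))   # the negated property
print(s.check())   # unsat: the property holds for every possible request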

Custom policy checks

To democratize access to Zelkova, we needed to abstract the construction of mathematical formulas behind a more accessible interface. To that end, we launched IAM Access Analyzer custom policy checks with two predefined checks: CheckNoNewAccess and CheckAccessNotGranted.

With CheckNoNewAccess, you can confirm that you do not accidentally add permissions to a policy when updating it. Developers often start with more-permissive policies and refine them over time toward least privilege. With CheckNoNewAccess, you can now compare two versions of a policy to confirm that the new version is not more permissive than the old version.

Suppose a developer updates the first example policy in this post to disallow cross-account access but at the same time also adds a new action:

{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Principal": "123456789012",
         "Action": [ 
            "s3:PutObject",
            "s3:DeleteBucket" 
         ],
         "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET"
      }
   ]
}

CheckNoNewAccess translates the two versions of the policy into formulas Pold and Pnew, respectively. It then searches for solutions to the formula (Pnew ∧ ¬Pold), which represent requests that are allowed by the new policy but not by the old policy (“¬” is the mathematical symbol for “not”). Because the new policy allows principals in account 123456789012 to perform an action (s3:DeleteBucket) that the old policy did not, the check fails, and a security engineer can review whether this policy change is acceptable.
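
In practice, you invoke the check through the IAM Access Analyzer API rather than constructing any formulas yourself. The following is a minimal sketch using the boto3 accessanalyzer client; the parameter names and result values reflect our reading of the API reference at launch, so please confirm them against the current documentation.

# Sketch: compare two versions of a resource policy with CheckNoNewAccess.
# Parameter names and result values are assumptions based on the API reference.
import json
import boto3

analyzer = boto3.client("accessanalyzer")

existing_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "*",
        "Action": ["s3:PutObject"],
        "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET",
    }],
}
updated_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "123456789012",
        "Action": ["s3:PutObject", "s3:DeleteBucket"],
        "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET",
    }],
}

response = analyzer.check_no_new_access(
    existingPolicyDocument=json.dumps(existing_policy),
    newPolicyDocument=json.dumps(updated_policy),
    policyType="RESOURCE_POLICY",
)
print(response["result"])   # expected FAIL: the new version grants access the old one did not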

With CheckAccessNotGranted, security engineers can be more prescriptive, specifying critical permissions to be checked against policy updates. Let’s say we want to ensure that developers are not granting permissions to delete an important bucket. In our previous example, CheckNoNewAccess detected this only because the permission was added in an update. With CheckAccessNotGranted, the security engineer can specify s3:DeleteBucket as a critical permission. We then translate the critical permissions into a formula such as (Action = “s3:DeleteBucket”) and search for requests with that action that are allowed by the policy. Because the preceding policy allows this action, the check fails, which prevents the permission from being deployed.
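
A corresponding sketch for CheckAccessNotGranted follows, with s3:DeleteBucket supplied as the critical permission. As above, the method, parameter names, and response fields are assumptions based on the published API and should be verified before use.

# Sketch: check a policy against a critical permission with CheckAccessNotGranted.
# Parameter names and result values are assumptions based on the API reference.
import json
import boto3

analyzer = boto3.client("accessanalyzer")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": "123456789012",
        "Action": ["s3:PutObject", "s3:DeleteBucket"],
        "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET",
    }],
}

response = analyzer.check_access_not_granted(
    policyDocument=json.dumps(policy),
    access=[{"actions": ["s3:DeleteBucket"]}],   # the critical permission
    policyType="RESOURCE_POLICY",
)
print(response["result"])   # expected FAIL: the policy grants the critical permission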

With the ability to specify critical permissions as parameters to the CheckAccessNotGranted API, you can now check policies against your own standards, not just against canned, broadly applicable checks.

Debugging failures

By removing the need for costly and time-consuming manual reviews, custom policy checks democratize policy analysis and help developers move faster. When policies pass the checks, developers can make updates with confidence. If policies fail the checks, IAM Access Analyzer provides additional information so that developers can debug and fix them.

Suppose a developer writes the following identity-based policy:

{
   "Version": "2012-10-17",
   "Statement": [
      {
         "Effect": "Allow",
         "Action": [
            "ec2:DescribeInstance*",
            "ec2:StartInstances", 
            "ec2:StopInstances" 
         ],
         "Resource": "arn:aws:ec2:*:*:instance/*"
      },
      {
         "Effect": "Allow",
         "Action": [ 
            "s3:GetObject*", 
            "s3:PutObject",
            "s3:DeleteBucket" 
         ],
         "Resource": "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
      }
   ]
}

Let’s also suppose that a security engineer has specified critical permissions that include s3:DeleteBucket. As described above, CheckAccessNotGranted fails on this policy.

For any given policy, it can sometimes be hard to understand why a check failed. To give developers more clarity, IAM Access Analyzer uses Zelkova to solve additional problems that pinpoint the failure to a specific statement in the policy. For the preceding policy, the check failed with the description "New access in the statement with index: 1". This description indicates that the second statement contains a critical permission.
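
When a check fails, the response also carries this statement-level detail. Below is a hedged sketch of reading it; the reasons field and its members (description, statementIndex) are assumptions based on the API reference, and identity_policy.json is a hypothetical file holding the identity policy shown above.

# Sketch: surface the statement behind a CheckAccessNotGranted failure.
# Response field names here are assumptions; verify against the API reference.
import boto3

analyzer = boto3.client("accessanalyzer")

with open("identity_policy.json") as f:   # hypothetical file with the policy above
    policy_document = f.read()

response = analyzer.check_access_not_granted(
    policyDocument=policy_document,
    access=[{"actions": ["s3:DeleteBucket"]}],
    policyType="IDENTITY_POLICY",
)

if response["result"] == "FAIL":
    for reason in response.get("reasons", []):
        # For the policy above: "New access in the statement with index: 1"
        print(f'{reason["description"]} (statement index: {reason.get("statementIndex")})')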

The key to democratizing automated reasoning is to make it simple to use and easy to specify properties. With additional custom checks, we will continue to enable our customers on their journey to least privilege.
