New AWS tool recommends removal of unused permissions

IAM Access Analyzer feature uses automated reasoning to recommend policies that remove unused access, helping customers achieve “least privilege”.

AWS Identity and Access Management (IAM) policies provide customers with fine-grained control over who has access to what resources in the Amazon Web Services (AWS) Cloud. This control helps customers enforce the principle of least privilege by granting only the permissions required to perform particular tasks. In practice, however, writing IAM policies that enforce least privilege requires customers to understand which permissions their applications actually need, which becomes challenging as applications grow in scale.

To help customers understand what permissions are not necessary, we launched IAM Access Analyzer unused access findings at the 2023 re:Invent conference. IAM Access Analyzer analyzes your AWS accounts to identify unused access and creates a centralized dashboard to report its findings. The findings highlight unused roles and unused access keys and passwords for IAM users. For active IAM roles and users, the findings provide visibility into unused services and actions.

To take this service a step further, in June 2024 we launched recommendations to refine unused permissions in Access Analyzer. This feature recommends a refinement of the customer’s original IAM policies that retains the policy structure while removing the unused permissions. The recommendations not only simplify removal of unused permissions but also help customers enact the principle of least privilege for fine-grained permissions.

In this post, we discuss how Access Analyzer policy recommendations suggest policy refinements based on unused permissions, which completes the circle from monitoring overly permissive policies to refining them.

Policy recommendation in practice

Let's dive into an example to see how policy recommendation works. Suppose you have the following IAM policy attached to an IAM role named MyRole:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "lambda:AddPermission",
        "lambda:GetFunctionConfiguration",
        "lambda:UpdateFunctionConfiguration",
        "lambda:UpdateFunctionCode",
        "lambda:CreateFunction",
        "lambda:DeleteFunction",
        "lambda:ListVersionsByFunction",
        "lambda:GetFunction",
        "lambda:Invoke*"
      ],
      "Resource": "arn:aws:lambda:us-east-1:123456789012:function:my-lambda"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:Get*",
        "s3:List*"
      ],
      "Resource": "*"
    }
  ]
}

The above policy has two policy statements:

  • The first statement allows actions on a function in AWS Lambda, an AWS offering that provides function execution as a service. The allowed actions are specified by listing individual actions as well as via the wildcard string lambda:Invoke*, which permits all actions starting with Invoke in AWS Lambda, such as lambda:InvokeFunction.
  • The second statement allows actions on any Amazon Simple Storage Service (S3) bucket. Actions are specified by two wildcard strings, which indicate that the statement allows actions starting with Get or List in Amazon S3.

Enabling Access Analyzer unused access findings provides you with a list of findings, each of which details the action-level unused permissions for a specific role. For example, for the role with the above policy attached, if Access Analyzer finds any AWS Lambda or Amazon S3 actions that are allowed but not used, it will report them as unused permissions.
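
If you prefer to work with findings programmatically, the sketch below shows one way to pull unused-permission findings with boto3. It is a minimal illustration, not an official example: it assumes an unused-access analyzer already exists, the analyzer ARN is a placeholder, and the operation and field names (list_findings_v2, get_finding_v2, findingDetails) reflect our reading of the Access Analyzer API, so check the current SDK documentation for the authoritative shapes.

import boto3

# Placeholder ARN for an existing unused-access analyzer in your account.
ANALYZER_ARN = (
    "arn:aws:access-analyzer:us-east-1:123456789012:"
    "analyzer/my-unused-access-analyzer"
)

client = boto3.client("accessanalyzer")

# Page through all findings reported by the analyzer.
findings, token = [], None
while True:
    kwargs = {"analyzerArn": ANALYZER_ARN}
    if token:
        kwargs["nextToken"] = token
    page = client.list_findings_v2(**kwargs)
    findings.extend(page["findings"])
    token = page.get("nextToken")
    if not token:
        break

# Print the action-level detail of each unused-permission finding.
for finding in findings:
    if finding.get("findingType") != "UnusedPermission":
        continue
    detail = client.get_finding_v2(analyzerArn=ANALYZER_ARN, id=finding["id"])
    for d in detail.get("findingDetails", []):
        # The field names here are our assumption about the response shape.
        print(d.get("unusedPermissionDetails", {}))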

The unused permissions are the actions that are allowed by the IAM policy but not used by the role. They are grouped by service namespace, a set of resources that are clustered together and walled off from other namespaces to improve security. Here is an example in JSON format showing the unused permissions found for MyRole with the policy we attached earlier:

[
 {
    "serviceNamespace": "lambda",
    "actions": [
      "UpdateFunctionCode",
      "GetFunction",
      "ListVersionsByFunction",
      "UpdateFunctionConfiguration",
      "CreateFunction",
      "DeleteFunction",
      "GetFunctionConfiguration",
      "AddPermission"
    ]
  },
  {
    "serviceNamespace": "s3",
    "actions": [
        "GetBucketLocation",
        "GetBucketWebsite",
        "GetBucketPolicyStatus",
        "GetAccelerateConfiguration",
        "GetBucketPolicy",
        "GetBucketRequestPayment",
        "GetReplicationConfiguration",
        "GetBucketLogging",
        "GetBucketObjectLockConfiguration",
        "GetBucketNotification",
        "GetLifecycleConfiguration",
        "GetAnalyticsConfiguration",
        "GetBucketCORS",
        "GetInventoryConfiguration",
        "GetBucketPublicAccessBlock",
        "GetEncryptionConfiguration",
        "GetBucketAcl",
        "GetBucketVersioning",
        "GetBucketOwnershipControls",
        "GetBucketTagging",
        "GetIntelligentTieringConfiguration",
        "GetMetricsConfiguration"
    ]
  }
]

This example shows actions that are not used in AWS Lambda and Amazon S3 but are allowed by the policy we specified earlier.

How could you refine the original policy to remove the unused permissions and achieve least privilege? One option is manual analysis. You might imagine the following process:

  • Find the statements that allow unused permissions;
  • Remove individual actions from those statements by referencing unused permissions.

This process, however, can be error prone when dealing with large policies and long lists of unused permissions. Moreover, when there are wildcard strings in a policy, removing unused permissions from them requires careful investigation of which actions should replace the wildcard strings.
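
To make the difficulty concrete, here is a small, purely illustrative sketch of that manual process in Python. The policy statement and the unused-permission lists are hard-coded, abbreviated versions of the examples above, and the sketch deliberately leaves wildcard actions untouched, because deciding what should replace them is exactly the hard part.

# Abbreviated unused permissions, keyed by service namespace (illustrative only).
unused = {
    "lambda": {"AddPermission", "GetFunction", "UpdateFunctionCode"},
    "s3": {"GetBucketPolicy", "GetBucketAcl"},
}

def is_unused(action: str) -> bool:
    service, _, name = action.partition(":")
    return name in unused.get(service, set())

def refine_statement(statement: dict) -> dict:
    """Drop individually listed unused actions; leave wildcards for manual review."""
    kept = []
    for action in statement["Action"]:
        if "*" in action:
            # A wildcard may cover both used and unused actions, so it cannot
            # simply be kept or dropped without further investigation.
            kept.append(action)
        elif not is_unused(action):
            kept.append(action)
    return {**statement, "Action": kept}

statement = {
    "Effect": "Allow",
    "Action": ["lambda:AddPermission", "lambda:GetFunction", "lambda:Invoke*"],
    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:my-lambda",
}
print(refine_statement(statement))
# Only "lambda:Invoke*" survives; a statement like the s3:Get* one above would
# still need careful manual analysis.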

Policy recommendation does this refinement automatically for customers!

The policy below is the one that Access Analyzer recommends after removing the unused actions from the policy above (the listing also shows the differences between the original and revised policies, with removed lines marked "-" and added lines marked "+"):

{
  "Version": "2012-10-17",
  "Statement" : [
   {
      "Effect" : "Allow",
      "Action" : [
-       "lambda:AddPermission",
-       "lambda:GetFunctionConfiguration",
-       "lambda:UpdateFunctionConfiguration",
-       "lambda:UpdateFunctionCode",
-       "lambda:CreateFunction",
-       "lambda:DeleteFunction",
-       "lambda:ListVersionsByFunction",
-       "lambda:GetFunction",
        "lambda:Invoke*"
      ],
      "Resource" : "arn:aws:lambda:us-east-1:123456789012:function:my-lambda"
    },
    {
     "Effect" : "Allow",
     "Action" : [
-      "s3:Get*",
+      "s3:GetAccess*",
+      "s3:GetAccountPublicAccessBlock",
+      "s3:GetDataAccess",
+      "s3:GetJobTagging",
+      "s3:GetMulti*",
+      "s3:GetObject*",
+      "s3:GetStorage*",
       "s3:List*"
     ],
     "Resource" : "*"
   }
  ]
}

Let’s take a look at what’s changed for each policy statement.

For the first statement, policy recommendation removes all individually listed actions (e.g., lambda:AddPermission), since they appear in unused permissions. Because none of the unused permissions starts with lambda:Invoke, the recommendation leaves lambda:Invoke* untouched.

For the second statement, let’s focus on what happens to the wildcard s3:Get*, which appears in the original policy. There are many actions that start with s3:Get, but only some of them appear in the unused permissions. Therefore, s3:Get* cannot simply be removed from the policy. Instead, the recommended policy replaces s3:Get* with seven entries that together cover the actions starting with s3:Get that are not reported as unused.

Some of these actions (e.g., s3:GetJobTagging) are individual ones, whereas others contain wildcards (e.g., s3:GetAccess* and s3:GetObject*). One way to manually replace s3:Get* in the revised policy would be to list all the actions that start with s3:Get except for the unused ones. However, this would result in an unwieldy policy, given that there are more than 50 actions starting with s3:Get.

Instead, policy recommendation identifies ways to use wildcards to collapse multiple actions, outputting actions such as s3:GetAccess* or s3:GetMulti*. Thanks to these wildcards, the recommended policy is succinct but still permits all the actions starting with s3:Get that are not reported as unused.
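
One way to convince yourself that a recommendation like this behaves as described is a quick pattern check: none of the reported unused actions should match the recommended entries, and the remaining s3:Get actions should still match. The sketch below does this with Python's fnmatch; the action lists are abbreviated and chosen for illustration (note that real IAM matching is case-insensitive, which we ignore here).

from fnmatch import fnmatchcase

# Recommended replacement for "s3:Get*" (from the recommended policy above).
recommended = [
    "s3:GetAccess*", "s3:GetAccountPublicAccessBlock", "s3:GetDataAccess",
    "s3:GetJobTagging", "s3:GetMulti*", "s3:GetObject*", "s3:GetStorage*",
]

# Abbreviated samples of unused and still-allowed actions.
unused = ["s3:GetBucketPolicy", "s3:GetBucketAcl", "s3:GetBucketLocation"]
still_used = ["s3:GetObject", "s3:GetObjectTagging", "s3:GetJobTagging"]

def allowed(action, patterns):
    return any(fnmatchcase(action, p) for p in patterns)

assert not any(allowed(a, recommended) for a in unused)   # unused actions are excluded
assert all(allowed(a, recommended) for a in still_used)   # used actions are still covered
print("recommended entries exclude unused actions and keep the rest")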

How do we decide where to place a wildcard in the newly generated wildcard actions? In the next section, we will dive deep on how policy recommendation generalizes actions with wildcards to allow only those actions that do not appear in unused permissions.

A deep dive into how actions are generalized

Policy recommendation is guided by the mathematical principle of “least general generalization”: finding the least permissive refinement of the original policy that still allows every action granted by the original policy except those reported as unused. This theorem-backed approach guarantees that the recommended policy allows all and only the permissions granted by the original policy that are not reported as unused.
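
Stated a bit more formally (with notation introduced here rather than taken from the paper), if P is the original policy, U the set of actions reported as unused, and R the recommended policy, the guarantee is:

\[
  \mathrm{allows}(R) \;=\; \mathrm{allows}(P) \setminus U
\]

Among the refinements of P that satisfy this equation, the recommendation is the least general one, in the sense that its wildcard actions are the shortest prefixes that avoid U, as described below.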

To implement the least-general generalization for unused permissions, we construct a data structure known as a trie, a tree in which each node represents the sequence of tokens on the path from the root to that node. In our case, the nodes represent prefixes shared among actions, with a special marker for actions reported in unused permissions. By traversing the trie, we find the shortest prefixes that do not cover any unused actions.

The diagram below shows a simplified trie delineating actions that can replace the s3:Get* wildcard from the original policy (we have omitted some actions for clarity):

[Figure: a trie delineating actions that can replace the Get* wildcard in an IAM policy. Nodes containing unused actions are depicted in orange; the remaining nodes are in green.]

At a high level, the trie represents prefixes that are shared by some of the possible actions starting with s3:Get. Its root node represents the prefix Get; child nodes of the root append their prefixes to Get. For example, the node named Multi represents all actions that start with GetMulti.

We say that a node is safe (denoted in green in the diagram) if none of the unused actions starts with the prefix corresponding to that node; otherwise, it is unsafe (denoted in orange). For example, the node Bucket (representing the prefix GetBucket) is unsafe because the action s3:GetBucketPolicy is unused. Similarly, the node Access is safe, since there are no unused permissions that start with GetAccess.

We want our final policies to contain wildcard actions that correspond only to safe nodes, and we want to include enough safe nodes to permit all used actions. We achieve this by selecting the nodes that correspond to the shortest safe prefixes—i.e., nodes that are themselves safe but whose parents are not. As a result, the recommended policy replaces s3:Get* with the shortest prefixes that do not contain unused permissions, such as s3:GetAccess*, s3:GetMulti* and s3:GetJobTagging.
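
The sketch below is our own reconstruction of this idea, not Access Analyzer's production algorithm. It builds a trie over CamelCase tokens of the candidate actions (how the real tool chooses its prefixes is an implementation detail we are approximating), marks every node whose prefix is shared with an unused action as unsafe, and then emits the shallowest safe nodes, using an exact action name when only a single action remains under a node. The action lists are abbreviated.

import re

class Node:
    def __init__(self, prefix):
        self.prefix = prefix     # the action-name prefix this node represents
        self.children = {}       # next CamelCase token -> child node
        self.actions = []        # candidate actions that start with this prefix
        self.safe = True         # True while no unused action shares this prefix

def camel_tokens(action):
    # "GetJobTagging" -> ["Get", "Job", "Tagging"]
    return re.findall(r"[A-Z][a-z0-9]*", action)

def build_trie(candidates, unused):
    root = Node("")
    for action in candidates:
        path, node = [root], root
        for tok in camel_tokens(action):
            node = node.children.setdefault(tok, Node(node.prefix + tok))
            path.append(node)
        for node in path:
            node.actions.append(action)
            if action in unused:
                node.safe = False
    return root

def shortest_safe_prefixes(node):
    # Emit the shallowest safe nodes; unsafe subtrees are explored further, and
    # unsafe leaves (purely unused actions) are dropped entirely.
    if node.safe:
        if len(node.actions) == 1:
            return [node.actions[0]]       # a single action needs no wildcard
        return [node.prefix + "*"]         # several actions share this safe prefix
    results = []
    for child in node.children.values():
        results.extend(shortest_safe_prefixes(child))
    return results

# Abbreviated example in the spirit of the s3:Get* case above.
candidates = [
    "GetObject", "GetObjectAcl", "GetJobTagging",
    "GetAccessPoint", "GetAccessGrant", "GetBucketAcl", "GetBucketPolicy",
]
unused = {"GetBucketAcl", "GetBucketPolicy"}

print(["s3:" + p for p in sorted(shortest_safe_prefixes(build_trie(candidates, unused)))])
# ['s3:GetAccess*', 's3:GetJobTagging', 's3:GetObject*']

Run against the full list of s3:Get actions and the unused permissions reported above, this kind of procedure should yield patterns similar to those in the recommended policy, such as s3:GetMulti* and s3:GetStorage*.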

Together, the shortest safe prefixes form a new policy that, while syntactically similar to the original policy, is the least-general generalization to result from removing the unused actions. In other words, we have not removed more actions than necessary.

You can get started with policy recommendations for unused access in Access Analyzer today. To learn more about the theoretical foundations powering policy recommendation, be sure to check out our science paper.
