Demystifying AI agents

Amazon vice president and distinguished engineer Marc Brooker explains how agentic systems work under the hood — and how AWS’s new AgentCore framework implements their essential components.

Agents are the trendiest topic in AI today, and with good reason. AI agents act on their users’ behalf, autonomously doing things like making online purchases, building software, researching business trends, or booking travel. By taking generative AI out of the sandbox of the chat interface and allowing it to act directly on the world, agentic AI represents a leap forward in the power and utility of AI.

Agentic AI has been moving really fast: for example, one of the core building blocks of today’s agents, the Model Context Protocol (MCP), is only a year old! As in any fast-moving field, there are many competing definitions, hot takes, and misleading opinions.

To cut through the noise, I’d like to describe the core components of an agentic AI system and how they fit together. Hopefully, when you’ve finished reading this post, agents won’t seem as mysterious. You’ll also understand why we made the choices we did in designing Amazon Web Services’ Bedrock AgentCore, a set of services and tools that lets customers quickly and easily design and build their own agentic AI systems.

The agentic ecosystem

Definitions of the word “agent” abound, but I like a slight variation on the British programmer Simon Willison’s minimalist take: An agent runs models and tools in a loop to achieve a goal.

The user prompts an AI model (typically a large language model, or LLM) with the goal to be attained — say, booking a table at a restaurant near the theater where a movie is playing. Along with the goal, the model receives a list of the tools available to it, such as a database of restaurant locations or a record of the user’s food preferences. The model then plans how to achieve the goal and takes a first step by calling one of the tools. The tool provides a response, and based on that, the model calls a new tool. Through repetitions of this process, the agent ratchets toward accomplishment of the goal. In some cases, the model’s orchestration and planning choices are complemented or enhanced by combining them with imperative code.
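
To make that loop concrete, here is a minimal sketch in Python. The model client and the two tools are hypothetical stand-ins (a real agent would call an actual LLM and real services), but the structure is the whole idea: pass the goal and the tool list to the model, execute whatever tool it picks, feed the result back in, and repeat.

```python
# A minimal sketch of "models and tools in a loop". The model client is a
# canned stub standing in for a real LLM; the tools are hypothetical examples.

def find_restaurants(near: str) -> list[str]:
    """Hypothetical tool: look up restaurants near a location."""
    return ["Pizzeria Uno", "Saffron Indian Kitchen"]

def book_table(restaurant: str, time: str) -> str:
    """Hypothetical tool: book a table and return a confirmation."""
    return f"Booked {restaurant} at {time}"

TOOLS = {"find_restaurants": find_restaurants, "book_table": book_table}

def call_model(messages, tool_names):
    """Stand-in for an LLM call: returns either a tool call or a final answer."""
    tool_results = [m for m in messages if m["role"] == "tool"]
    if not tool_results:
        return {"type": "tool_call", "tool": "find_restaurants",
                "arguments": {"near": "the theater"}}
    if len(tool_results) == 1:
        return {"type": "tool_call", "tool": "book_table",
                "arguments": {"restaurant": "Pizzeria Uno", "time": "7pm"}}
    return {"type": "final_answer", "content": tool_results[-1]["content"]}

def run_agent(goal: str, max_steps: int = 10) -> str:
    # The conversation starts with the goal; the model also sees the tool list.
    messages = [{"role": "user", "content": goal}]
    for _ in range(max_steps):
        decision = call_model(messages, list(TOOLS))
        if decision["type"] == "final_answer":
            return decision["content"]
        # Execute the requested tool and feed the observation back to the model.
        result = TOOLS[decision["tool"]](**decision["arguments"])
        messages.append({"role": "tool", "name": decision["tool"], "content": str(result)})
    return "Stopped: step budget exhausted"

print(run_agent("Book a table near the theater before the 8pm showing"))
```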

That seems simple enough. But what kind of infrastructure does it take to realize this approach? An agentic system needs a few core components:

  1. A way to build the agent. When you deploy an agent, you don’t want to have to code it from scratch. There are several agent development frameworks out there, but I’m partial to Amazon Web Services’ own Strands Agents.
  2. Somewhere to run the AI model. A seasoned AI developer can download and host an open-weight LLM, but doing that well takes expertise, and it requires expensive hardware that the average user will leave poorly utilized.
  3. Somewhere to run the agentic code. With frameworks like Strands, the user creates code for an agent object with a defined set of functions. Most of those functions involve sending prompts to an AI model, but the code needs to run somewhere. In practice, most agents will run in the cloud, because we want them to keep running when our laptops are closed, and we want them to scale up and out to do their work.
  4. A mechanism for translating between the text-based LLM and tool calls.
  5. A short-term memory for tracking the content of agentic interactions.
  6. A long-term memory for tracking the user’s preferences and affinities across sessions.
  7. A way to trace the system’s execution, to evaluate the agent’s performance.

In what follows I’ll go into more detail about each of these components and explain how AgentCore implements them.

Building an agent

It’s well known that asking an LLM to explain how it plans to approach a task improves its performance on that task. Such “chain-of-thought reasoning” is now ubiquitous in AI.

The analogue in agentic systems is the ReAct (reasoning + action) model, in which the agent has a thought (“I’ll use the map function to locate nearby restaurants”), performs an action (issuing an API call to the map function), and then makes an observation (“There are two pizza places and one Indian restaurant within two blocks of the movie theater”).

ReAct isn’t the only way to build agents, but it is at the core of most successful agentic systems. Today, agents are commonly loops over the thought-action-observation sequence.
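
Here is a small sketch of what one thought-action-observation turn can look like in practice, using the plain-text Thought/Action/Observation convention from the ReAct paper. The model output, tool name, and parsing format are illustrative, not any particular framework's wire format.

```python
import re

# One turn of a ReAct-style exchange: the model emits its reasoning and its
# next action in a fixed textual format, which the agent parses and executes.
# The tool name and output text below are made up for illustration.
MODEL_OUTPUT = """\
Thought: I'll use the map tool to locate restaurants near the theater.
Action: search_map(query="restaurants near the Majestic Theater")"""

def parse_react_step(text: str):
    """Extract the thought and the requested action from a ReAct-style reply."""
    thought = re.search(r"Thought:\s*(.+)", text).group(1)
    action = re.search(r"Action:\s*(\w+)\((.*)\)", text)
    return thought, action.group(1), action.group(2)

thought, tool_name, raw_args = parse_react_step(MODEL_OUTPUT)
# In a real agent, the observation is the tool's actual output, appended to the
# prompt before the next Thought/Action step.
observation = f"Observation: called {tool_name} with ({raw_args})"
print(thought)
print(observation)
```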

The tools available to the agent can include local tools and remote tools such as databases, microservices, and software as a service. A tool’s specification includes a natural-language explanation of how and when it’s used and the syntax of its API calls.
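
Here is a sketch of what such a specification might look like for the restaurant example, written as a plain Python dict. The field names follow common tool-calling conventions (a name, a natural-language description, and a JSON Schema for the arguments); they are illustrative rather than a specific AgentCore or MCP schema.

```python
# A sketch of a tool specification: a natural-language description telling the
# model how and when to use the tool, plus a schema for its call syntax.
# Field names are illustrative, not a specific AgentCore or MCP format.
find_restaurants_spec = {
    "name": "find_restaurants",
    "description": (
        "Find restaurants near a given address. Use this when the user wants "
        "to eat near a specific place, such as a theater. Returns the name, "
        "cuisine, price range, and distance for each match."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "near": {"type": "string", "description": "Street address or landmark"},
            "radius_miles": {"type": "number", "description": "Search radius in miles"},
        },
        "required": ["near"],
    },
}
```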

AgentCore lets the developer use any agent development framework and any model.

The developer can also tell the agent to, essentially, build its own tools on the fly. Say that a tool retrieves a table stored as comma-separated text, and to fulfill its goal, the agent needs to sort the table.

Sorting a table by repeatedly sending it through an LLM and evaluating the results would be a colossal waste of resources — and it’s not even guaranteed to give the right result. Instead, the developer can simply instruct the agent to generate its own Python code when it encounters a simple but repetitive task. These snippets of code can run locally alongside the agent or in a dedicated secure code interpreter tool like AgentCore’s Code Interpreter.
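
For instance, rather than pushing the table through the LLM, the agent can emit a few lines of ordinary Python and hand them to the interpreter. The snippet below is the kind of throwaway code an agent might generate; the table contents and column names are made up for illustration.

```python
# The kind of snippet an agent might generate and run in a sandboxed code
# interpreter: sort a comma-separated table deterministically instead of
# asking the LLM to do it. The data and the "price" column are illustrative.
import csv
import io

raw_table = """name,cuisine,price
Pizzeria Uno,pizza,12
Saffron Indian Kitchen,indian,18
Luigi's,pizza,15
"""

rows = list(csv.DictReader(io.StringIO(raw_table)))
rows.sort(key=lambda r: float(r["price"]))  # cheap, deterministic, correct
for row in rows:
    print(row["name"], row["price"])
```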

One of the things I like about Strands is the flexibility it offers in dividing responsibility between the LLM and the developer. Once the tools available to the agent have been specified, the developer can simply tell the agent to use the appropriate tool when necessary. Or the developer can specify which tool to use for which types of data and even which data items to use as arguments during which function calls.
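
In code, that division of labor can be as simple as registering a tool and telling the agent, in the system prompt, when to use it. The sketch below is written in the style of the Strands Agents Python SDK; treat the exact imports and signatures as assumptions and check the Strands documentation, since the point here is the pattern, not the precise API.

```python
# A sketch in the style of the Strands Agents SDK: register a tool, then let
# the model decide when to call it. Treat the imports and signatures as
# assumptions; consult the Strands documentation for the exact API.
from strands import Agent, tool

@tool
def find_restaurants(near: str) -> list[str]:
    """Find restaurants near an address."""
    return ["Pizzeria Uno", "Saffron Indian Kitchen"]  # stand-in for a real lookup

agent = Agent(
    tools=[find_restaurants],
    system_prompt="Use find_restaurants whenever the user wants to eat near a place.",
)
agent("Book me dinner near the Majestic Theater after the 8pm showing.")
```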

Similarly, the developer can simply tell the agent to generate Python code when necessary to automate repetitive tasks or, alternatively, tell it which algorithms to use for which data types and even provide pseudocode. The approach can vary from agent to agent.

Strands is open source and can be used by developers deploying agents in any context; conversely, AgentCore customers can build their agents using any development tools they choose.

Runtime

Historically, there were two main ways to isolate code running on shared servers: containerization, which was efficient but offered lower security, and virtual machines (VMs), which were secure but came with a lot of computational overhead.

In 2018, Amazon Web Services’ (AWS’s) Lambda serverless-computing service deployed Firecracker, a new paradigm in server isolation that offered the best of both worlds. Firecracker creates “microVMs”, complete with hardware isolation and their own Linux kernels but with reduced overhead (as low as a few megabytes) and startup times (as low as a few milliseconds). The low overhead means that each function executed on a Lambda server can have its own microVM.

However, because instantiating an agent requires deploying an LLM, together with the memory resources to track the LLM’s inputs and outputs, the per-function isolation model is impractical. So AgentCore uses session-based isolation, where every session with an agent is assigned its own Firecracker microVM. When the session finishes, the LLM’s state information is copied to long-term memory, and the microVM is destroyed. This ensures secure and efficient deployment of hosts of agents across AWS servers.

Tool calls

AgentCore Gateway manages the tool calls issued by the agent.

Just as there are several existing development frameworks for agent creation, there are several existing standards for communication between agents and tools, the most popular of which is MCP. MCP establishes a standard format for passing data between the LLM and its server and a way for servers to describe to the agent what tools and data they have available.

In AgentCore, tool calls are handled by the AgentCore Gateway service. Gateway uses MCP by default, but like most of the other AgentCore components, it’s configurable, and it will support a growing set of protocols over time.
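
Under the hood, MCP exchanges are JSON-RPC messages: a server advertises its tools in response to a tools/list request, and the agent invokes one with tools/call. The sketch below shows representative payloads as Python dicts; the find_restaurants tool is made up, and real clients, servers, and AgentCore Gateway handle transport, request IDs, and validation.

```python
# Representative MCP-style messages as Python dicts (MCP uses JSON-RPC under
# the hood). The find_restaurants tool is made up for illustration.

# What a server might return for a tools/list request: each tool has a name,
# a natural-language description, and a JSON Schema for its inputs.
tools_list_result = {
    "tools": [
        {
            "name": "find_restaurants",
            "description": "Find restaurants near an address.",
            "inputSchema": {
                "type": "object",
                "properties": {"near": {"type": "string"}},
                "required": ["near"],
            },
        }
    ]
}

# What the agent sends to invoke that tool via tools/call.
tools_call_request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "find_restaurants",
        "arguments": {"near": "Majestic Theater"},
    },
}
```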

Sometimes, however, the necessary tool is one without a public API. In such cases, the only way to retrieve data or perform an action is by pointing and clicking on a website. There are a number of services available to perform such computer use, including Amazon’s own Nova Act, which can be used with AgentCore’s secure Browser tool. Computer use makes any website a potential tool for agents, opening up decades of content and valuable services that aren’t yet available directly through APIs.

I mentioned before that code generated by the agent is executed by AgentCore Code Interpreter, but Gateway, again, manages the translation between the LLM’s output and Code Interpreter’s input specs.

Memory

Short-term memory

LLMs are next-word prediction engines. What makes them so astoundingly versatile is that their predictions are based on long sequences of words they’ve already seen, known as context. Context is, in itself, a kind of memory. But it’s not the only kind an agentic system needs.

Suppose, again, that an agent is trying to book a restaurant near a movie theater, and from a map tool, it’s retrieved a couple dozen restaurants within a mile radius. It doesn’t want to dump information about all those restaurants into the LLM’s context: that could wreak havoc with next-word probabilities.

Instead, it can store the complete list in short-term memory and retrieve one or two records at a time, based on, say, the user’s price and cuisine preferences and proximity to the theater. If none of those restaurants pans out, the agent can dip back into short-term memory, rather than having to execute another tool call.
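
A simple way to picture this is a session-scoped store that holds the full tool result, with a retrieval helper that pages only the best matches into the model's context. The sketch below uses an in-memory dict as a stand-in; AgentCore's short-term memory is a managed service, so the real interface differs.

```python
# Short-term memory as a session-scoped store: keep the full tool result out
# of the model's context and page in a few records at a time. A plain dict
# stands in for a managed short-term-memory service.
SESSION_MEMORY: dict[str, dict[str, list[dict]]] = {}

def remember(session_id: str, key: str, records: list[dict]) -> None:
    SESSION_MEMORY.setdefault(session_id, {})[key] = records

def recall_top(session_id: str, key: str, cuisine: str, max_price: float, k: int = 2):
    """Return up to k stored records matching the user's preferences."""
    records = SESSION_MEMORY.get(session_id, {}).get(key, [])
    matches = [r for r in records
               if r["cuisine"] == cuisine and r["price"] <= max_price]
    return sorted(matches, key=lambda r: r["distance_miles"])[:k]

restaurants = [
    {"name": "Pizzeria Uno", "cuisine": "pizza", "price": 12, "distance_miles": 0.2},
    {"name": "Saffron Indian Kitchen", "cuisine": "indian", "price": 18, "distance_miles": 0.4},
    {"name": "Luigi's", "cuisine": "pizza", "price": 15, "distance_miles": 0.9},
]
remember("session-123", "nearby_restaurants", restaurants)
# Only the one or two best candidates go into the LLM's context.
print(recall_top("session-123", "nearby_restaurants", cuisine="pizza", max_price=20))
```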

Long-term memory

Agents also need to remember their prior interactions with their clients. If last week I told the restaurant booking agent what type of food I like, I don’t want to have to tell it again this week. The same goes for my price tolerance, the sort of ambiance I’m looking for, and so on.

Long-term memory allows the agent to look up what it needs to know about prior conversations with the user. Agents don’t typically create long-term memories themselves, however. Instead, after a session is complete, the whole conversation passes to a separate AI model, which creates new long-term memories or updates existing ones.

With AgentCore, memory creation can involve LLM summarization and “chunking”, in which documents are split into sections grouped according to topic for ease of retrieval during subsequent sessions. AgentCore lets the user select strategies and algorithms for summarization, chunking, and other information extraction techniques.
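
To illustrate the shape of that pipeline, the sketch below chunks a finished conversation by topic and produces a placeholder summary for each chunk. In AgentCore, an LLM would do the summarizing and the chunking strategy is configurable, so this shows the flow rather than the real implementation.

```python
# The shape of a post-session memory pipeline: split the transcript into
# topic-grouped chunks, then summarize each chunk for later retrieval.
# The topic tags and the "summary" are trivial stand-ins; in practice an LLM
# performs the summarization.
from collections import defaultdict

transcript = [
    {"topic": "cuisine", "text": "I usually prefer Indian food over pizza."},
    {"topic": "budget", "text": "Let's keep it under $20 a person."},
    {"topic": "cuisine", "text": "Spicy is fine, but nothing too heavy before a movie."},
]

def chunk_by_topic(turns: list[dict]) -> dict[str, list[str]]:
    chunks = defaultdict(list)
    for turn in turns:
        chunks[turn["topic"]].append(turn["text"])
    return dict(chunks)

def summarize(chunk: list[str]) -> str:
    # Stand-in for an LLM summarization call.
    return " ".join(chunk)[:120]

long_term_memories = {topic: summarize(chunk)
                      for topic, chunk in chunk_by_topic(transcript).items()}
print(long_term_memories)
```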

Observability

Agents are a new kind of software system, and they require new ways to think about observing, monitoring, and auditing their behavior. Some of the questions we ask will look familiar: whether the agents are running fast enough, how much they’re costing, how many tool calls they’re making, and whether users are happy. But new questions will arise, too, and we can’t necessarily predict what data we’ll need to answer them.

AgentCore Observability lets the customer track an agent's execution.

In AgentCore Observability, traces provide an end-to-end view of the execution of a session with an agent, breaking down step-by-step which actions were taken and why. For the agent builder, these traces are key to understanding how well agents are working — and providing the data to make them work better.
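
As a rough picture of what a trace contains, here is a minimal sketch of per-step trace events for one session, plus the kind of rollup questions (latency, tool-call counts) you would ask of them. The field names are illustrative; AgentCore Observability emits richer, standardized telemetry.

```python
# A minimal sketch of per-step trace events for one agent session. The field
# names are illustrative, not AgentCore Observability's actual schema.
import time
import uuid

TRACE: list[dict] = []

def record_step(session_id: str, step_type: str, detail: str, latency_ms: float) -> None:
    TRACE.append({
        "session_id": session_id,
        "step_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "type": step_type,  # e.g., "thought", "tool_call", "observation"
        "detail": detail,
        "latency_ms": latency_ms,
    })

session = "session-123"
record_step(session, "thought", "Use the map tool to find nearby restaurants", 210.0)
record_step(session, "tool_call", "find_restaurants(near='Majestic Theater')", 95.0)
record_step(session, "observation", "3 candidates within 0.5 miles", 1.0)

# End-to-end view: how long the session took and how many tool calls it made.
total_ms = sum(e["latency_ms"] for e in TRACE)
tool_calls = sum(1 for e in TRACE if e["type"] == "tool_call")
print(f"{len(TRACE)} steps, {tool_calls} tool call(s), {total_ms:.0f} ms total")
```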

I hope that this explanation has demystified agentic AI enough that you’re ready to try building your own agents. You can find all the tools you’ll need at the AgentCore website.
