Alexa enters the “age of self”

More-autonomous machine learning systems will make Alexa more self-aware, self-learning, and self-service.

Alexa launched in 2014, and in the more than six years since, we’ve been making good on our promise to make Alexa smarter every day. In addition to foundational improvements in Alexa’s core AI technologies, such as speech recognition and natural-language-understanding systems, Alexa scientists have developed technologies that continue to delight our customers, such as whispered speech and Alexa’s new live translation service.

Prem Natarajan, Alexa AI vice president of natural understanding, at a conference in 2018.

But some of the technologies we’ve begun to introduce, together with others we’re now investigating, are harbingers of a step change in Alexa’s development — and in the field of AI itself. Collectively, these technologies will bring a new level of generalizability and autonomy to both the Alexa voice service and the tools available to Alexa developers, ushering in what I like to think of as a new “age of self” in artificial intelligence, an age in which AI systems such as Alexa become more self-aware and more self-learning, and in which they lend themselves to self-service by experienced developers and even end users.

By self-awareness, I mean the ability to maintain an awareness of ambient state (e.g., time of day, thermostat readings, and recent actions) and to employ commonsense reasoning to make inferences that reflect that awareness and prior/world knowledge. Alexa hunches can already recognize anomalies in customers’ daily routines and suggest corrections — noticing that a light was left on at night and offering to turn it off, for instance. Powered by commonsense reasoning, self-awareness goes further: for instance, if a customer turns on the television five minutes before the kids’ soccer practice is scheduled to end, an AI of the future might infer that the customer needs a reminder about pickup.
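
To make the flavor of that inference concrete, here is a minimal sketch in which an assistant combines one ambient signal with calendar context. Everything in it — the AmbientState structure, the event names, and the pickup rule — is a hypothetical illustration, not how Alexa is actually implemented.

```python
# Hypothetical sketch of ambient-state reasoning; not Alexa's implementation.
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class AmbientState:
    """Snapshot of context the assistant can observe."""
    now: datetime
    recent_actions: list = field(default_factory=list)   # e.g., ["tv_on"]
    calendar: list = field(default_factory=list)         # (end_time, label) pairs


def suggest_reminder(state: AmbientState):
    """If the TV goes on shortly before a calendar event ends,
    infer that the customer may need a pickup reminder."""
    for end_time, label in state.calendar:
        about_to_end = timedelta(0) < end_time - state.now <= timedelta(minutes=10)
        if "tv_on" in state.recent_actions and about_to_end:
            return f"Reminder: {label} ends at {end_time:%H:%M}."
    return None


state = AmbientState(
    now=datetime(2021, 3, 1, 16, 55),
    recent_actions=["tv_on"],
    calendar=[(datetime(2021, 3, 1, 17, 0), "soccer practice pickup")],
)
print(suggest_reminder(state))  # -> Reminder: soccer practice pickup ends at 17:00.
```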

In the "age of self", AIs will be able to infer customers’ implicit intentions from observable temporal patterns, such as interactions with smart-home devices like thermostats, door locks, and lights.

Self-learning is Alexa’s ability to improve and expand its capabilities without human intervention. And like self-awareness, self-learning employs reasoning: for example, does the customer’s response to an action indicate dissatisfaction with that action? Similarly, when a customer issues an unfamiliar command, a truly self-learning Alexa would be able to infer what it might mean — perhaps by searching the web or exploring a knowledge base — and suggest possibilities.
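
One such implicit signal is easy to picture: the customer interrupts a response and immediately issues a near-duplicate of the previous request. The toy check below flags that pattern; the function and threshold are invented for illustration, not drawn from Alexa's production logic.

```python
# Toy illustration of one implicit dissatisfaction signal: a barge-in
# followed by a near-duplicate of the request just handled.
from difflib import SequenceMatcher


def is_rephrase(prev_utterance: str, next_utterance: str,
                barged_in: bool, threshold: float = 0.7) -> bool:
    """Flag a likely defect: an interruption followed by a request
    that is lexically very similar to the previous one."""
    similarity = SequenceMatcher(
        None, prev_utterance.lower(), next_utterance.lower()
    ).ratio()
    return barged_in and similarity >= threshold


print(is_rephrase("play wow by kate bush",
                  "play the song wow by kate bush",
                  barged_in=True))  # -> True
```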

Self-service means, essentially, the democratization of AI. Alexa customers with no programming experience should be able to customize Alexa’s services and even create new Alexa capabilities, and skill developers without machine learning experience should be able to build complex yet robust conversational skills. Colloquially, these are the conversational-AI equivalents of no-code and low-code development environments.

To be clear, the age of self is not yet upon us, and its dawning will require the maturation of technologies still under development, at Amazon and elsewhere. But some of Alexa’s recently launched capabilities herald a lightening in the Eastern sky.

Self-awareness

In 2018, we launched Alexa hunches for the smart home, with Alexa suggesting actions to take in response to anomalous sensor data. By early 2021, the science had advanced enough for us to launch an opt-in service in which Alexa takes action immediately and automatically. In the meantime, we’ve also been working to expand hunches to Alexa services other than the smart home.

But commonsense reasoning requires something more — the ability to infer customers’ implicit intentions from observable temporal patterns. For instance, what does it mean if the customer turns down the thermostat, turns out the lights, locks the front door, and opens the garage? What if the customer initiates an interaction with a query like “Alexa, what’s playing at Rolling Hills Cine Plaza?”
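
One way to make such inference concrete is to test recent events for known in-order patterns. The sketch below does that for a hypothetical "leaving home" sequence; the event names and the pattern itself are assumptions for illustration only.

```python
# Hedged sketch: infer an implicit intention from an ordered sequence
# of smart-home events. Event names and pattern are invented.
LEAVING_HOME = ["thermostat_down", "lights_off", "front_door_locked", "garage_open"]


def matches_pattern(events: list, pattern: list, window: int = 6) -> bool:
    """True if the last `window` events contain the pattern, in order."""
    recent = events[-window:]
    it = iter(recent)
    return all(step in it for step in pattern)  # in-order subsequence test


events = ["kitchen_lights_on", "thermostat_down", "lights_off",
          "front_door_locked", "garage_open"]
if matches_pattern(events, LEAVING_HOME):
    print("Inferred intention: leaving home -> offer to arm security.")
```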

In 2020, we took steps toward commonsense reasoning with a new Alexa function that can infer a customer’s latent goal — the ultimate aim that lies behind a sequence of requests. When a customer asks for the weather at the beach, for instance, Alexa might use that query, in combination with other contextual information, to infer that the customer may be interested in a trip to the beach. Alexa could then offer the current driving time to the beach.

To retrieve that information, Alexa has to know to map the location of the weather request to the destination variable in the route-planning function. This illustrates another aspect of self-awareness: the ability to track information across contexts.

That ability is at the core of the night-out experience we’ve developed, which engages the customer in a multiturn conversation to plan a complete night out, from buying movie tickets to making restaurant and ride-share reservations. The night-out experience tracks times and locations across skills, revising them on the fly as customers evaluate different options. To build the experience, we leveraged the machinery of Alexa Conversations, a service that enables developers to quickly and easily create dialogue-driven skills, and we drew on our growing body of research on dialogue state tracking.
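
A drastically simplified tracker conveys the core mechanic: times and locations live in shared slots, and revising a slot notifies every skill that depends on it. All skill and slot names below are hypothetical.

```python
# Simplified dialogue-state tracker in the spirit of the night-out
# example; names and structure are invented for illustration.
class DialogueState:
    def __init__(self):
        self.slots = {}          # shared slots, e.g., "movie_end_time"
        self.listeners = {}      # slot -> downstream skills to update

    def bind(self, slot, skill):
        self.listeners.setdefault(slot, []).append(skill)

    def set_slot(self, name, value):
        self.slots[name] = value
        for skill in self.listeners.get(name, []):
            print(f"[{skill}] re-planning with {name} = {value}")


state = DialogueState()
state.bind("movie_end_time", "restaurant_reservation")
state.bind("movie_end_time", "ride_share")

state.set_slot("movie_end_time", "21:30")  # initial plan
state.set_slot("movie_end_time", "22:00")  # customer picks a later showing;
                                           # dinner and ride update automatically
```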

Dialogue states at several successive dialogue turns

Self-awareness, however, includes an understanding not only of the conversational context but also of the customer’s physical context. In 2020, we demonstrated natural turn-taking on Alexa-enabled devices with cameras. When multiple speakers are engaging with Alexa, Alexa can use visual cues to distinguish between speech the customers are directing at each other and speech they’re directing at Alexa. We’re now working to extend this functionality to devices without cameras by relying solely on acoustic and linguistic signals.

Finally, self-awareness also entails the capacity for self-explanation. Today, most machine learning models are black boxes; even their creators can’t say exactly how they arrive at their outputs. That opacity has made explainable, or interpretable, AI a popular research topic.

Amazon actively publishes on explainable-AI topics. In addition, the Alexa Fund, an Amazon venture capital investment program, has invested in fiddler.ai, a startup that explains machine learning models’ predictions using techniques based on the game-theoretic concept of Shapley values.
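
For readers unfamiliar with the concept, a Shapley value attributes a share of a model's output to each input feature by averaging that feature's marginal contribution over all subsets of the other features. The sketch below computes exact Shapley values for a toy additive model; production explainers rely on far more scalable approximations, and nothing here reflects fiddler.ai's actual code.

```python
# Exact Shapley values over a toy value function; illustrative only.
from itertools import combinations
from math import factorial


def shapley_values(features, value):
    """value() maps a frozenset of feature names to the model output
    with only those features 'active'."""
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (value(s | {f}) - value(s))
        phi[f] = total
    return phi


# Toy additive model: income matters most, zip code not at all.
contrib = {"income": 0.5, "age": 0.1, "zip": 0.0}
value = lambda s: sum(contrib[f] for f in s)
print(shapley_values(["income", "age", "zip"], value))
# -> {'income': 0.5, 'age': 0.1, 'zip': 0.0}
```

For an additive model like this one, each feature's Shapley value equals its individual contribution, which makes the output easy to sanity-check.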

Self-learning

Historically, the AI development cycle has involved collection of data, annotation of that data, and retraining of models on the newly annotated data — all of which add up to a laborious process.

In 2019, we launched Alexa’s self-learning system, which automatically learns to correct errors — both customer errors and errors in Alexa’s language-understanding models — without human involvement. The system relies on implicit signals that a request was improperly handled, as when a customer interrupts a response and rephrases the same request.

Absorbing-Markov-chain models for three different sequences of utterances
Alexa's self-learning system models customer interactions with Alexa as sequences of states; different customer utterances (u0, u1, u2) can correspond to the same state (h0). The final state of a sequence, known as the "absorbing state", indicates the success (checkmark) or failure (X) of a transaction.
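
The figure's formulation can be made concrete with the standard absorbing-Markov-chain machinery: if Q holds the transition probabilities among transient states and R the transitions into the absorbing states, then B = (I - Q)^-1 R gives each starting state's probability of ending in success or failure. The transition probabilities below are invented for illustration.

```python
# Absorbing-Markov-chain sketch with made-up transition probabilities.
import numpy as np

# Transient states: h0 (request), h1 (rephrase). Absorbing: success, failure.
Q = np.array([[0.0, 0.3],    # h0 -> h0, h0 -> h1
              [0.0, 0.0]])   # h1 -> h0, h1 -> h1
R = np.array([[0.6, 0.1],    # h0 -> success, h0 -> failure
              [0.4, 0.6]])   # h1 -> success, h1 -> failure

N = np.linalg.inv(np.eye(2) - Q)   # fundamental matrix
B = N @ R                          # absorption probabilities
print(B[0])  # [0.72 0.28]: P(success), P(failure) starting from h0
```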

Currently, that fully automatic system is correcting 15% of defects. But those are defects that occur across a spectrum of users; only when enough people implicitly identify the same flaw does the system address it. We are working to adapt the same machinery to individual customers’ preferences — so that, for instance, Alexa can learn that when a particular customer asks for the song “Wow”, she means not the Post Malone hit from 2019 but the 1978 Kate Bush song.

Customers today also have the option of explicitly teaching Alexa their preferences. In the fall of 2020, we launched interactive teaching by customers, a capability that enables customers to instruct Alexa how they want certain requests to be handled. For instance, the customer can teach Alexa that the command “reading mode” means lights turned all the way up, while “movie mode” means lights dimmed to twenty percent.
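
Under the hood, the effect is akin to storing a customer-defined mapping from phrases to device actions, as in this toy sketch (the action format is an invented simplification, not Alexa's internal representation):

```python
# Toy version of customer-taught commands; format is hypothetical.
taught_commands = {}


def teach(phrase, actions):
    """Store the actions the customer wants a phrase to trigger."""
    taught_commands[phrase.lower()] = actions


def handle(phrase):
    """Replay the taught actions for a known phrase."""
    for action in taught_commands.get(phrase.lower(), []):
        print("executing:", action)


teach("reading mode", [("living_room_lights", "brightness", 100)])
teach("movie mode", [("living_room_lights", "brightness", 20)])
handle("movie mode")  # -> executing: ('living_room_lights', 'brightness', 20)
```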

Self-service

Interactive teaching is also an early example of how Alexa is enabling more self-service. It extends prior Alexa features, like blueprints, which let customers build their own simple skills from preexisting templates, and routines, which let customers chain together sequences of actions under individual commands.

In March 2021, we announced the public release of Alexa Conversations, which allows developers to create dialogue-driven skills by uploading sample dialogues. Alexa Conversations’ sophisticated machine learning models use those dialogues as templates for generating larger corpora of synthetic training data. From that data, Alexa Conversations automatically trains a machine learning model.
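
The sketch below conveys the basic idea — expanding a few carrier phrases and slot values into a combinatorially larger set of utterances — though Alexa Conversations' actual generative models are far more sophisticated, and the templates here are invented.

```python
# Much-simplified synthetic-data expansion; templates are invented.
from itertools import product

carriers = ["book {n} tickets for {movie}",
            "get me {n} seats to see {movie}",
            "I'd like {n} tickets to {movie}"]
slot_values = {"n": ["two", "three", "four"],
               "movie": ["the 7 pm show", "the new animated film"]}

synthetic = [c.format(n=n, movie=m)
             for c, n, m in product(carriers, slot_values["n"], slot_values["movie"])]
print(len(synthetic))   # 3 carriers x 3 counts x 2 movies = 18 utterances
print(synthetic[0])     # book two tickets for the 7 pm show
```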

Alexa Conversations does, however, require the developer to specify the set of entities that the new model should act upon and an application programming interface for the skill. So while it requires little familiarity with machine learning, it assumes some programming experience. 

An Alexa feature known as catalogue value suggestions suggests entity names to skill developers on the basis of their "embeddings", or locations in a representational space. If the embeddings of values (such as bird, dog, or cat) for a particular entity type are close enough (dotted circles) to their averages (solid circle and square), the system suggests new entity names; otherwise, it concludes that suggestions would be unproductive.

We are steadily chipping away at even that requirement by making development for Alexa easier and more intuitive. As Alexa’s repertoire of skills grows, for instance, entities are frequently reused, and we already have systems that can inform developers about entity types that they might not have thought to add to their skills. This is a step toward a self-service model in which developers no longer have to provide exhaustive lists of entities — or, in some cases, any entities at all.
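
The figure above describes the underlying test: suggest new values only when an entity's existing values cluster tightly in embedding space. Here is a hedged sketch of that coherence check, with stand-in embeddings and an arbitrary threshold.

```python
# Hedged sketch of the embedding-coherence test; values are stand-ins.
import numpy as np


def suggestions_useful(embeddings: np.ndarray, max_spread: float = 0.5) -> bool:
    """True if values sit close enough to their centroid for
    nearest-neighbor suggestions to be reliable."""
    centroid = embeddings.mean(axis=0)
    spread = np.linalg.norm(embeddings - centroid, axis=1).mean()
    return spread <= max_spread


# Tight cluster (e.g., bird, dog, cat as Animal values) -> suggest.
animals = np.array([[1.0, 1.0], [1.1, 0.9], [0.9, 1.1]])
print(suggestions_useful(animals))   # True

# Scattered values -> suggestions would be unproductive.
mixed = np.array([[1.0, 1.0], [5.0, -2.0], [-3.0, 4.0]])
print(suggestions_useful(mixed))     # False
```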

Another technique that makes it easier to build machine learning models is few-shot learning, in which an existing model is generalized to a related task using only a handful of new training examples. This is an active area of research at Alexa: earlier this year, for example, we presented a paper at the Spoken Language Technology (SLT) conference describing a new approach to few-shot learning for natural-language-understanding tasks. Compared to its predecessors, our approach reduced the error rate on those tasks by up to 12.4% when each model was trained on only 10 examples.
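
As a concrete, if simplistic, instance of few-shot classification, the sketch below labels a new utterance by its average word overlap with a handful of examples per intent. It is a toy nearest-example scheme, not the method from the paper.

```python
# Toy few-shot intent classifier: average Jaccard word overlap with a
# handful of labeled examples per intent. Not the SLT paper's method.
def similarity(a: str, b: str) -> float:
    """Jaccard overlap between word sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)


def classify(text: str, support: dict) -> str:
    """`support` maps each intent to a few example utterances."""
    return max(support, key=lambda intent: sum(
        similarity(text, ex) for ex in support[intent]) / len(support[intent]))


support = {
    "play_music": ["play some jazz", "put on my workout playlist"],
    "set_timer": ["set a timer for ten minutes", "start a five minute timer"],
}
print(classify("play classical music", support))  # -> play_music
```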

These advances, along with the others reported on Amazon Science, demonstrate that the Alexa AI team continues to accelerate its pace of invention. More exciting announcements lie just over the horizon. I’ll be stopping back here every once in a while to update you on Alexa’s journey into the age of self.
