How Amazon Robotics researchers are solving a “beautiful problem”

Teaching robots to stow items presents a challenge so large it was previously considered impossible — until now.

The rate of innovation in machine learning is simply off the charts — what is possible today was barely on the drawing board even a handful of years ago. At Amazon, this progress has produced a robotic system that can not only identify potential space in a cluttered storage bin, but also sensitively manipulate that bin’s contents to create that space before placing additional items inside — a result that, until recently, was considered impossible.

This journey starts when a product arrives at an Amazon fulfillment center (FC). The first order of business is to make it available to customers by adding it to the FC's available inventory.

The stowing process

In practice, this means picking it up and stowing it in a storage pod. A pod is akin to a big bookcase, made of sturdy yellow fabric, that comprises up to 40 cubbies, known as bins. Each bin has strips of elastic across its front to keep the items inside from falling out. These pods are carried by a wheeled robot, or drive unit, to the workstation of the Amazon associate doing the stowing. When the pod is mostly full, it is wheeled back into the warehouse, where the items it contains await a customer order.

Stowing is a major component of Amazon’s operations. It is also a task that long seemed intractable from a robotic-automation perspective, given the subtlety of thought and dexterity required to do the job.

Picture the task. You have an item for stowing in your hand. You gauge its size and weight. You look at the array of bins before you, implicitly perceiving which are empty, which are already full, which bins have big chunks of space in them, and which have the potential to make space if you, say, pushed all the items currently in the bin to one side. You select a bin, move the elastic out of the way, make room for the item, and pop it in. Job done. Now repeat.

“Breaking all existing industrial robot thinking”

This stow task requires two high-level capabilities not generally found in robots. One, an excellent understanding of the three-dimensional world. Two, the ability to manipulate a wide range of packaged but sometimes fragile objects — from lightbulbs to toys — firmly, but sensitively: pushing items gently aside, flipping them up, slotting one item at an angle between other items and so on.

A simulation of robotic stowing

For a robotic system to stand a chance at this task, it would need intelligent visual perception, a free-moving robot arm, an end-of-arm manipulator unlike any yet engineered, and a keen sense of how much force it is exerting. In short: good luck with that.

“Stow fundamentally breaks all existing industrial robotic thinking,” says Siddhartha Srinivasa, director of Amazon Robotics AI. “Industrial manipulators are typically bulky arms that execute fixed trajectories very precisely. It’s very positional.”

When Srinivasa joined Amazon in 2018, multiple robotics programs had already attempted stowing into fabric pods using stiff, positional manipulators.

“They failed miserably at it because it's a nightmare. It just doesn't work unless you have the right computational tool: you must not think physically, but computationally.”

Srinivasa knew the science for robotic stow didn’t exist yet, but he knew the right people to hire to develop it. He approached Parker Owan as he completed his PhD at the University of Washington.

A “beautiful problem”

Parker Owan, Robotics AI senior applied scientist, poses next to a robotic arm and in front of a yellow soft-sided storage pod

“At the time I was working on robotic contact, imitation learning, and force control,” says Owan, now a Robotics AI senior applied scientist. “Sidd said ‘Hey, there’s this beautiful problem at Amazon that you might be interested in taking a look at’, and he left it at that.”

The seed was planted. Owan joined Amazon, and then in 2019 dedicated himself to the stow challenge.

“I came at it from the perspective of decision-making algorithms: the perception needs; how to match items to the appropriate bin; how to leverage information of what's in the bin to make better decisions; motion planning for a robot arm moving through free space; and then actually making contact with products and creating space in bins.”

Aaron Parness, Robotics AI senior manager of applied science, poses near a robotic arm

About six months into his exploratory work, Owan was joined by a small team of applied scientists, and hardware expert Aaron Parness, now a Robotics AI senior manager of applied science. Parness admits he was skeptical.

“My initial reaction was ‘Oh, how brave and naïve that this guy, fresh out of his PhD, thinks robots can deal with this level of clutter and physical contact!’”

But Parness was quickly hooked. “Once you see how the problem can be broken down and structured, it suddenly becomes clear that there's something super useful and interesting here.”

“Uncharted territory”

From a hardware perspective, the team needed to find a robot arm with force feedback. They tried several before landing on an effective model. The arm provides feedback hundreds of times per second on how much force it is applying and any resistance it is meeting. Using this information to control the robot is called compliant manipulation.

“We knew from the beginning that we needed compliant manipulation, and we hadn't seen anybody in industry do this at scale before,” says Owan. “It was uncharted territory.”
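
To make the idea of compliant manipulation concrete, here is a minimal sketch of an admittance-style control step: the commanded motion yields in proportion to the contact force the arm senses, rather than rigidly tracking a fixed trajectory. The function name, gains, and force limit are illustrative assumptions, not details of Amazon’s system.

```python
# Hypothetical admittance-style compliance along a single axis of motion.
# All parameter values are illustrative, not Amazon's actual tuning.

def admittance_step(target_pos, current_pos, measured_force_n,
                    force_limit_n=15.0, compliance_m_per_n=0.0005,
                    max_step_m=0.005):
    """Return the next commanded position (meters) along one axis.

    measured_force_n is the contact force sensed along that axis, which the
    arm reports hundreds of times per second.
    """
    # Nominal step toward the target, capped for safety.
    error = target_pos - current_pos
    step = max(-max_step_m, min(max_step_m, error))

    # If contact force exceeds the limit, back off in proportion to the
    # excess instead of pushing through -- this is what makes the motion
    # compliant rather than purely positional.
    excess = max(0.0, measured_force_n - force_limit_n)
    yield_back = compliance_m_per_n * excess * (1 if step > 0 else -1)

    return current_pos + step - yield_back


# Example: the arm is 2 cm from its target but already feels 25 N of
# resistance, so the commanded motion drops to zero instead of forcing
# the contact.
print(admittance_step(target_pos=0.02, current_pos=0.0, measured_force_n=25.0))
```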

Parness got to work on the all-important hardware. The problem of moving the elastics aside to stow an item was resolved using a relatively simple hooking system.

How the band separator works

The end-of-arm tool (EOAT) proved to be a next-level challenge. One reason that stowing is difficult for robots is the sheer diversity of items Amazon sells, and their associated packaging. You might have an unpumped soccer ball next to a book, next to a sports drink, next to a T-shirt, next to a jewelry box. A robot would need to handle this level of variety. The EOAT evolved quickly over two years, with multiple failures and iterations.

Paddles grip an array of items

“In the end, we found that gently squeezing an item between two paddles was a more stable way to hold items than using suction cups or mechanical pinchers,” says Parness.

However, the paddle setup presented a challenge when trying to insert held items into bins — the paddles kept getting in the way. Parness and his growing team hit upon an alternative: holding the item next to a bin, then simultaneously opening the paddles and using a plunger to push the item in. This drop-and-push technique was prone to errors because not all items reacted to it in the same way.

The EOAT’s next iteration saw the team put miniature conveyor belts on each paddle, enabling the EOAT to feed items smoothly into the bins without having to enter the bin itself.

The miniature conveyor belt works to bring an item to its designated bin

“With that change, our stowing success rate jumped from about 80% to 99%. That was a eureka moment for us — we knew we had our winner,” says Parness.

Making space with motion primitives

The ability to place items in bins is crucial, but so is making space in cluttered bins. To better understand what would be required of the robot system, the team closely studied how they performed the task themselves. Owan even donned a head camera to record his efforts.

The team was surprised to find that the vast majority of space-making hand movements within a fabric bin could be boiled down to four types, or “motion primitives”: a sideways sweep of the bin’s current contents; flipping upright items that are lying flat; stacking; and slotting something at an angle into the gap between other items.

The process of making space

The engineers realized that the EOAT’s paddles could not perform this bin-manipulation task themselves, because they would get in the way. The solution, in the end, was surprisingly simple: a thin metal sheet that could extend from the EOAT, dubbed “the spatula”. The extended spatula can firmly, but sensitively, push items to one side, flip them up, and generally make room in a bin, before the paddles eject an item into the space created.

But how does the system know how full the pod’s bins are, and how does it decide where, and how, it will make space for the next item to be stowed? This is where visual perception and machine learning come into play.

Deciding where to attempt to stow an item requires a good understanding of how much space, in total, is available in each fabric bin. In an ideal world, this is where 3D sensor technologies such as LiDAR would be used. However, because the elastic cords across the front of every bin partially block the view inside, that option isn’t feasible.

A robot arm executes motion primitives

Instead, the system’s visual perception is based on cameras pointed at the pod that feed their image data to a machine learning system. Based on what it can see of each bin’s contents, the system “erases” the elastics and models what is lying unseen in the bin, and then estimates the total available space in each of the pod’s bins.

Often there is space available in a cluttered bin, but it is not contiguous: there are pockets of space here and there. The ML system — based in part on existing models developed by the Amazon Fulfillment Technologies team — then predicts how much contiguous space it can create in each bin, given the motion primitives at its disposal.
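
As a rough illustration of that prediction step, the sketch below scores each bin by the contiguous gap a sweep primitive could plausibly create from its scattered free-space pockets, then picks the bin that best fits the item. The data structures, the sweep-efficiency factor, and the fitting rule are assumptions made for illustration; the production models are learned from data.

```python
# Hypothetical sketch: choose a bin by estimating how much contiguous space a
# sweep could create from scattered free-space pockets. All numbers are made up.

def creatable_contiguous_space(free_pockets_cm, sweep_efficiency=0.9):
    """Largest contiguous gap (cm) a sweep might consolidate in one bin."""
    # Assume a sweep recovers most, but not all, of the scattered free space.
    return sweep_efficiency * sum(free_pockets_cm)

def pick_bin(item_width_cm, bins):
    """Return the id of the bin whose creatable space best fits the item, or None."""
    candidates = []
    for bin_id, pockets in bins.items():
        space = creatable_contiguous_space(pockets)
        if space >= item_width_cm:
            # Prefer the tightest fit that still accommodates the item.
            candidates.append((space - item_width_cm, bin_id))
    return min(candidates)[1] if candidates else None

# Example: pockets of free space (cm) seen in three bins after "erasing" the elastics.
bins = {"A": [4.0, 3.0, 2.0], "B": [12.0, 6.0], "C": [1.0, 1.5]}
print(pick_bin(item_width_cm=8.0, bins=bins))   # "A": 9 cm of pockets -> ~8.1 cm contiguous
```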

How the perception system "sees" available space

“These primitives, each of which can be varied as needed, can be chained in infinitely many ways,” Srinivasa explains. “It can, say, flip it over here, then push it across and drop the item in. Humans are great at identifying these primitives in the first place, and machine learning is great at organizing and orchestrating them.”
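
One way to picture this chaining is to treat each primitive as a small parameterized action and a plan as an ordered list of them, as in the hypothetical sketch below. The class names and parameters are invented for illustration; the article does not describe Amazon’s actual planning interface.

```python
# Hypothetical representation of the four space-making primitives and a chained plan.

from dataclasses import dataclass
from typing import List, Union

@dataclass
class Sweep:            # push the bin's current contents to one side
    direction: str      # "left" or "right"
    distance_cm: float

@dataclass
class FlipUp:           # tip a flat-lying item upright
    target_item: str

@dataclass
class Stack:            # place one item on top of another
    base_item: str

@dataclass
class SlotAtAngle:      # wedge an item at an angle into a gap between others
    gap_position_cm: float
    angle_deg: float

Primitive = Union[Sweep, FlipUp, Stack, SlotAtAngle]

def describe(plan: List[Primitive]) -> str:
    """Human-readable summary of a chained plan."""
    return " -> ".join(type(p).__name__ for p in plan)

# Example chain along the lines of the quote: flip an item upright, sweep
# everything to one side, then slot the new item into the gap that opens up.
plan: List[Primitive] = [
    FlipUp(target_item="boxed toy"),
    Sweep(direction="right", distance_cm=10.0),
    SlotAtAngle(gap_position_cm=2.0, angle_deg=30.0),
]
print(describe(plan))   # FlipUp -> Sweep -> SlotAtAngle
```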

When the system has a firm idea of the options, it considers the items in its buffer — an area near the robot arm’s gantry in which products of various shapes and sizes wait to be stowed — and decides which items are best placed in which bins for maximum efficiency.

“For every potential stow, the system will predict its likelihood of success,” says Parness. “When the best prediction of success falls to about 96%, which happens when a pod is nearly full, we send that pod off and wheel in a new one.”
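
That pod-swap rule is simple enough to state directly in code. The sketch below is a minimal rendering of the decision Parness describes, assuming a hypothetical list of per-stow success predictions; the real predictions come from learned models, not hard-coded values.

```python
# Minimal sketch of the pod-swap rule: if even the best candidate stow is
# predicted to succeed less than ~96% of the time, retire the pod.

POD_SWAP_THRESHOLD = 0.96

def should_swap_pod(stow_success_predictions):
    """stow_success_predictions: predicted probability of success for each
    candidate (item, bin) stow currently available for this pod."""
    if not stow_success_predictions:
        return True                           # nothing left that can be stowed
    return max(stow_success_predictions) < POD_SWAP_THRESHOLD

# A nearly full pod: the best remaining stow is predicted at 94% -> swap it out.
print(should_swap_pod([0.91, 0.94, 0.88]))    # True
print(should_swap_pod([0.99, 0.97]))          # False
```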

“Robots and people work together”

At the end of summer 2021, with its potential feasibility and value becoming clearer, the senior leadership team at Amazon gave the project their full backing.

“They said ‘As fast as you can go; whatever you need’. So this year has been a wild, wild ride. It feels like we’re a start-up within Amazon,” says Parness, who noted the approach has significant advantages for FC employees as well.

“Robots and people work together in a hybrid system. Robots handle repetitive tasks and easily reach to the high and low shelves. Humans handle more complex items that require intuition and dexterity. The net effect will be more efficient operations that are also safer for our workers.”

Prototypes of the robotic stow workstation are running in a lab in Seattle, Washington, and another system has been installed at an FC in Sumner, Washington, where it handles live inventory. Already, the prototypes are stowing items reliably and demonstrating the viability of the system.

“And there are always four or five scientists and engineers hovering around the robot, documenting issues and looking for improvements,” says Parness.

This year, in a stowing test designed to include a variety of challenging product attributes — bagged items, irregular items with an offset center of gravity, and so on — the system successfully stowed 94 of 95 items. Of course, some items can never be stowed by this system, including particularly bulky or heavy products, or cylindrical items that don’t behave themselves on conveyor belts. The team’s ultimate target is to be able to stow 85% of products stocked by a standard Amazon FC.

“Interacting with chaotic arrangements of items, unknown items with different shapes and sizes, and learning to manipulate them in intelligent ways, all at Amazon scale — this is ground-breaking,” says Owan. “I feel like I’m at ground zero for a big thing, and that’s what makes me excited to come to work every day.”

“Stow will be the first brownfield automation project, at scale, at Amazon,” says Srinivasa. “Surgically inserting automation into existing buildings is very challenging, but we're enacting a future in which robots and humans can actually work side by side without us having to dramatically change the human working environment.

"One of the advantages of the type of brownfield automation we do at Robotics AI is that it’s minimally disruptive to the process flow or the building space, which means that our robots can truly work alongside humans," Srinivasa adds. "This is also a future benefit of compliant arms as they can, via software and AI, be made safer than industrial arms.”

Robots and humans working side by side is key to the long-term expansion of this technology beyond retail, says Parness.

“Think of robots loading delicate groceries or, longer term, loading dishwashers or helping people with tasks around the house. Robots with a sense of force in their control loop are a new paradigm in compliant-robotics applications.”
