This visualization shows a Zoox vehicle aligning lidar data to Zoox’s 3D map to localize itself in downtown San Francisco.
Zoox

How Zoox vehicles “find themselves” in an ever-changing world

Advanced machine learning systems help autonomous vehicles react to unexpected changes.

To drive successfully around an urban environment, a human must be able to trust their eyes and other senses, know where they are, understand the permissible ways to move their vehicle safely, and, of course, know how to reach their destination.

Building these abilities, and so many more, into an autonomous electric vehicle designed to transport customers smoothly and safely around densely populated cities takes an astonishing amount of technological innovation. Since its founding in 2014, Zoox has been developing autonomous ride-hailing vehicles, and the systems that support them, from the ground up. The company, which is based in Foster City, California, became an independent subsidiary of Amazon in 2020.

Zoox Fully Autonomous Vehicle at Coit Tower, San Francisco
The Zoox L5 fully autonomous, all-electric robotaxi has no fixed front or back, can reach speeds of up to 75 miles per hour, and can move all four wheels independently.
Zoox

The Zoox purpose-built robotaxi is an autonomous, pod-like electric vehicle that can carry four passengers in comfort. It has no fixed front or back, being equally happy to drive in either direction at up to 75 miles per hour, and can move all four wheels independently. There are no manual driving controls inside the vehicle.

Zoox has already done a great deal of testing of its autonomous driving systems using a fleet of retrofitted Toyota Highlander vehicles — with a human driver at the wheel, ready to take over if needed — in San Francisco, Las Vegas, Foster City, and Seattle.

Central to the Zoox navigation system is a cluster of capabilities called calibration, localization, and mapping. Only through this combination of abilities can Zoox vehicles understand their environment with exquisite precision, know where they are in relation to everything in their vicinity and beyond, and know exactly where they are going.

Zoox test vehicles, in this instance Toyota Highlanders, are retrofitted with an almost identical sensor configuration and compute system to the purpose-built vehicle.
Zoox

This is the domain of Zoox’s CLAMS (Calibration, Localization, and Mapping Simultaneously) and Zoox Road Network (ZRN) teams, which together enable the vehicle to meaningfully understand its location and process its surroundings. To get an idea of how these elements work in concert, Amazon Science spoke to several members of these teams.

In terms of awareness of its environment, the Zoox vehicle can fairly be likened to an all-seeing eye. Its state-of-the-art sensor architecture comprises lidar (light detection and ranging) sensors, radars, visual cameras, and longwave-infrared cameras. These are arrayed symmetrically around the outside of the vehicle, providing an overlapping, 360-degree field of view.

With this many sensors in play, it is critical that their input is stitched together accurately to create a true and self-consistent picture of everything happening all around the vehicle, moment to moment. To do that, the vehicle needs to know exactly where its sensors are in relation to each other, and with sensors of such high resolution, it’s not enough simply to know where the sensors were attached to the vehicle in the first place.

“To a very minor but still important degree, every vehicle is a special snowflake in some way,” says Taylor Arnicar, staff technical program manager, who oversees the CLAMS and ZRN teams. “And the other reality is we’re exposing these vehicles to rather harsh real-world conditions. There’s shock and vibe, thermal events, and all these things can cause very slight changes in sensor positioning.” Were such changes ignored, the result could be unacceptably “blurry” vision, Arnicar says.

In other autonomous-robotics applications, sensor calibration typically entails the robot looking at a specific calibration target, displayed on surrounding infrastructure, such as a wall. With the Zoox vehicle destined for the ever-changing urban environment, the Zoox team is pioneering infrastructure-free calibration.

This animation shows a Zoox system aligning color camera edges to lidar depth edges.
Zoox

“That means we rely on the natural environment — whatever objects, shapes, and colors are in the world around the vehicle as it drives,” says Arnicar. One way the team does this is by automatically identifying image gradients — such as the edges of buildings or the trunks of trees — from the vehicle’s color camera data and aligning those with depth edges in the lidar data.
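
To make the idea concrete, here is a minimal sketch of that kind of edge alignment. It is a hypothetical illustration, not Zoox’s implementation: the function names, the crude gradient edge detector, and the pinhole camera model are all assumptions. The score peaks when lidar depth-edge points, projected through a candidate sensor pose (R, t), land on strong image gradients; a calibrator would then search near the nominal mounting pose for the (R, t) that maximizes it.

```python
import numpy as np

def edge_map(gray):
    # crude gradient-magnitude edge image (stand-in for a real edge detector)
    gx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))
    gy = np.abs(np.diff(gray, axis=0, prepend=gray[:1, :]))
    return gx + gy

def project_points(points_lidar, K, R, t):
    # rigid transform of (N, 3) lidar points into the camera frame, then a
    # pinhole projection; (R, t) is the candidate lidar-to-camera extrinsic
    cam = R @ points_lidar.T + t[:, None]
    cam = cam[:, cam[2] > 0.1]            # keep points in front of the camera
    uv = K @ cam
    return (uv[:2] / uv[2]).T             # (M, 2) pixel coordinates

def alignment_score(image_edges, depth_edge_points, K, R, t):
    # total image-edge strength under the projected lidar depth-edge points;
    # it peaks when camera edges and lidar depth edges coincide
    uv = np.round(project_points(depth_edge_points, K, R, t)).astype(int)
    h, w = image_edges.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return image_edges[uv[ok, 1], uv[ok, 0]].sum()
```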

It is worth emphasizing that the Zoox vehicle’s superpower is superhuman perception of its surroundings. With so many sensors mounted externally, in pods on the corners of the vehicle, it can see what’s coming around every corner before a human driver could. Its lidars and visual cameras tell it what lies in every direction with high precision. It even boasts a kind of X-ray vision: “Certain materials don’t obstruct the radar,” says Elena Strumm, Zoox’s engineering manager for mapping algorithms. “When a bicyclist is cycling behind a bush, for example, we might get a really clear radar signature on them, even if that bush has occluded the lidar and visual cameras.”


Now that the vehicle can rely on what it senses, it needs a map. The Zoox team gathers its map data firsthand by driving around the cities in which it will operate in Toyota Highlanders retrofitted with the full Zoox sensor architecture. Lidar data and visual images collected in this way can be made into high-definition maps by the CLAMS team. But first, all the people, cars, and other ephemeral aspects of the urban landscape must be removed from the lidar data. For this, machine learning is required.

When the Zoox vehicle is in normal urban operations, it is fundamental that its perception system recognize the aspects of incoming lidar data that represent pedestrians, cyclists, cars, or trucks — or indeed anything that may move in ways that need to be anticipated. Lidars create enormous amounts of information about the dynamic 3D environment around the vehicle in the form of “point clouds” — sets of points that describe the objects and surfaces visible to the lidar. Using machine learning to instantly identify people in a fast-moving, dynamic environment is a challenge, particularly as people may be moving, static, partly occluded, in a wheelchair, or visible only from the knees down, among many other possibilities.

A raw lidar point cloud of Caesars Palace in Las Vegas, before it’s turned into an efficient mesh representation for the 3D map.
Zoox

“Machine-learned AI systems excel at this kind of pattern-matching problem. You feed them millions of examples of something, and then they can do a great job of recognizing that thing in the abstract,” Arnicar explains.

In a beautiful piece of synergy, the Zoox mapping team benefits from this safety-critical application of machine learning because they require the reverse information — they want to take the people and cars out of the data so that they can create 3D maps of the road landscape and infrastructure alone.
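
As a rough sketch of that reverse use, suppose the perception system has already assigned a semantic class to every lidar return; map building then simply keeps the static structure. The label set and array layout below are hypothetical:

```python
import numpy as np

# classes treated as ephemeral for map building (hypothetical label set)
DYNAMIC_CLASSES = {"pedestrian", "cyclist", "car", "truck"}

def static_points(points, labels):
    """points: (N, 3) array of lidar returns; labels: length-N sequence of
    per-point class names. Returns only the points on static structure."""
    keep = np.array([label not in DYNAMIC_CLASSES for label in labels])
    return points[keep]
```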

“Once these elements are identified and removed from the mapping data, it becomes possible to combine lidar-based point clouds from overlapping locations to create high-resolution 3D maps,” says Strumm.
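
Here is a minimal sketch of that combining step, using the open-source Open3D library’s point-to-point ICP for illustration rather than Zoox’s own pipeline: each new scan is rigidly aligned to a reference cloud before the two are merged.

```python
import numpy as np
import open3d as o3d  # open-source library, used here purely for illustration

def register_scan(source_xyz, target_xyz, voxel_m=0.2):
    """Estimate the 4x4 rigid transform that aligns one (N, 3) lidar scan
    to another with point-to-point ICP."""
    def to_cloud(xyz):
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(xyz)
        return pcd.voxel_down_sample(voxel_m)   # thin the cloud for speed

    result = o3d.pipelines.registration.registration_icp(
        to_cloud(source_xyz), to_cloud(target_xyz),
        max_correspondence_distance=1.0,        # metres
        init=np.eye(4),
        estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
    )
    return result.transformation
```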

But maps are not useful to the vehicle without meaning. To create a “semantic map,” the ZRN team adds layers of information to the 3D map that encode everything static that the vehicle needs to navigate the road safely, including speed limits, traffic light locations, one-way streets, keep-clear zones, and more.
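
A semantic layer is, in effect, structured annotation on top of the 3D geometry. As a purely hypothetical schema, nothing like the actual ZRN format, a single lane record might look like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LaneRecord:
    """One lane of a hypothetical semantic map layer."""
    lane_id: str
    speed_limit_mps: float                   # posted limit, metres per second
    one_way: bool
    keep_clear: bool = False
    traffic_light_ids: tuple[str, ...] = ()  # lights governing this lane
    centerline_xy: tuple[tuple[float, float], ...] = ()  # map-frame polyline
```

The planner can then query records like this by location, knowing, for instance, that a stretch of road is one-way before committing to a maneuver.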


The final core piece of the CLAMS team’s work is localization. Zoox’s localization technology allows each vehicle to know where it is in the world — and on its map — to within a few centimeters, and its direction to within a fraction of a degree. The vehicle does this not only by comparing its visual inputs with its map, but also by utilizing GPS, accelerometers, wheel speeds, gyroscopes, and more. It can therefore check its precise location and velocity hundreds of times per second. Armed with a combination of the physical and semantic maps, and always aware of its place in relation to every object or person in its vicinity, the vehicle can navigate safely to its destination.
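
The fusion behind those frequent updates is classically done with a Kalman-style filter. Below is a deliberately tiny one-axis sketch, not the Zoox localizer: a constant-velocity model is predicted at a high rate and corrected by whatever measurements arrive, here a GPS position fix and a wheel-speed reading.

```python
import numpy as np

class TinyFilter:
    """1-D constant-velocity Kalman filter over state [position, velocity]."""
    def __init__(self):
        self.x = np.zeros(2)                  # state estimate
        self.P = np.eye(2)                    # state covariance

    def predict(self, dt, q=0.1):
        F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + q * np.eye(2)

    def update(self, z, H, r):
        # z: scalar measurement, H: (2,) selects what was measured, r: variance
        y = z - H @ self.x                    # innovation
        S = H @ self.P @ H + r                # innovation variance
        K = self.P @ H / S                    # Kalman gain
        self.x = self.x + K * y
        self.P = (np.eye(2) - np.outer(K, H)) @ self.P

f = TinyFilter()
f.predict(dt=0.005)                               # ~200 Hz motion prediction
f.update(z=12.34, H=np.array([1.0, 0.0]), r=1.0)  # GPS position fix
f.update(z=8.9, H=np.array([0.0, 1.0]), r=0.04)   # wheel-speed (velocity)
```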

Part of the localization challenge is that any map will become dated over time, Arnicar explains. “Once you build the map — from the moment the data is collected — you need to consider that it could be out of date.” This is because the world can change anytime, anywhere, without notice. “On one occasion one of our Toyota Highlanders was driving down the street collecting data, and right in front of us was a construction truck with a guy hanging off the back, repainting the lane line in a different place as they drove along. No amount of fast mapping can catch up with these sorts of scenarios.” In practice, this means the map needs to be treated as a guidebook for the vehicle, not as gospel.

“This changeability of the real world led us to create the ZRN Monitor, a system on the vehicle that determines whether the actual road environment has differed from our semantic map data,” says Chris Gibson, engineering manager for the Zoox Road Network team. “For example, if lane markings have changed and now the double yellow lines have moved, then if we don’t detect that dynamically, we could potentially end up driving into opposing traffic. From a safety perspective, we must make absolutely certain that the vehicle does not drive into such areas.” The ZRN Monitor’s role is to identify and, to an extent, evaluate the safety implications of such unanticipated environmental modifications. These notifications are also an indication that it may be time to update the map for that area with more recent sensor data.
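
One plausible core of such a check, entirely hypothetical here, is to compare the lane paint the vehicle currently perceives against the mapped polyline and flag a divergence when they disagree by more than a tolerance:

```python
import numpy as np

def lane_marking_diverged(observed_xy, mapped_xy, tol_m=0.3):
    """observed_xy: (N, 2) and mapped_xy: (M, 2) polyline vertices in the map
    frame. Flags divergence when any observed vertex sits farther than tol_m
    from every mapped vertex (a coarse vertex-to-vertex proxy for the true
    point-to-polyline distance)."""
    obs = np.asarray(observed_xy, dtype=float)
    ref = np.asarray(mapped_xy, dtype=float)
    d = np.linalg.norm(obs[:, None, :] - ref[None, :, :], axis=-1).min(axis=1)
    return bool(d.max() > tol_m)
```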

In the uncommon case in which the vehicle encounters a challenging driving situation and isn’t highly confident of a safe way to proceed, it can request “TeleGuidance”: a human operator in a dedicated service center is given the vehicle’s full 3D understanding of its environment, along with live-streamed sensor data.

A Zoox TeleGuidance tactician provides remote guidance to a vehicle from Zoox HQ.
Zoox

“Imagine a construction zone. The Zoox vehicle might need to be directed to drive on the other side of the road, which would normally carry oncoming traffic. That’s a rule that under most circumstances you shouldn’t break, but in this instance, a TeleGuidance tactician might provide the robot with waypoints to ensure it knows where it needs to go in that moment,” says Gibson. The vehicle remains responsible for the safety of its passengers, however, and continues to drive autonomously at all times while acting on the TeleGuidance information.
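
Conceptually, the handoff is narrow: the tactician sends suggested waypoints, and the vehicle retains full authority over how, and whether, to follow them. A hypothetical sketch of such a message:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class GuidanceWaypoint:
    """A remote tactician's suggestion (hypothetical message shape).
    The onboard planner treats it as advice, never as a command, and keeps
    running its own collision and rule checks before acting on it."""
    x_m: float            # suggested position in the map frame, metres
    y_m: float
    max_speed_mps: float  # advisory speed cap while executing the suggestion
```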

Before paying customers can use their smartphones to hail a Zoox vehicle, more on-road testing needs to be done. Zoox has built dozens of its purpose-built vehicles and is testing them on “semi-private courses” in California, according to Zoox’s co-founder and chief technology officer, Jesse Levinson. Next on the agenda is full testing on public roads, says Levinson, who promises that is “really not that far away. We’re not talking about years.”

So, what does it feel like to be transported in a Zoox vehicle?

“I’ve ridden in a Zoox vehicle, with no safety driver, no steering wheel, no anything — just me in the vehicle,” says Arnicar. “And it is magical. It’s what I’ve been working at Zoox seven years to experience. I’ve seen Zoox go from sketches on a napkin to something I can ride in. That’s pretty amazing.”

When an autonomous Zoox vehicle ultimately comes around a corner near you, know this for a fact: no matter how striking and novel it looks, it will see you before you see it.
