How does Astro localize itself in an ever-changing home?

Deep learning to produce invariant representations, estimates of sensor reliability, and efficient map representations all contribute to Astro’s superior spatial intelligence.

We as humans take for granted our ability to operate in ever-changing home environments. Every morning, we can get from our bed to the kitchen, even after the furniture has been rearranged, chairs have been moved, or family members have left their shoes and bags in the middle of the hallway.


This is because humans develop a deep contextual understanding of their environments that is invariant to a variety of changes. That understanding is enabled by superior sensors (eyes, ears, and touch), a powerful computer (the brain), and vast memory. However, for a robot that has finite sensors, computational power, and memory, dealing with a challenging dynamic environment requires innovative new algorithms and representations.

At Amazon, scientists and engineers have been investigating ways to help Astro know where it is at all times in a customer's home with few to no assumptions about the environment. Astro’s Intelligent Motion system relies on visual simultaneous localization and mapping, or V-SLAM, which enables a robot to use visual data to simultaneously construct a map of its environment and determine its position on that map.

A high-level overview of a V-SLAM system.

A V-SLAM system typically consists of a visual odometry tracker, a nonlinear optimizer, a loop-closure detector, and mapping components. The front end of Astro’s system performs visual odometry by extracting visual features from sensor data, establishing correspondences between features from different sensor feeds, and tracking the features from frame to frame in order to estimate sensor movement.
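To make the correspondence step concrete, here is a minimal, illustrative sketch of nearest-neighbor feature matching with a ratio test to reject ambiguous matches. It is not Astro’s actual implementation: the toy float-tuple descriptors stand in for the learned or binary descriptors (e.g., ORB) a real front end would extract from camera frames.

```python
# Illustrative sketch of the matching step in a visual-odometry front end.
# Descriptors here are plain float tuples; a real system would match
# high-dimensional descriptors extracted from sensor data.

def l2(a, b):
    """Euclidean distance between two descriptors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def match_features(desc_prev, desc_curr, ratio=0.8):
    """Match each descriptor in the previous frame to its nearest
    neighbor in the current frame, keeping a match only when the
    nearest neighbor is clearly better than the second nearest
    (Lowe's ratio test), which rejects ambiguous correspondences."""
    matches = []
    for i, d in enumerate(desc_prev):
        dists = sorted((l2(d, c), j) for j, c in enumerate(desc_curr))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

The resulting correspondences feed the motion estimate: given enough matched features between two frames, the relative sensor pose can be recovered geometrically.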

Loop-closure detection tries to match the features in the current frame with those previously seen to correct for accumulated inaccuracies in visual odometry. Astro then processes the visual features, estimated sensor poses, and loop-closure information and optimizes them jointly to obtain a global motion trajectory and map.
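The effect of a loop closure can be illustrated with a toy correction scheme: once the detector reveals how much drift has accumulated at the revisited place, spread the correction along the trajectory. A real system solves a full nonlinear pose-graph optimization instead; this linear-interpolation sketch is only a stand-in for that step.

```python
def distribute_drift(poses, loop_error):
    """Toy drift correction after a loop closure.

    `poses` is the odometry trajectory as (x, y) tuples; `loop_error`
    is the (dx, dy) discrepancy discovered at the final pose when the
    robot recognizes a previously seen place. The correction is spread
    linearly along the trajectory, so early poses move little and the
    final pose is pulled fully back onto the recognized location.
    """
    ex, ey = loop_error
    n = len(poses) - 1
    corrected = []
    for i, (x, y) in enumerate(poses):
        t = i / n  # fraction of the trajectory traversed
        corrected.append((x - ex * t, y - ey * t))
    return corrected
```

After the correction, the trajectory closes the loop: the final pose coincides with the recognized place, and the accumulated odometry error no longer distorts the map.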

State-of-the-art research on V-SLAM assumes that the robot’s environment is mostly static and rarely changes. But those assumptions can’t be expected to hold in customers’ homes.

Visual odometry and loop closure
An example from a mock home environment, which demonstrates how Astro connects visual features captured by two sensors (red lines) and at different times (green lines). The actual data is discarded after the salient features (yellow circles) are extracted.

For Astro to localize robustly in home environments, we had to overcome a number of challenges, which we discuss in the following sections.

Environmental dynamics

Changes in the home happen at varying time scales: short-term changes, such as the presence of pets and people; medium-term changes, such as the appearance of objects like boxes, bags, or chairs that have been moved around; and long-term changes, such as holiday decorations, large-furniture rearrangements, or even structural changes to walls during renovations.

In addition, the lighting inside homes changes constantly as the sun moves and indoor lights are turned on and off, shading and illuminating rooms and furniture in ways that can make the same scene look very different at different times. Astro must be able to operate across all lighting conditions, including total darkness.

Two sets of inputs from Astro's perspective, showing how similarities between two different places in the home can lead to perceptual aliasing. Images have been adjusted for clarity.
In this sample input from a simulated home environment, Astro's perspective on the same room at two different times demonstrates how dramatically lighting conditions can vary. Images have been adjusted for clarity.

While industrial robots can function in controlled environments whose variations are precoded as rules in software programs, adapting to unscripted environmental changes is one of the fundamental challenges the Astro team had to solve. The Intelligent Motion system needs a high-level visual understanding of its environment, such that invariant visual cues can be extracted and described programmatically.


Astro uses deep-learning algorithms trained with millions of image pairs, both captured and synthesized, that depict similar scenes at different times of day. Those images mimic a variety of possible scenarios Astro may face in a real customer’s home, such as different scene layouts, lighting and perspective changes, occlusions, object movements, and decorations.
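The article doesn’t specify the training objective, but a common choice for learning representations that are invariant across such image pairs is a contrastive loss: pull embeddings of the same place (under different lighting, viewpoint, or clutter) together, and push embeddings of different places apart. A minimal sketch, assuming a standard margin-based formulation over embedding distances:

```python
def contrastive_loss(d, same_place, margin=1.0):
    """Margin-based contrastive loss on an embedding distance d.

    For a pair of images depicting the SAME place (e.g., the same
    room at different times of day), the loss shrinks the distance.
    For a pair depicting DIFFERENT places, the loss pushes the
    distance out to at least `margin`; beyond that, the pair
    contributes nothing. This is an illustrative formulation, not
    Astro's actual training objective.
    """
    if same_place:
        return d ** 2
    return max(0.0, margin - d) ** 2
```

Trained at scale on millions of such pairs, an embedding network learns to describe a scene by cues that survive lighting shifts and object movements, which is exactly the invariance the localization front end needs.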

Astro’s algorithms also enable it to adapt to an environment that it has never seen before (like a new customer’s home). The development of those algorithms required a highly accurate and scalable ground-truth mechanism that can be conveniently deployed to homes and allows the team to test and improve the robustness of the V-SLAM system.

In the figure below, for instance, a floor plan of the home was acquired ahead of time, and device motion was then estimated from sensor data at centimeter-level accuracy.

A sample visualization of Astro’s ground truth system.

Using sensor fusion to improve localization

In order to improve the accuracy and robustness of localization, Astro fuses data from its navigation sensors with that of wheel encoders and an inertial measurement unit (IMU), which uses gyroscopes and accelerometers to gauge motion. Each of these sensors has limitations that can affect Astro's ability to localize, and to determine which sensors can be trusted at a given time, it is important to understand their noise characteristics and failure modes.


For example, when Astro drives over a threshold, the IMU sensor can saturate and give an erroneous reading. Or if Astro drives over a flooring surface where its wheels slip, its wheel encoders can give an inaccurate reading. Visual factors such as illumination and motion blur can also impact sensor readings.

The Astro team also had to account for a variety of use cases that would predictably cause sensor errors. For example, the team had to ensure that when Astro is lifted off the floor, the wheel encoder data is handled appropriately, and when the device enters low-power mode, certain sensor data is not processed.
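One standard way to combine redundant readings while honoring each sensor’s noise characteristics is inverse-variance (precision-weighted) fusion, with currently unreliable sensors gated out entirely. The sketch below is illustrative, not Astro’s actual fusion pipeline; the 1-D velocity estimates and the `None`-variance convention for a flagged sensor are assumptions for the example.

```python
def fuse(estimates):
    """Inverse-variance fusion of redundant 1-D estimates.

    Each entry is (value, variance): a noisier sensor (larger
    variance) gets proportionally less weight. A sensor flagged as
    untrustworthy at this instant -- e.g., slipping wheel encoders or
    a saturated IMU -- is marked with variance=None and excluded.
    """
    num = den = 0.0
    for value, var in estimates:
        if var is None:  # gated out: failure mode detected
            continue
        w = 1.0 / var
        num += w * value
        den += w
    return num / den
```

The gating step matters as much as the weighting: when Astro is lifted off the floor, for instance, the wheel-encoder stream is excluded outright rather than merely down-weighted.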

A simplified overview of Astro’s SLAM system.

Computational and memory limitations

Astro has finite onboard computational capacity and memory, which must be shared among several critical systems. The Astro team developed a nonlinear-optimization technique for “bundle adjustment”, the simultaneous refinement of the scene’s 3-D coordinates, the estimate of the robot’s relative motion, and the camera’s optical characteristics. The technique is computationally efficient enough to generate six-degree-of-freedom pose information multiple times per second.
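The quantity bundle adjustment minimizes is the total squared reprojection error: the gap between where a 3-D point projects under the current camera estimate and where it was actually observed in the image. A toy residual for a translation-only pinhole camera (a deliberate simplification; a real optimizer also refines rotation and lens intrinsics, and runs over thousands of points and many frames):

```python
def reproject(point3d, pose, f=1.0):
    """Project a 3-D point into a camera at translation-only pose
    (tx, ty, tz) using a pinhole model with focal length f."""
    tx, ty, tz = pose
    x, y, z = point3d[0] - tx, point3d[1] - ty, point3d[2] - tz
    return (f * x / z, f * y / z)

def reprojection_error(point3d, pose, observed, f=1.0):
    """Squared reprojection residual for one observation. Bundle
    adjustment jointly tweaks the points and the poses to minimize
    the sum of these residuals over all observations."""
    u, v = reproject(point3d, pose, f)
    du, dv = u - observed[0], v - observed[1]
    return du * du + dv * dv
```

Because each point is seen from only a few poses, the resulting optimization problem is extremely sparse, which is what makes real-time bundle adjustment tractable on constrained hardware.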

Because Astro’s map of the home is constantly updated to accommodate changes in the environment, its memory footprint steadily grows, necessitating compression and pruning techniques that preserve the map’s utility while staying within on-device memory limits.
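One simple pruning strategy, given here only as an illustration of the idea rather than Astro’s actual technique, is to keep a keyframe only when it adds spatial coverage that the keyframes already kept don’t provide:

```python
def prune_keyframes(keyframes, min_dist=0.5):
    """Greedy keyframe pruning on (x, y) positions.

    A keyframe is kept only if it lies at least `min_dist` away from
    every keyframe already kept. This bounds the map's memory
    footprint while preserving coverage of the space; redundant
    keyframes captured at nearly the same spot are dropped.
    """
    kept = []
    for kf in keyframes:
        if all((kf[0] - k[0]) ** 2 + (kf[1] - k[1]) ** 2 >= min_dist ** 2
               for k in kept):
            kept.append(kf)
    return kept
```

Production systems typically weigh more than distance, such as how much unique visual information a keyframe contributes, but the goal is the same: a map that keeps growing in knowledge without growing unboundedly in size.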


To that end, the Astro team designed a long-term-mapping system with multiple layers of contextual knowledge, from higher-level understanding — such as which rooms Astro can visit — to lower-level understanding — such as differentiating the appearance of objects lying on the floor. This multilayer approach helps Astro efficiently recognize any major changes to its operating environment while being robust enough to disregard minor changes.

All these updates happen on-device, without any cloud processing. A constantly updated representation of the customer’s home allows Astro to robustly and effectively localize itself over months.

In creating this new category of home robot, the Astro team used deep learning and built on state-of-the-art computational-geometry techniques to give Astro spatial intelligence far beyond that of simpler home robots. The Astro team will continue innovating to ensure that Astro learns new ways to adapt to more homes, helping customers save time in their busy lives.
