This picture is an overhead shot inside an Amazon fulfillment center. Workers can be seen moving amid hundreds of boxes that sit on conveyor belts and carts; in the upper left foreground, a yellow railing extends into the distance.
When faced with the need to evolve Amazon’s supply chain to meet customer needs, a team of scientists, developers, and other professionals worked together to create an inventory planning system that would help Amazon fulfill its delivery promises.
F4D Studios

The evolution of Amazon’s inventory planning system

How Amazon’s scientists developed a first-of-its-kind multi-echelon system for inventory buying and placement.

For every order placed on the Amazon Store, mathematical models developed by Amazon’s Supply Chain Optimization Technologies organization (SCOT) work behind the scenes to ensure that product inventories are best positioned to fulfill the order. 

Forecasting models developed by SCOT predict the demand for every product. Buying systems determine the right level of product to purchase from different suppliers, while large-scale placement systems determine the optimal location for products across the hundreds of facilities belonging to Amazon’s global fulfillment network.

“With hundreds of millions of products sold across multiple geographies, developing automated models to make inventory planning decisions at Amazon scale is one of the most challenging and rewarding parts of our work,” said Deepak Bhatia, vice president of Supply Chain Optimization Technologies at Amazon.

We made the decision to redesign Amazon’s supply chain systems from the ground up.
Deepak Bhatia

In the first half of the past decade, Amazon transitioned from a largely manual supply chain management system to an automated one. However, evolving customer needs, including the introduction of same-day delivery services like Prime Now, led the team to replace that system with a new one that would better help Amazon fulfill the delivery promises made to customers.

“As far back as 2016, we were able to see that the automated system we had at the time wouldn’t help us meet the ever-growing expectations of our customers,” Bhatia recalled. “As a result, we made the decision to redesign Amazon’s supply chain systems from the ground up.”

A global company catering to local needs

“In 2016, Amazon’s supply chain network was designed for scenarios where inventory from any fulfillment center could be shipped to any customer to meet a two-day promise,” said Salal Humair, senior principal research scientist at Amazon who has been with the company for seven years.

This design was inadequate for the new world in which Amazon was operating, one shaped by what Humair calls the “globalization-localization imperative.” Amazon’s expansion included an increasing number of international locations — at the time, the company had 175 fulfillment centers serving customers in 185 countries around the world.

“Meeting the needs of our customer base meant that we needed to serve those customers in multiple geographies,” Humair said.

As Amazon continued to expand internationally, the company also launched one-day and same-day delivery windows in local regions for services like Amazon Prime and Amazon Prime Now.

“We quickly realized that in addition to serving customers around the globe, we also had to pivot from functioning as a national network to a local one, where we could position inventory close to our customers,” Humair said.

A row of five profile photos shows, left to right, Deepak Bhatia, vice president of Supply Chain Optimization Technologies at Amazon; Salal Humair, senior principal research scientist; Alp Muharremoglu, a senior principal scientist; Jeff Maurer, a vice president; and Yan Xia, principal applied scientist.
Left to right, Deepak Bhatia, vice president of Supply Chain Optimization Technologies at Amazon; Salal Humair, senior principal research scientist; Alp Muharremoglu, a senior principal scientist; Jeff Maurer, a vice president in SCOT; and Yan Xia, principal applied scientist, were among those instrumental in migrating Amazon to the multi-echelon system.

In addition to the ‘globalization-localization imperative,’ the growing complexity of Amazon’s supply chain network further complicated matters. To meet the increased customer demand for a diverse variety of shipping speeds, Amazon’s fulfillment network was expanding to include an increasing number of building types and sizes: from fulfillment centers (for everyday products) and non-sortable fulfillment centers (for larger items), to smaller fulfillment centers catering to same-day orders, and distribution centers that supplied products to downstream fulfillment centers. The network was increasingly becoming layered, and fulfillment centers in one layer (or echelon) were acting as suppliers to other layers.
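The layered structure described above can be pictured as a small directed graph. The building names and links below are invented for illustration; the point is simply that a node’s echelon is its depth from the top-level distribution centers.

```python
from collections import deque

# Invented example of a layered fulfillment network: a distribution
# center feeds fulfillment centers, which in turn feed a small
# same-day site closest to customers.
supply_network = {
    "distribution_center_east": ["fc_sortable_1", "fc_nonsortable_1"],
    "fc_sortable_1": ["same_day_site_1"],
    "fc_nonsortable_1": [],
    "same_day_site_1": [],
}

def echelons(network, roots):
    # Breadth-first search from the top of the network: each node's
    # echelon is its depth from the distribution-center layer.
    depths, queue = {}, deque((r, 0) for r in roots)
    while queue:
        node, depth = queue.popleft()
        if node in depths:
            continue
        depths[node] = depth
        for downstream in network[node]:
            queue.append((downstream, depth + 1))
    return depths

layers = echelons(supply_network, roots=["distribution_center_east"])
```

In a network like this, planning decisions at one echelon (how much the distribution center buys) constrain every echelon downstream of it, which is what makes the problem "multi-echelon" rather than a collection of independent single-warehouse problems.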

“We had to reimagine every aspect of our system to account for this increasing number of echelons,” Humair said.

The science behind multi-echelon inventory planning

The sheer scale of Amazon’s operations posed a significant challenge from a scientific perspective. Amazon Store orders are fulfilled through complex dynamic optimization processes, in which a real-time order assignment system can choose to fulfill an order from whichever fulfillment center can best meet the customer promise. This real-time order assignment makes inventory planning an incredibly complex problem to solve.

Other inventory-related dependencies further complicate matters: the same pool of inventory is frequently used to serve demand for orders with different shipping speeds. Consider a box of diapers: it can be used to fulfill an order for a two-day Prime delivery. It can also ease the lives of harried parents who have placed an order on Prime Now and need diapers for their baby delivered in a two-hour window.

Amazon’s scientists also have to contend with a high degree of uncertainty. Customer demand for products cannot be perfectly predicted even with the most advanced machine learning models. In addition, lead times from vendors are subject to natural variation due to manufacturing capacity, transportation times, weather, etc., adding another layer of uncertainty.
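As a toy illustration of how these two uncertainties compound (not Amazon’s actual models): the stock needed to cover demand until a vendor replenishment arrives depends on the distribution of demand over a random lead time, so a planner buys up to a high quantile of that distribution rather than its mean. A Monte Carlo sketch, with invented distributions:

```python
import random

def base_stock_level(mean_daily_demand, typical_lead_time,
                     service_level=0.95, trials=10_000, seed=7):
    # Simulate total demand over an uncertain vendor lead time, then
    # stock to the service-level quantile of that distribution.
    # Distributions are invented for illustration: noisy daily demand,
    # plus or minus two days of lead-time jitter.
    rng = random.Random(seed)
    totals = []
    for _ in range(trials):
        lead_time = rng.randint(typical_lead_time - 2, typical_lead_time + 2)
        demand = sum(max(0.0, rng.gauss(mean_daily_demand,
                                        mean_daily_demand ** 0.5))
                     for _ in range(lead_time))
        totals.append(demand)
    totals.sort()
    return totals[int(service_level * trials)]

# Covering ~100 units/day over a ~7-day lead time at a 95% service
# level requires noticeably more than the 700-unit average demand.
level = base_stock_level(100, 7)
```

Even in this toy version, the buffer above mean demand grows with both sources of variability, which is why better forecasts and tighter lead times both translate directly into less inventory.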

This required building a custom solution, one that relies on sound scientific principles and rigor, and borrowing ideas from academic literature as building blocks, but with ground-breaking in-house invention.
Alp Muharremoglu

Humair notes that the scale of Amazon’s operations, the complexity of the network, and the uncertainties associated with the company’s dynamic ordering system make it impossible to even write down a closed-form objective function for the optimization problem the team was trying to solve.

While multi-echelon inventory optimization is a well-researched field, the bulk of the literature focused on single-product models, proposed solutions for much simpler networks, or used greatly simplified assumptions about inventory replenishment.

“There is a large body of academic literature on multi-echelon inventory management, and papers typically focus on one or two main aspects of the problem,” noted Alp Muharremoglu, a senior principal scientist in SCOT who spent 15 years as a faculty member at Columbia University and the University of Texas at Dallas. “Amazon’s scale and complexity meant no existing solution was a perfect fit. This required building a custom solution, one that relies on sound scientific principles and rigor, and borrowing ideas from academic literature as building blocks, but with ground-breaking in-house invention to push the boundaries of academic research. It is a thrill to see multi-echelon inventory theory truly in action in such a large scale and dynamic supply chain.”

As a result, the system developed by SCOT (a project whose roots stretch back to 2016) is a significant break from the past. At its heart is a multi-product, multi-fulfillment-center, capacity-constrained model that optimizes inventory levels for multiple delivery speeds under a dynamic fulfillment policy. The system then uses a Lagrangian-type decomposition to control and optimize inventory levels across Amazon’s network in near real-time.

Broadly speaking, decomposition is a mathematical technique that breaks a large, complex problem into smaller, simpler ones, which are then solved in parallel or sequentially. The Lagrangian variant removes complicated constraints from the problem and instead charges a ‘cost’ for violating them. The relaxed problem is easier to solve and yields an upper bound on the original maximization problem, which is critical when planning for inventory levels at Amazon’s scale.

“We computed opportunity costs for storage and flows at every fulfillment center,” Humair said. “Using Lagrangian decomposition, we then used these costs to calculate the related inventory positions at these locations. Crucially, we incorporated a stochastic dynamic fulfillment policy in a scalable optimization model, allowing Amazon to calculate inventory levels not just at one location, but at every layer in our fulfillment network.”
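The mechanics of pricing a shared constraint can be sketched in miniature. The example below is invented for illustration and is nothing like SCOT’s production system: three products share one storage capacity, each earns a concave “profit” in its stock level, and a single Lagrange multiplier plays the role of the opportunity cost of storage. Relaxing the capacity constraint at that price decomposes the problem into one tiny subproblem per product, and a subgradient update steers the price toward a level where capacity is respected.

```python
# Toy Lagrangian decomposition for a shared storage capacity.
# All names, margins, and capacities are invented for illustration.

def best_stock(margin, max_units, storage_price):
    # With the shared constraint relaxed, each product independently
    # picks the stock level maximizing profit minus the storage charge.
    best_q, best_val = 0, 0.0
    for q in range(max_units + 1):
        val = margin * q ** 0.5 - storage_price * q  # concave profit
        if val > best_val:
            best_q, best_val = q, val
    return best_q

def lagrangian_plan(products, capacity, steps=300):
    lam, plan = 0.0, {}
    for k in range(steps):
        # Decomposed step: solve one small subproblem per product.
        plan = {name: best_stock(m, cap, lam)
                for name, (m, cap) in products.items()}
        used = sum(plan.values())
        # Subgradient update with a diminishing step: raise the price
        # of storage when over capacity, lower it when capacity is slack.
        lam = max(0.0, lam + (used - capacity) / (k + 1))
    return lam, plan

products = {"diapers": (10.0, 50), "books": (6.0, 50), "toys": (8.0, 50)}
lam, plan = lagrangian_plan(products, capacity=40)
```

Because each subproblem only sees its own margin and the shared price, higher-margin products end up claiming more of the scarce capacity, which is exactly the coordinating role the opportunity costs play in the quote above, just without the stochastic demand, multiple echelons, and dynamic fulfillment policy of the real system.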

Mobilizing the organization

While creating the new multi-echelon system was an imposing scientific challenge, it also represented a significant organizational accomplishment, one that required collaboration across multiple teams.

“Moving multi-echelon from concept to implementation was one of the most difficult organizational challenges we’ve worked through; we had many potential implementations that looked radically different in terms of model capabilities, interfaces, engineering challenges, and long-term implications for how our teams would interact with each other,” said Jeff Maurer, a SCOT vice president who has been instrumental in rolling out the automation of Amazon’s supply chain and oversaw the roll out of the multi-echelon system.

“This was also a case where there wasn’t a great way to decide between them without building and exploring one or more approaches in production. Ultimately, that’s what we did — we picked the best options we could identify, built them out, learned from them, then repeated that process. We learned things by experimenting with real production implementations that we could never have learned from simplified models or simulations alone, given the complexity of the real-world dynamics of our supply chain. But it was hard on the teams — it wasn’t always obvious that the systems the teams were iterating on were the best path, given the high directional ambiguity.”

Packages moving through a fulfillment center

“Sometimes, the only way to make a massive change is to realize that you have no option but to make that change,” said Yan Xia, principal applied scientist at Amazon. Humair noted that Xia played “a pivotal role” over the four years it took the company to migrate to the new multi-echelon system.

Xia recalled that teams within SCOT were keenly aware of the limitations of the existing system. However, there was skepticism that the multi-echelon system was the right solution.

“The skepticism was understandable,” Xia said. “It’s one thing to have a big idea. But you also have to be able to present the benefits of your idea in a coherent way.”

Xia gave an example of how he helped convince members from the buying and placement teams about the benefits of the new model.

“One team decides optimal suppliers to source products from, while another team makes decisions on where these products should be placed,” Xia explained. “I was able to show them how the two functions would essentially be unified in the multi-echelon system. Sure, it would change how they worked on a day-to-day basis — but it would do so in a way that made their lives simpler.”

To help ensure that resources were made available for the development of the multi-echelon system, Xia also focused on driving alignment among leaders in SCOT. He developed a simulation based on real-world data. The results clearly demonstrated that the proposed solution for inventory forecasting, buying, and placement would result in a steep decline in shipping costs, which in turn would allow Amazon to keep prices lower for customers.

Teams involved in multi-echelon planning discussions were galvanized after seeing the results of the simulation.

“Everyone bought into the vision,” Xia said. “We began to collaborate in near real-time. If we ran into a problem, we didn’t wait around for a weekly sprint meeting. We just got together in a room, or stood next to a whiteboard and solved it.”

Xia said that this was also when things began to get more complex. 

“An awareness of the complexity of the existing setup began to dawn on us,” said Xia. “We began to realize how every component in the system had multiple dependencies. For example, the buying platforms were tightly integrated with older legacy systems, and we now had to factor these dependencies into our solutions.”

Solving a multi-item, multi-echelon problem with stochastic demand and lead times, aggregated capacity constraints, and differentiated customer service levels. That sort of thing is just unheard of in academia and industry.
Deepak Bhatia

The team iterated on the multi-echelon solution in a sequence of three in-production experiments (or labs) that spanned 2018 to 2020. The first lab incorporated components of the new system coupled with the old platform. It was a resounding success in terms of reducing costs, even while fulfilling orders associated with higher shipping speeds. The team moved on to testing the subsequent version of the multi-echelon system in the second lab. 

“That wasn’t nearly as good,” Xia recalled. “Most things didn’t work as expected.”

However, the team was encouraged by leadership to keep going. This wasn’t SCOT’s first attempt at taking on big, ambitious projects: the organization had spent three years deploying the first automated supply chain management system, overcoming numerous challenges along the way.

“Sure, the failure of the second lab was demotivating,” Xia said. “But we knew from experience that this failure was only to be expected. It was part of the process.”

The team fixed the bugs and moved on to testing new features in the third lab. These included critical system capabilities, such as the ability to model order cut-off times for deliveries within a particular time window.

The system went live in 2020, and in the year since, the multi-echelon system has had a large and statistically significant impact on positioning products closer to customers.

“On a personal level, I am incredibly proud of our team. Having worked in the area of multi-echelon inventory optimization before I joined Amazon, I have a deep appreciation of how difficult it was,” Bhatia noted. “There is a strong sense of pride for the work the team is doing — such as solving a multi-item, multi-echelon problem with stochastic demand and lead times, aggregated capacity constraints, and differentiated customer service levels. That sort of thing is just unheard of in academia and industry. This is why I find it gratifying to work as a scientist and a leader at Amazon. It gives me a lot of pride, and none of this could have been achieved without the people and the culture we have.”

Related content

IL, Tel Aviv
Come join the AWS Agentic AI science team in building the next generation models for intelligent automation. AWS, the world-leading provider of cloud services, has fostered the creation and growth of countless new businesses, and is a positive force for good. Our customers bring problems that will give Applied Scientists like you endless opportunities to see your research have a positive and immediate impact in the world. You will have the opportunity to partner with technology and business teams to solve real-world problems, have access to virtually endless data and computational resources, and to world-class engineers and developers that can help bring your ideas into the world. As part of the team, we expect that you will develop innovative solutions to hard problems, and publish your findings at peer reviewed conferences and workshops. We are looking for world class researchers with experience in one or more of the following areas - autonomous agents, API orchestration, Planning, large multimodal models (especially vision-language models), reinforcement learning (RL) and sequential decision making.
IL, Tel Aviv
Are you a Masters or PhD student interested in a 2026 Internship in Data Science? If so, we want to hear from you! We are looking for a customer obsessed Data Scientist Intern who can innovate in a business environment and is comfortable owning data to drive step-change innovation in the EMEA region or worldwide. If this describes you, come and join our Data Science teams at Amazon for an exciting internship opportunity. If you are insatiably curious and always want to learn more, then you’ve come to the right place. You can find more information about the Amazon Science community as well as our interview process via the links below; https://www.amazon.science/ https://amazon.jobs/content/en/career-programs/university/science Key job responsibilities As a Data Science Intern, you will have the following key job responsibilities: • Work closely with scientists and engineers to develop new algorithms to implement scientific solutions for Amazon problems • Design, run, and analyze A/B tests • Work on an interdisciplinary team on customer-obsessed research • Experience Amazon's customer-focused culture • Create and deliver projects that can be quickly applied starting locally and scaled to EMEA/worldwide • Create and share data with audiences of varying levels technical papers and presentations • Define metrics and design algorithms to estimate customer satisfaction and engagement A day in the life At Amazon, you will grow into the high impact person you know you’re ready to be. Every day will be filled with developing new skills and achieving personal growth. How often can you say that your work changes the world? At Amazon, you’ll say it often. Join us and define tomorrow. 
Some more benefits of an Amazon Science internship include; • All of our internships offer a competitive stipend/salary • Interns are paired with an experienced manager and mentor(s) • Interns receive invitations to different events such as intern program initiatives or site events • Interns can build their professional and personal network with other Amazon Scientists • Interns can potentially publish work at top tier conferences each year About the team Applicants will be reviewed on a rolling basis and are assigned to teams aligned with their research interests and experience prior to interviews. Start dates are available throughout the year and durations can vary in length from 3-6 months for full time internships or 6-12 months for part time internships. Please note these are not remote internships.
IN, KA, Bengaluru
Alexa+ is the world’s best Generative AI powered personal assistant / agent for consumers. We are seeking an Applied Scientist to join our newly expanding team in India focused on Alexa Conversational Ads and Personalization. In this role, you will build machine learning models that seamlessly and naturally integrate relevant advertising into the Alexa experience while deeply personalizing user interactions. You will work closely with other scientists, engineers, and product managers to take models from conception to production. Key job responsibilities Design, develop, and evaluate innovative deep learning and GenAI models for natural language processing (NLP), recommendation systems, and personalization. Conduct hands-on data analysis and build scalable ML pipelines. Design and run A/B experiments to measure the impact of new models on customer experience and ad performance. Collaborate with software development engineers to deploy models into high-scale, real-time production environments. About the team We are building a new science team in Bangalore to solve some of the most impactful problems in computational advertising. This isn't about tweaking existing models as we are rethinking how ads are ranked, priced, and personalized across voice-first and screen-first surfaces. These are problems that don't have textbook solutions. Key points to note about the team: 🧪 Greenfield team - you are not joining a mature org with rigid processes. You will shape the science roadmap, pick the problems, and define the culture from day one. 📈 Direct business impact — your models directly drive revenue. No yearly cycles to see if your work matters. 🌏 Global scope, local autonomy — collaborate with scientists and engineers across Seattle, Sunnyvale, and Bangalore, but own your problem space end-to-end. 🎓 Ship AND Publish: We encourage top-tier publications (NeurIPS, ACL, EMNLP, KDD, ICML, WWW) while ensuring your research hits production.
IN, KA, Bengaluru
Alexa+ is the world’s best Generative AI powered personal assistant / agent for consumers. We are seeking an Applied Scientist to join our newly expanding team in India focused on Alexa Conversational Ads and Personalization. In this role, you will build machine learning models that seamlessly and naturally integrate relevant advertising into the Alexa experience while deeply personalizing user interactions. You will work closely with other scientists, engineers, and product managers to take models from conception to production. Key job responsibilities Design, develop, and evaluate innovative deep learning and GenAI models for natural language processing (NLP), recommendation systems, and personalization. Conduct hands-on data analysis and build scalable ML pipelines. Design and run A/B experiments to measure the impact of new models on customer experience and ad performance. Collaborate with software development engineers to deploy models into high-scale, real-time production environments. About the team We are building a new science team in Bangalore to solve some of the most impactful problems in computational advertising. This isn't about tweaking existing models as we are rethinking how ads are ranked, priced, and personalized across voice-first and screen-first surfaces. These are problems that don't have textbook solutions. Key points to note about the team: 🧪 Greenfield team - you are not joining a mature org with rigid processes. You will shape the science roadmap, pick the problems, and define the culture from day one. 📈 Direct business impact — your models directly drive revenue. No yearly cycles to see if your work matters. 🌏 Global scope, local autonomy — collaborate with scientists and engineers across Seattle, Sunnyvale, and Bangalore, but own your problem space end-to-end. 🎓 Ship AND Publish: We encourage top-tier publications (NeurIPS, ACL, EMNLP, KDD, ICML, WWW) while ensuring your research hits production.
IN, KA, Bengaluru
Alexa+ is the world’s best Generative AI powered personal assistant / agent for consumers. We are seeking an Applied Scientist to join our newly expanding team in India focused on Alexa Conversational Ads and Personalization. In this role, you will build machine learning models that seamlessly and naturally integrate relevant advertising into the Alexa experience while deeply personalizing user interactions. You will work closely with other scientists, engineers, and product managers to take models from conception to production. Key job responsibilities Design, develop, and evaluate innovative deep learning and GenAI models for natural language processing (NLP), recommendation systems, and personalization. Conduct hands-on data analysis and build scalable ML pipelines. Design and run A/B experiments to measure the impact of new models on customer experience and ad performance. Collaborate with software development engineers to deploy models into high-scale, real-time production environments. About the team We are building a new science team in Bangalore to solve some of the most impactful problems in computational advertising. This isn't about tweaking existing models as we are rethinking how ads are ranked, priced, and personalized across voice-first and screen-first surfaces. These are problems that don't have textbook solutions. Key points to note about the team: 🧪 Greenfield team - you are not joining a mature org with rigid processes. You will shape the science roadmap, pick the problems, and define the culture from day one. 📈 Direct business impact — your models directly drive revenue. No yearly cycles to see if your work matters. 🌏 Global scope, local autonomy — collaborate with scientists and engineers across Seattle, Sunnyvale, and Bangalore, but own your problem space end-to-end. 🎓 Ship AND Publish: We encourage top-tier publications (NeurIPS, ACL, EMNLP, KDD, ICML, WWW) while ensuring your research hits production.
IN, KA, Bengaluru
Alexa+ is the world’s best Generative AI powered personal assistant / agent for consumers. We are seeking an Applied Scientist to join our newly expanding team in India focused on Alexa Conversational Ads and Personalization. In this role, you will build machine learning models that seamlessly and naturally integrate relevant advertising into the Alexa experience while deeply personalizing user interactions. You will work closely with other scientists, engineers, and product managers to take models from conception to production. Key job responsibilities - Design, develop, and evaluate innovative machine learning and deep learning models for natural language processing (NLP), recommendation systems, and personalization. - Conduct hands-on data analysis and build scalable ML pipelines. - Design and run A/B experiments to measure the impact of new models on customer experience and ad performance. - Collaborate with software development engineers to deploy models into high-scale, real-time production environments.
US, CA, San Francisco
Join Amazon's Frontier AI & Robotics team as a Member of Technical Staff, this Technical Program Manager will become the driving force behind breakthrough robotics innovation. You'll orchestrate complex, cross-functional programs that bridge AI research, software, hardware, and production deployment—managing the technical workstreams that enable robots to see, reason, and act in Amazon's warehouse environments. Your program leadership will directly accelerate our mission to build the next generation of embodied intelligence. Key job responsibilities · Establish and drive program management mechanisms and cadence for complex robotics and AI development initiatives spanning research, software engineering, hardware, and operations · Manage end-to-end program execution across the full robotics stack—including AI models, software engineering, and hardware deployment · Drive decision-making velocity by facilitating tradeoff discussions when there are conflicting priorities; determine whether decisions are one-way or two-way doors · Own program-level risk management, proactively identifying technical, schedule, and resource risks; escalate where necessary and drive mitigation strategies · Manage dependencies and scope changes across internal teams and partner organizations, ensuring alignment on commitments, timelines, and technical requirements · Create transparency through clear RACI frameworks, program dashboards, and communication mechanisms that keep stakeholders aligned on status, risks, and decisions · Exercise strong technical judgment to influence program-level decisions on deployment methodology, scalability requirements, and technical feasibility—acting as the voice back to research and engineering teams · Build sustainable program management processes that scale as our organization grows, adapting agile frameworks to the unique challenges of AI robotics A day in the life Your focus centers on driving velocity and alignment across our robotics programs. 
You might start your morning facilitating tradeoff decisions between AI researchers and software engineers on a critical prototype milestone, then transition to managing dependencies across hardware and operations teams to keep timelines on track. In the afternoon, you could be conducting risk assessments on supply chain constraints that impact our development roadmap, updating program dashboards to provide leadership visibility, or working with partner teams to align on deployment strategies. You'll establish the mechanisms and cadence that keep our fast-moving organization synchronized—from sprint planning rituals to cross-functional design reviews. Throughout the day, you balance hands-on program execution with strategic escalation, ensuring technical decisions align with our long-term vision while removing obstacles that slow teams down. You're the connective tissue that enables researchers, engineers, and operations specialists to move fast together. About the team At Frontier AI & Robotics, we're not just advancing robotics – we're reimagining it from the ground up. Our team is building the future of intelligent robotics through frontier foundation models and end-to-end learned systems. We tackle some of the most challenging problems in AI and robotics, from developing sophisticated perception systems to creating adaptive manipulation strategies that work in complex, real-world scenarios. What sets us apart is our unique combination of ambitious research vision and practical impact. We leverage Amazon's computational infrastructure and rich real-world datasets to train and deploy state-of-the-art foundation models. Our work spans the full spectrum of robotics intelligence – from multimodal perception using images, videos, and sensor data, to sophisticated manipulation strategies that can handle diverse real-world scenarios. We're building systems that don't just work in the lab, but scale to meet the demands of Amazon's global operations. 
Join us if you're excited about pushing the boundaries of what's possible in robotics, working with world-class researchers, and seeing your innovations deployed at unprecedented scale.
US, CA, San Francisco