Amazon builds first foundation model for multirobot coordination

Trained on millions of hours of data from Amazon fulfillment centers and sortation centers, Amazon’s new DeepFleet models predict future traffic patterns for fleets of mobile robots.

Large language models and other foundation models have introduced a new paradigm in AI: large models trained in a self-supervised fashion — no data annotation required — on huge volumes of data can learn general competencies that allow them to perform a variety of tasks. The most prominent examples of this paradigm are in language, image, and video generation. But where else can it be applied?

At Amazon, one answer to that question is in managing fleets of robots. In June, we announced the development of a new foundation model for predicting the interactions of mobile robots on the floors of Amazon fulfillment centers (FCs) and sortation centers, which we call DeepFleet. We still have a lot to figure out, but DeepFleet can already help assign tasks to our robots and route them around potential congestion, increasing the efficiency of our robot deployments by 10%. That lets us deliver packages to customers more rapidly and at lower costs.

Robots laden with storage pods at a fulfillment center (left) and with packages at a sortation center (right).

One question I get a lot is why we would need a foundation model to predict robots’ locations. After all, we know exactly what algorithms the robots are running; can’t we just simulate their interactions and get an answer that way?

There are two obstacles to this approach. First, accurately simulating the interactions of a couple thousand robots faster than real time is prohibitively resource intensive: our fleet already uses all available computation time to optimize its plans. In contrast, a learned model can quickly infer how traffic will likely play out.

Second, we see predicting robot locations as, really, a pretraining task, which we use to teach an AI to understand traffic flow. We believe that, just as pretraining on next-word prediction enabled chatbots to answer a diverse range of questions, pretraining on location prediction can enable an AI to generate general solutions for mobile-robot fleets.


The success of a foundation model depends on having adequate training data, which is one of the areas where Amazon has an advantage. At the same time that we announced DeepFleet, we also announced the deployment of our millionth robot to Amazon FCs and sortation centers. We have literally billions of hours of robot navigation data that we can use to train our foundation models.

And of course, Amazon is also the largest provider of cloud computing resources, so we have the computational capacity to train and deploy models large enough to benefit from all that training data. One of our paper’s key findings is that, like other foundation models, a robot fleet foundation model continues to improve as the volume of training data increases.

In some ways, it’s natural to adapt LLM architectures to the problem of predicting robot location. An LLM takes in a sequence of words and projects that sequence forward, one word at a time. Similarly, a robot navigation model would take in a sequence of robot states or floor states and project it forward, one state at a time.

In other ways, the adaptation isn’t so straightforward. With LLMs, it’s clear what the inputs and outputs should be: words (or, more precisely, word parts, or tokens). But what about robot navigation? Should the input to the model be the state of a single robot, with a map of the whole floor produced by aggregating the outputs of many model instances? Or should the inputs and outputs encompass the state of the whole floor? And if they do, how do you represent the floor? As a set of features relative to the robot’s location? As an image? As a graph? And how do you handle time? Is each input to the model a snapshot taken at a regular interval? Or does each input represent a discrete action, whenever it took place?

We experimented with four distinct models that answer these questions in different ways. The basic setup is the same for all of them: we model the floor of an FC or sortation center as a grid whose cells can be occupied by robots, each of which has a fixed orientation and is either laden (with a storage pod in an FC or a package in a sortation center) or unladen; by obstacles; or by storage or drop-off locations. Unoccupied cells make up travel lanes.

Sample models of a fulfillment center (top) and a sortation center (bottom).
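
To make the setup concrete, here is a minimal sketch of how such a floor grid might be represented in code. The class and field names (CellType, RobotState, encode_floor) and the choice of feature planes are illustrative assumptions, not the representation DeepFleet actually uses.

```python
# Illustrative sketch only: a minimal floor-grid encoding with made-up names
# and feature choices, not DeepFleet's internal representation.
from dataclasses import dataclass
from enum import IntEnum

import numpy as np


class CellType(IntEnum):
    TRAVEL_LANE = 0   # unoccupied; robots may drive through
    OBSTACLE = 1      # fixed obstruction
    STORAGE = 2       # storage location (e.g., a pod's home cell)
    DROP_OFF = 3      # drop-off location (e.g., a chute)


@dataclass
class RobotState:
    row: int
    col: int
    heading: int      # 0-3: N, E, S, W
    laden: bool       # carrying a storage pod (FC) or a package (sortation center)


def encode_floor(grid: np.ndarray, robots: list[RobotState]) -> np.ndarray:
    """Stack static cell types and dynamic robot occupancy into feature planes."""
    h, w = grid.shape
    planes = np.zeros((3, h, w), dtype=np.float32)
    planes[0] = grid                        # static layout channel
    for r in robots:                        # dynamic channels
        planes[1, r.row, r.col] = 1.0       # occupancy
        planes[2, r.row, r.col] = 1.0 if r.laden else 0.5
    return planes
```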

Like most machine learning systems of the past 10 years, our models produce embeddings of input data, or vector representations that capture data features useful for predictive tasks. All of our models make use of the Transformer architecture that is the basis of today’s LLMs. The Transformer’s characteristic feature is the attention mechanism: when determining its next output, the model determines how much it should attend to each data item it’s already seen — or to supplementary data. One of our models also uses a convolutional neural network, the standard model for image processing, while another uses a graph neural network to capture spatial relationships.
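
For readers unfamiliar with attention, the sketch below shows generic scaled dot-product attention, the basic operation the Transformer builds on. This is textbook code rather than anything specific to DeepFleet.

```python
# Generic scaled dot-product attention (the Transformer building block referred
# to above); textbook code, not DeepFleet's implementation.
import torch
import torch.nn.functional as F


def attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """q, k, v: (batch, seq_len, dim). Returns attention-weighted values."""
    scores = q @ k.transpose(-2, -1) / (q.shape[-1] ** 0.5)  # similarity of each query to each key
    weights = F.softmax(scores, dim=-1)                      # how much to attend to each item
    return weights @ v
```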

DeepFleet is the collective name for all of our models. Individually, they are the robot-centric model, the robot-floor model, the image-floor model, and the graph-floor model.

1. The robot-centric model

The robot-centric model focuses on one robot at a time — the “ego robot” — and builds a representation of its immediate environment. The model’s encoder produces an embedding of the ego robot’s state — where it is, what direction it’s facing, where it’s headed, whether it’s laden or unladen, and so on. The encoder also produces embeddings of the states of the 30 robots nearest the ego robot; the 100 nearest grid cells; and the 100 nearest objects (drop-off chutes, storage pods, charging stations, and so on).

A Transformer combines these embeddings into a single embedding, and a sequence of such embeddings — representing a sequence of states and actions the ego robot took — passes to a decoder. On the basis of that sequence, the decoder predicts the robot’s next action. This process happens in parallel for every robot on the floor. Updating the state of the floor as a whole is a matter of sequentially applying each robot’s predicted action.

Architecture of the robot-centric model.
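
The sketch below illustrates this encode-fuse-decode flow in PyTorch-style code. All layer sizes, token counts, and the action vocabulary are assumptions made for illustration; the published robot-centric model differs in its details.

```python
# Hypothetical sketch of a robot-centric predictor following the description
# above. Dimensions and the five-way action vocabulary are assumptions.
import torch
import torch.nn as nn


class RobotCentricModel(nn.Module):
    def __init__(self, feat_dim=16, d_model=128, n_actions=5):
        super().__init__()
        # Separate encoders embed the ego robot, nearby robots, cells, and objects.
        self.ego_enc = nn.Linear(feat_dim, d_model)
        self.robot_enc = nn.Linear(feat_dim, d_model)
        self.cell_enc = nn.Linear(feat_dim, d_model)
        self.obj_enc = nn.Linear(feat_dim, d_model)
        # A Transformer fuses all context tokens into one embedding per time step.
        fuse_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.fuser = nn.TransformerEncoder(fuse_layer, num_layers=2)
        # A second Transformer runs over the ego robot's sequence of state embeddings.
        time_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.temporal = nn.TransformerEncoder(time_layer, num_layers=2)
        self.action_head = nn.Linear(d_model, n_actions)  # e.g., move, turn, wait, load, unload

    def forward(self, ego, robots, cells, objects):
        # ego: (B, T, feat); robots / cells / objects: (B, T, N, feat)
        B, T = ego.shape[:2]
        tokens = torch.cat(
            [
                self.ego_enc(ego).unsqueeze(2),
                self.robot_enc(robots),
                self.cell_enc(cells),
                self.obj_enc(objects),
            ],
            dim=2,
        )                                          # (B, T, n_tokens, d_model)
        fused = self.fuser(tokens.flatten(0, 1))   # fuse context within each time step
        state = fused.mean(dim=1).view(B, T, -1)   # one embedding per time step
        hist = self.temporal(state)                # attend over the ego robot's history
        return self.action_head(hist[:, -1])       # predict the next action
```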

2. The robot-floor model

With the robot-floor model, separate encoders produce embeddings of the robot states and fixed features of the floor cells. As the only changes to the states of the floor cells are the results of robotic motion, the floor state requires only a single embedding.

At decoding time, we use cross-attention between the robot embeddings and the floor state embedding to produce a new embedding for each robot that factors in floor state information. Then, for each robot, we use cross-attention between its updated embedding and those of each of the other robots to produce a final embedding, which captures both robot-robot and robot-floor relationships. The last layer of the model — the output head — uses these final embeddings to predict each robot’s next action.

The architecture of the robot-floor model.
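
A hedged sketch of these two cross-attention steps and the output head is shown below; the module structure and dimensions are illustrative assumptions rather than the actual robot-floor architecture.

```python
# Illustrative sketch of the robot-floor decoding steps described above.
import torch
import torch.nn as nn


class RobotFloorDecoder(nn.Module):
    def __init__(self, d_model=128, n_actions=5):
        super().__init__()
        self.robot_floor_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.robot_robot_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.action_head = nn.Linear(d_model, n_actions)

    def forward(self, robot_emb, floor_emb):
        # robot_emb: (B, n_robots, d); floor_emb: (B, n_floor_tokens, d)
        # Step 1: each robot attends to the floor state.
        r_floor, _ = self.robot_floor_attn(robot_emb, floor_emb, floor_emb)
        # Step 2: each robot attends to every other robot.
        r_final, _ = self.robot_robot_attn(r_floor, r_floor, r_floor)
        # Step 3: the output head predicts each robot's next action.
        return self.action_head(r_final)   # (B, n_robots, n_actions)
```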

3. The image-floor model

Convolutional neural networks step through an input image, applying different filters to fixed-size blocks of pixels. Each filter establishes a separate processing channel through the network. Typically, the filters are looking for different image features, such as contours with particular shapes and orientations.

In our case, however, the “pixels” are cells of the floor grid, and each channel is dedicated to a separate cell feature. There are static features, such as fixed objects in particular cells, and dynamic features, such as the locations of the robots and their states.


In each channel, representations of successive states of the floor are flattened — converted from 2-D grids to 1-D vectors — and fed to a Transformer. The Transformer’s attention mechanism can thus attend to temporal and spatial features simultaneously. The Transformer’s output is an encoding of the next floor state, which a convolutional decoder converts back to a 2-D representation.
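
The sketch below shows the overall shape of this pipeline for a single floor state: a convolutional encoder, a Transformer over the flattened grid, and a convolutional decoder. In the model described above, tokens from several successive floor states would be concatenated along the sequence dimension so the Transformer can attend over time as well as space; the channel counts and kernel sizes here are illustrative assumptions.

```python
# Illustrative image-floor sketch: conv encoder, Transformer over flattened
# grid tokens, conv decoder. Sizes are assumptions for illustration only.
import torch
import torch.nn as nn


class ImageFloorModel(nn.Module):
    def __init__(self, in_channels=8, d_model=64):
        super().__init__()
        # Encoder: per-cell features, one channel per static or dynamic feature.
        self.encoder = nn.Conv2d(in_channels, d_model, kernel_size=3, padding=1)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Decoder: map the predicted embedding back to per-cell features.
        self.decoder = nn.Conv2d(d_model, in_channels, kernel_size=3, padding=1)

    def forward(self, floor):
        # floor: (B, C, H, W) stack of static and dynamic feature channels
        feats = self.encoder(floor)                      # (B, d, H, W)
        B, d, H, W = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)        # flatten the 2-D grid to a 1-D sequence
        tokens = self.transformer(tokens)                # attend across all cells
        feats = tokens.transpose(1, 2).view(B, d, H, W)  # back to a 2-D grid
        return self.decoder(feats)                       # predicted next floor state
```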

4. The graph-floor model

A natural way to model the FC or sortation center floor is as a graph whose nodes are floor cells and whose edges encode the available movements between cells (for example, a robot may not move into a cell occupied by another object). We convert such a spatial graph into a spatiotemporal graph by adding temporal edges that connect each node to itself at a later time step.

Next, in the approach made standard by graph neural networks, we use a Transformer to iteratively encode the spatiotemporal graph as a set of node embeddings. With each iteration, a node’s embedding factors in information about nodes farther away from it in the graph. In parallel, the model also builds up a set of edge embeddings.

Each encoding block also includes an attention mechanism that uses the edge embeddings to compute attention scores between node embeddings. The output embedding thus factors in information about the distances between nodes, so it can capture long-range effects.

From the final set of node embeddings, we can decode a prediction of where each robot is, whether it is moving, what direction it is heading, etc.

The architecture of the graph-floor model.
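
One way to realize edge-aware attention is sketched below: attention scores between node embeddings are biased by a learned function of the corresponding edge embeddings. How the spatiotemporal graph, the edge features, and the decoder are actually constructed in the graph-floor model is more involved; this is only a single illustrative block.

```python
# Rough sketch of edge-aware attention over graph node embeddings, in the
# spirit of the description above; not the graph-floor model's actual block.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EdgeAwareAttention(nn.Module):
    """One encoding block: node-node attention scores are biased by edge embeddings."""

    def __init__(self, d_model=64):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.edge_bias = nn.Linear(d_model, 1)  # scalar bias per (i, j) edge embedding

    def forward(self, nodes, edges):
        # nodes: (N, d) node embeddings; edges: (N, N, d) pairwise edge/relation embeddings
        q, k, v = self.q(nodes), self.k(nodes), self.v(nodes)
        scores = q @ k.T / (q.shape[-1] ** 0.5)              # node-node similarity
        scores = scores + self.edge_bias(edges).squeeze(-1)  # inject edge information (e.g., distance)
        weights = F.softmax(scores, dim=-1)
        return weights @ v  # updated node embeddings; stacking blocks widens each node's view
```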

Evaluation

We used two metrics to evaluate all four models’ performance. The first is dynamic-time-warping (DTW) distance between predictions and the ground truth across multiple dimensions, including robot position, speed, state, and the timing of load and unload events. The second metric is congestion delay error (CDE), or the relative error between delay predictions and ground truth.
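
As a rough illustration of the two metrics, the sketch below computes a classic dynamic-time-warping distance between two 1-D trajectories and a simple relative-error version of congestion delay error. The paper's multidimensional formulations are more involved.

```python
# Hedged sketch of the two evaluation ideas: basic 1-D DTW and CDE as a
# relative error. Simplified relative to the formulations in the paper.
import numpy as np


def dtw_distance(pred: np.ndarray, truth: np.ndarray) -> float:
    """Classic O(n*m) dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(pred), len(truth)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(pred[i - 1] - truth[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])


def congestion_delay_error(pred_delay: float, true_delay: float) -> float:
    """Relative error between predicted and observed congestion delay."""
    return abs(pred_delay - true_delay) / max(true_delay, 1e-9)
```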

Overall, the robot-centric model performed best, with the top scores on both CDE and the DTW distance on position and state predictions, but the robot-floor model achieved the top score on DTW distance for timing estimation. The graph-floor model didn’t fare quite as well, but its results were still strong at a significantly lower parameter count — 13 million, versus 97 million for the robot-centric model and 840 million for the robot-floor model.

The image-floor model didn’t work well. We suspect that this is because the convolutional filters of a convolutional neural network are designed to abstract away from pixel-level values to infer larger-scale image features, like object classifications. We were trying to use convolutional neural networks for pixel-level predictions, which they may not be suited for.

We also conducted scaling experiments with the robot-centric and graph-floor models, which showed that, indeed, model performance improved with increases in the volume of training data — an encouraging sign, given the amount of data we have at our disposal.

On the basis of these results, we are continuing to develop the robot-centric, robot-floor, and graph-floor models, initially using them to predict congestion, with the longer-term goal of using them to produce outputs like assignments of robots to specific retrieval tasks and target locations. You can read the full paper on arXiv.
