
Real-world robotic-manipulation system

Amazon Research Award recipient Russ Tedrake is teaching robots to manipulate a wide variety of objects in unfamiliar and constantly changing contexts.

Russ Tedrake, a professor of electrical engineering and computer science and head of the Robot Locomotion Group at MIT, received his first Amazon Research Award (ARA) in 2017 — the first year that robotics was included among the ARA research areas.

In a succession of ARA awards since then, Tedrake has continued to explore the challenge of robotic manipulation — grasping and handling objects in arbitrary spatial configurations.

“There's one level of manipulation that is basically just looking for big flat areas to attach to, and you don't think very much about the objects,” Tedrake says. “And there is a big step where you understand, not just that this is a flat surface, but that it has inertia distributed a certain way. If there was a big, heavy book, for instance, it would be much better to pick in the middle than at the edge. We've been trying to take the revolution in computer vision, take what we know about control, understand how to put those together, and push forward.”

Self-supervised learning in robotics

With their first ARA award, Tedrake’s group worked on applying self-supervised learning to problems of robotic manipulation. Today, self-supervised learning is all the rage, but at the time, it was little explored in robotics.

The basic method in self-supervised learning is to use unlabeled — but, often, algorithmically manipulated — data to train a machine learning model to represent data in a way that’s useful for some task. The model can then be fine-tuned on that task with very little labeled data.

In computer vision, for instance, self-supervised learning often involves taking two copies of the same image, randomly modifying one of them — cropping it, rotating it, changing its colors, adding noise, and so on — and training the model to recognize that both images are of the same object.
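In code, that recipe amounts to encoding two randomly augmented views of each image and training the encoder so that the two views of the same image land close together. Below is a minimal sketch in PyTorch; the toy encoder, the simplified augmentations, and the generic contrastive loss are all placeholders rather than any particular published system.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def augment(images):
    """Randomly flip and add noise -- simplified stand-ins for crop, rotate, color jitter."""
    out = images.clone()
    if torch.rand(1) < 0.5:
        out = torch.flip(out, dims=[3])            # horizontal flip
    return out + 0.1 * torch.randn_like(out)       # pixel noise

encoder = nn.Sequential(                           # tiny stand-in for a real vision backbone
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 64),
)

def contrastive_loss(z1, z2, temperature=0.1):
    """Pull the two views of each image together; push views of different images apart."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature             # pairwise similarities across the batch
    targets = torch.arange(z1.size(0))             # view i of image i should match view i
    return F.cross_entropy(logits, targets)

images = torch.rand(8, 3, 64, 64)                  # a batch of unlabeled images
loss = contrastive_loss(encoder(augment(images)), encoder(augment(images)))
loss.backward()                                    # one self-supervised pretraining step
```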

In Tedrake’s case, his team allowed a sensor-laden robotic arm to move around an object, simultaneously photographing it and measuring the distance to points on its surface using a depth camera. From the depth readings, software could construct a 3-D model of the object and use it to map points from one 2-D photo onto others.
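The geometry behind that mapping is standard pinhole-camera math: back-project a pixel and its depth reading into 3-D, transform the point into the other camera's frame, and project it back into that image. Here is a minimal sketch in NumPy, where the intrinsics and the relative pose between the two camera positions are made-up numbers for illustration.

```python
import numpy as np

# Made-up pinhole intrinsics: focal lengths and principal point.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])

def backproject(u, v, depth, K):
    """Pixel (u, v) plus a depth reading -> 3-D point in that camera's frame."""
    return depth * (np.linalg.inv(K) @ np.array([u, v, 1.0]))

def project(point, K):
    """3-D point in a camera's frame -> pixel coordinates in that camera's image."""
    p = K @ point
    return p[:2] / p[2]

# Made-up relative pose taking points from camera A's frame to camera B's frame.
theta = np.deg2rad(10.0)
R_ba = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
                 [           0.0, 1.0,           0.0],
                 [-np.sin(theta), 0.0, np.cos(theta)]])
t_ba = np.array([0.05, 0.0, 0.02])

# Map a pixel observed in image A onto image B via its depth reading.
point_in_a = backproject(u=400, v=260, depth=0.8, K=K)
point_in_b = R_ba @ point_in_a + t_ba
print(project(point_in_b, K))       # the corresponding pixel in image B
```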

Self-supervision to learn invariant object representations

From the point-mapped images, a neural network could then learn an invariant representation of the object, one that allows it to identify parts of the object regardless of perspective — for instance, to identify the handle of a coffee mug whether it was viewed from the top, the side, or straight on.
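One common way to cash that out is to train a network that outputs a descriptor at every pixel, pulling together descriptors at pixels known, from the depth-based mapping, to show the same surface point and pushing apart descriptors at unrelated pixels. The PyTorch sketch below uses a toy network, a made-up margin, and random stand-in correspondences; it illustrates the idea rather than the group's actual architecture or data.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

descriptor_net = nn.Sequential(        # image -> an 8-dimensional descriptor at every pixel
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 8, 3, padding=1),
)

def correspondence_loss(desc_a, desc_b, matches, non_matches, margin=0.5):
    """Matched pixels should get similar descriptors; non-matched pixels should not."""
    (ua, va), (ub, vb) = matches
    d_match = F.pairwise_distance(desc_a[0, :, va, ua].t(), desc_b[0, :, vb, ub].t())
    (ua_n, va_n), (ub_n, vb_n) = non_matches
    d_non = F.pairwise_distance(desc_a[0, :, va_n, ua_n].t(), desc_b[0, :, vb_n, ub_n].t())
    return d_match.pow(2).mean() + F.relu(margin - d_non).pow(2).mean()

img_a, img_b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
desc_a, desc_b = descriptor_net(img_a), descriptor_net(img_b)

# Real correspondences would come from the depth-based reprojection; random here.
rand_px = lambda: (torch.randint(0, 64, (32,)), torch.randint(0, 64, (32,)))
matches, non_matches = (rand_px(), rand_px()), (rand_px(), rand_px())
loss = correspondence_loss(desc_a, desc_b, matches, non_matches)
loss.backward()
```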

The goal: enable a robot to grasp objects at specified points — to, say, pick up coffee mugs by their handles. That, however, requires the robot to generalize from a canonical instance of an object (a mug with its handle labeled) to variants of the object, such as mugs that are squatter or tapered or have differently shaped handles.

Keypoint correspondences

So the next ARA-sponsored project from Tedrake and his students was to train a neural network to map keypoints across different instances of the same type of object. The points at which a mug’s handle joins the body of the mug, for instance, could constitute a set of keypoints; keypoints might also be points in free space, defined relative to the object, such as the opening enclosed by the mug’s handle.

Tedrake’s group began with a neural network pretrained through self-supervision and fine-tuned it using multiple instances of the same types of objects — mugs and shoes of all shapes and sizes, for example. Instances within the same category had been labeled with corresponding keypoints, so that the model could learn category-level structural principles, as opposed to simply memorizing diverse shapes. Tedrake’s group also augmented their training images of real objects with computer-generated images of objects in the same categories.
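A stripped-down version of that fine-tuning stage might look like the sketch below, in which a small network regresses a fixed set of named keypoints from an image and is trained against the labeled, or synthetically generated, keypoint coordinates. The network, the number of keypoints, and the data are all hypothetical.

```python
import torch
import torch.nn as nn

NUM_KEYPOINTS = 4                    # e.g., where the handle joins the mug, the handle opening

keypoint_net = nn.Sequential(        # image -> an (x, y, z) estimate for each named keypoint
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, NUM_KEYPOINTS * 3),
)

# Every training image, real or computer-generated, carries labels for the same
# named keypoints, so the network learns category-level structure, not one shape.
images = torch.rand(16, 3, 64, 64)
labeled_keypoints = torch.rand(16, NUM_KEYPOINTS * 3)

loss = nn.functional.mse_loss(keypoint_net(images), labeled_keypoints)
loss.backward()                      # one fine-tuning step on the labeled data
```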

Learning keypoint correspondences

After training the model, the group tested it on a complete end-to-end robotic-manipulation task. “We can do the task with 99% confidence,” Tedrake says. “People would just come into the lab and take their shoes off, and we’d try to put a shoe on the rack. Daniela [Rus, a roboticist, the director of MIT’s Computer Science and Artificial Intelligence Laboratory, and fellow ARA recipient] had these super shiny black Italian shoes, and they did totally fool our system. But we just added them to the training set and trained the model, and then it worked fine.”

This system worked well so long as the object to be grasped (a shoe or, in a separate set of experiments, a coffee cup) remained stationary after the neural model had identified the grasp point. “But if the object slipped, or if someone moved it as the robot reached for it, it would still air ball in the way robots have done for far too long,” Tedrake says.

Adapting on the fly

So the next phase of the project was to teach the robot to use video feedback to adjust trajectories on the fly. Until now, Tedrake’s team had been using machine learning only for the robot’s perceptual system; they’d designed the control algorithms using traditional control-theoretical optimization. But now they switched to machine learning for controller design, too.

To train the controller model, Tedrake’s group used data from demonstrations in which one of the lab members teleoperated the robotic arm while other members knocked the target object around, so that its position and orientation changed. During training, the model took as input sensor data from the demonstrations and tried to predict the teleoperator’s control signals.
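That setup is essentially behavior cloning: a supervised-learning problem in which a policy network is trained to reproduce the human operator's commands. A bare-bones sketch, with made-up observation and action dimensions, might look like this:

```python
import torch
import torch.nn as nn

OBS_DIM, ACTION_DIM = 32, 7                  # e.g., perception features and a 7-DoF arm command

policy = nn.Sequential(                      # sensor observation -> predicted teleop command
    nn.Linear(OBS_DIM, 64), nn.ReLU(),
    nn.Linear(64, ACTION_DIM),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Logged demonstrations: observations seen by the robot, paired with the commands
# the human teleoperator issued while the object was being knocked around.
observations = torch.rand(256, OBS_DIM)
teleop_actions = torch.rand(256, ACTION_DIM)

for _ in range(100):                         # supervised imitation: predict the human's action
    loss = nn.functional.mse_loss(policy(observations), teleop_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```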

“By the end, we had versions that were just super robust, where you're antagonizing the robot, trying to knock objects away just as it reaches for them,” Tedrake says.

Still, producing those robust models required around 100 runs of the teleoperation experiment for each object, a resource-intensive data acquisition procedure. This led to the next step: generalizing the feedback model, so that the robot could learn to handle perturbations from just a handful of examples — or even just one.

“From all that data, we’re now trying to learn, not the policy directly, but a dynamics model, and then you compute the policy after the fact,” Tedrake explains.

This requires a combination of machine learning and the more traditional, control-theoretical analysis that Tedrake’s group has specialized in. From data, the machine learning model learns vector representations of both the input and the control signal, but hand-tooled algorithms constrain the representation space to optimize the control signal selection. “It's basically turning it back into a planning and control problem, but in the feature space that was learned,” Tedrake explains.
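The general pattern, encoding observations into a learned feature space, learning a dynamics model in that space, and then planning against that model, can be illustrated with a toy random-shooting planner like the one below. The encoder, dynamics model, and goal are placeholders, and random shooting is only a simple illustrative planner, not the group's actual control-theoretic machinery.

```python
import torch
import torch.nn as nn

FEAT_DIM, ACTION_DIM, HORIZON = 16, 4, 5

encoder = nn.Linear(32, FEAT_DIM)                      # observation -> latent feature
dynamics = nn.Linear(FEAT_DIM + ACTION_DIM, FEAT_DIM)  # (feature, action) -> next feature

def plan(start_obs, goal_feature, num_candidates=256):
    """Sample candidate action sequences, roll them out in the learned model,
    and return the first action of the sequence that ends closest to the goal."""
    with torch.no_grad():
        z = encoder(start_obs).expand(num_candidates, FEAT_DIM)
        actions = torch.randn(num_candidates, HORIZON, ACTION_DIM)
        for t in range(HORIZON):
            z = dynamics(torch.cat([z, actions[:, t]], dim=1))
        cost = (z - goal_feature).pow(2).sum(dim=1)     # distance to goal in feature space
        return actions[cost.argmin(), 0]

best_first_action = plan(torch.rand(1, 32), goal_feature=torch.zeros(FEAT_DIM))
print(best_first_action)
```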

And indeed, with his current ARA grant, Tedrake is pursuing ever more sophisticated techniques for analyzing planning and control problems. In a recent paper, he and two of his students, Tobia Marcucci and Jack Umenberger, together with Pablo Parrilo, a professor in MIT’s Laboratory for Information and Decision Systems, consider a variation on the shortest-path problem, the classic problem of finding the minimum-length route through a graph whose edges have varying lengths.

In Tedrake and his colleagues’ version of the problem, the locations of the graph nodes vary according to some function, and as a consequence, so do the edge lengths. This formalism lends itself to a wide range of problems, including motion planning for robots and autonomous vehicles.

An example of Tedrake and his colleagues’ variation of the shortest-path problem. White circles represent locations of vertices, which can vary anywhere within the pale-blue polygons; the dotted blue lines represent the current distances between vertices along the shortest route through the graph. Black arrows represent the direction of flow through the graph.

Computing the exact shortest path through such a graph is an NP-complete problem, meaning that exact solutions are believed to become computationally intractable as graphs grow. But the MIT researchers showed how to find an approximate solution efficiently.
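To get a feel for the continuous half of the problem, consider a toy version in which the discrete route through the graph is already fixed and each vertex may sit anywhere inside its own box: choosing vertex positions that minimize the route's total length is then a well-behaved optimization. The SciPy sketch below, with made-up boxes, illustrates only that continuous piece, not the paper's actual formulation, which also optimizes over which route to take.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical boxes (x_min, x_max, y_min, y_max), one per vertex along a fixed route.
boxes = [(0, 1, 0, 1), (2, 3, 2, 3), (5, 6, 0, 1), (7, 8, 3, 4)]

def path_length(flat_positions):
    """Total length of the route visiting the vertices at the given positions."""
    pts = flat_positions.reshape(-1, 2)
    return np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))

# Start each vertex at the center of its box; constrain it to stay inside the box.
x0 = np.array([[(x_lo + x_hi) / 2, (y_lo + y_hi) / 2]
               for x_lo, x_hi, y_lo, y_hi in boxes]).ravel()
bounds = [b for (x_lo, x_hi, y_lo, y_hi) in boxes for b in [(x_lo, x_hi), (y_lo, y_hi)]]

result = minimize(path_length, x0, bounds=bounds)
print(result.x.reshape(-1, 2))     # optimized vertex positions
print(result.fun)                  # shortest total length for this fixed route
```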

This continued focus on traditional optimization techniques puts Tedrake at odds with the prevailing shift toward machine learning in so many branches of AI.

“Learning is working extremely well, but too often, I think, people have thrown the baby out with the bathwater,” he says. “There are some things that we still know how to do very, very well with control and optimization, and I'm trying to push the boundary back towards everything we do know how to do.”
