
The future of mobility-as-a-service

Jesse Levinson, co-founder and CTO of Zoox, answers 3 questions about the challenges of developing autonomous vehicles and why he’s excited about Zoox’s robotaxi fleet.

In June 2020, Amazon acquired Zoox, a then six-year-old California-based startup focused on “creating autonomous mobility from the ground up.”

Six months later, Zoox, now an independent Amazon subsidiary, shared publicly for the first time a look at its electric, autonomous vehicle created for dense, urban environments. The vehicle reveal marked a key milestone toward the organization’s vision of creating an autonomous robotaxi fleet and ride-hailing service designed with passengers in mind.

At its unveiling in December 2020, Zoox CEO Aicha Evans said her team is transforming the rider experience to provide superior “mobility-as-a-service” for customers. Moreover, she added, given the current data related to carbon emissions and traffic accidents, “It’s more important than ever that we build a sustainable, safe solution that allows riders to get from point A to point B.”

See how a Zoox robotaxi traverses city streets.

Jesse Levinson, co-founder and chief technology officer of Zoox, guides the company’s technology roadmap and execution to turn its mobility-as-a-service vision into reality. After graduating summa cum laude from Princeton, he completed his PhD and postdoc under Sebastian Thrun at Stanford. There, he developed algorithms for Stanford’s successful entry in the 2007 DARPA Urban Challenge and went on to lead the self-driving team’s research and development efforts.

Amazon Science asked Levinson about the challenges of developing self-driving vehicles and why he’s excited about Zoox’s approach.

Q. You were one of the authors on the 2008 paper, Junior: The Stanford Entry in the Urban Challenge. That race was a closed-course competition, and not quite representative of real-world challenges. But what key observations did you take away from that experience?

Probably the most important realization after the race was the dichotomy between how much was still left to solve and the fact that it was actually all going to be solvable. It’s quite easy to get enchanted with one or the other of those observations; either that the problem is practically impossible because of all the things that still aren’t perfect, or that it must be almost solved because of some super cool demo or milestone that seems incredibly impressive. The reality is in between, and for whatever reason, it’s surprisingly hard for people to maintain a nuanced appreciation of that balance.

In 2004, DARPA held its first Grand Challenge: a 125-mile race in the desert. Of the 20 teams that entered, none completed the race, and the best vehicle only completed about six miles. The industry (and the media) widely regarded the outcome as an abysmal failure of AI. Yet it was not a failure, but an incredible feat of engineering. If an autonomous vehicle can drive six miles in the desert all by itself, then it doesn’t take an incredible imagination to foresee it driving 125 miles.

Lo and behold, the very next year, six vehicles finished the full 125-mile course. It was a promising step towards the future, and a year later, in 2006, DARPA announced the Urban Challenge, which several teams completed successfully. Our entry at Stanford came in second place. Excited by the results, many people made overly optimistic predictions about the mass adoption of self-driving cars, predictions that were subsequently deflated by various challenges we’ve seen in the industry since that time.

It has been eye-opening to watch the public's reaction to self-driving cars over time. I have always tried my best to be upfront, honest, and realistic about where the technology is — and while I’ve certainly not nailed all of my predictions, I do think I’ve managed to be fairly balanced overall. As technologists, when we are overly optimistic or pessimistic, we do a disservice to ourselves, the industry, and our technology. Achieving a world with ubiquitous autonomous vehicles will be an incremental process that advances every year — and remember, the alternative is the bar of human performance that stays nearly stagnant. It’s the opportunity of a lifetime to participate in the journey of making autonomous driving technology relentlessly better. Soon, it will reach a crossover point where the public begins to adopt it at scale, which will be a transformative win for society at large.

Q. Following up on your answer, what did you learn from that experience that you apply to your current role at Zoox? Has your approach changed since that challenge or remained largely the same?

So much! I’m grateful for that experience because it was formative in the early approach of Zoox. Here are some of the lessons I took away from it:

Zoox notes its vehicle is “the first in the industry to showcase a driving, purpose-built robotaxi capable of operating up to 75 miles per hour.”

First, teaching cars to drive will not take as long as we thought. In the early 2000s, we all thought it would be many, many decades before self-driving cars would be a reality. The DARPA challenge changed that. To build a vehicle that could navigate many realistic traffic scenarios only took about a year for a small team. Of course, there’s a huge difference between that and what’s required to operate an autonomous vehicle on public roads. But it was an important milestone that highlighted that autonomous driving technology could be a reality within a couple of decades.

Second, system integration and wide-scale testing are critical. No amount of knowledge about artificial intelligence, or anything else for that matter, will lead a mythical genius to intellectually divine a perfect solution. We need to combine and integrate many different complex systems and then see what works and what fails through simulations, then closed courses, then public roads (with safety drivers). We have to test and experiment and iterate with massive data and scale, as opposed to trying to reason our way to a perfect solution.

On the other hand, blindly searching for progress without having any vision or architectural insights is also a bad idea; that’s one of the reasons why we identified the benefits of 270-degree sensing on all four corners of our ground-up vehicle at Zoox way back in 2014, a few years before we could drive autonomously in cities — because we knew from first principles that it was the right way to perceive the world.

The Zoox vehicle uses a unique architecture of cameras, radar, and LIDAR sensors (some of which are seen here) to obtain a 270-degree field of view on all four corners of the vehicle.

Last, we have to test the various software and hardware components collectively to see how they respond to errors and uncertainty. By building a robust system that handles a cascading series of errors and ambiguities, you can explicitly track uncertainty and represent the state of the world more thoroughly. The proper representation of the world is not a singular, perfect model, but rather a distribution of probabilities and uncertainties. If you can design your system to be robust to imperfect sensor data, unpredictable agents, and unusual environments, you have a real shot at solving the problem in a world that’s not always the way you want it to be. It’s actually what humans do really well all the time, even though we’re rarely conscious that we’re doing it.
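
That idea of explicitly carrying uncertainty can be made concrete with a small example. The sketch below is a toy, Kalman-style filter that tracks another agent’s position while keeping an explicit estimate of how unsure it is; the motion model, noise values, and measurements are invented for illustration and are not Zoox’s software.

```python
# Toy illustration only: a Kalman-style belief over another agent's position,
# showing how a state estimate can carry explicit uncertainty (a mean and a
# covariance) instead of a single "perfect" value. All values are made up.
import numpy as np

dt = 0.1                                 # time step in seconds
F = np.array([[1, dt], [0, 1]])          # constant-velocity motion model
H = np.array([[1.0, 0.0]])               # we only measure position
Q = np.diag([0.05, 0.1])                 # process noise (hypothetical)
R = np.array([[0.5]])                    # measurement noise (hypothetical)

x = np.array([[0.0], [1.0]])             # belief mean: position 0 m, velocity 1 m/s
P = np.diag([1.0, 1.0])                  # belief covariance: how unsure we are

def predict(x, P):
    """Propagate the belief forward one step; uncertainty grows."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Fold in a noisy position measurement; uncertainty shrinks."""
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

for z in [0.11, 0.19, 0.32]:             # fake range readings for the agent
    x, P = predict(x, P)
    x, P = update(x, P, np.array([[z]]))
    print(f"pos={x[0, 0]:.2f} m, sigma={np.sqrt(P[0, 0]):.2f} m")
```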

Q. You’ve said that safety is the foundation of everything Zoox does, and that the experience of building Zoox’s robotaxi has given you the opportunity to reimagine passenger safety. Can you give us insight into some of the systems you’ve developed for passenger safety, particularly the AI stack that underpins these efforts?

Yes, that’s right: safety is absolutely fundamental to the Zoox mission. With apologies for using an overused phrase, autonomous mobility allows for a paradigm shift in safety — from reactive to proactive. It’s an important point: automotive safety has always been reactive, focused on protecting vehicle occupants in crashes that are seen as inevitable. By building an autonomous vehicle from the ground up, we can add a layer of proactive crash prevention that simply does not exist in today’s human-driven cars, with a focus on stopping crashes from occurring in the first place. We have more than a hundred safety innovations that do not exist in conventional cars today.

The vehicle features a four-seat, face-to-face symmetrical seating configuration that eliminates the steering wheel and bench seating seen in conventional car designs.

We are also developing the AI, vehicle, and service all together. Integrating the software, sensor, and vehicle subsystems is a complex challenge that requires tight, cross-functional collaboration. It would be difficult to create this level of system integration across multiple companies with divergent commercial interests. Building a ground-up vehicle has allowed us to design and choose our own sensor suite to best solve self-driving. We’ve outfitted our Toyota Highlander fleet with this same sensor architecture as our ground-up vehicle so that we can gather large amounts of data and test in environments like San Francisco and Las Vegas while our in-house vehicle is still under development.

Our software stack includes mapping, localization, sensor calibration, perception, prediction, path planning, vehicle control, infrastructure, firmware, diagnostics/messaging/monitoring/logging, and simulation. All of this software is continuously improving, with additions of new features and iterative software updates that are put through rigorous offline validations and on-vehicle structured testing.
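
For a rough sense of how the modules in such a stack hand off to one another each cycle, here is a deliberately simplified, hypothetical sketch; the type names, interfaces, and placeholder logic are invented for illustration and do not describe Zoox’s actual architecture.

```python
# Hypothetical sketch of a modular driving stack; names and interfaces are
# invented for illustration and are not Zoox's architecture.
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SensorFrame:          # one time-slice of camera/LIDAR/radar data
    timestamp: float
    data: Dict[str, object]

@dataclass
class Pose:                 # where the vehicle (or an agent) is
    x: float
    y: float
    heading: float

@dataclass
class TrackedAgent:         # a detected road user plus its predicted motion
    agent_id: int
    predicted_path: List[Pose]

def localize(frame: SensorFrame) -> Pose:
    # Placeholder: real localization matches sensor data against a map.
    return Pose(0.0, 0.0, 0.0)

def perceive_and_predict(frame: SensorFrame, pose: Pose) -> List[TrackedAgent]:
    # Placeholder: real perception fuses cameras, radar, and LIDAR, and
    # prediction estimates how each agent is likely to move.
    return []

def plan(pose: Pose, agents: List[TrackedAgent]) -> List[Pose]:
    # Placeholder: real planning trades off safety, comfort, and progress.
    return [pose]

def control(trajectory: List[Pose]) -> Dict[str, float]:
    # Placeholder: real control turns the trajectory into actuator commands.
    return {"steer": 0.0, "throttle": 0.0}

def tick(frame: SensorFrame) -> Dict[str, float]:
    """One pass through the stack: localize, perceive, predict, plan, act."""
    pose = localize(frame)
    agents = perceive_and_predict(frame, pose)
    trajectory = plan(pose, agents)
    return control(trajectory)
```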

Our vehicles also use a variety of advanced sensors, including LIDAR, cameras, and radar, to see objects on all sides of the vehicle. And because of the geometrical configuration of these sensors, we can almost always see around and behind the objects nearest to us, which is particularly helpful in dense urban environments. Our software then uses a combination of machine learning and geometric reasoning to understand the sensor data, make sense of the scene unfolding around the vehicle, and effectively navigate the roads.
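
One hypothetical way to picture combining machine learning with geometric reasoning is a perception check like the one below, which only trusts a learned detection when the LIDAR points inside it also look like a plausible object; the thresholds, interface, and data are made up for the example.

```python
# Hypothetical example of mixing a learned detector with geometric checks;
# thresholds, the detector interface, and the data are invented.
import numpy as np

def geometric_check(points_xyz: np.ndarray, ground_z: float = 0.0) -> bool:
    """Crude sanity check on the LIDAR points that fall inside a detection box."""
    if len(points_xyz) < 20:                 # too few returns to trust the box
        return False
    heights = points_xyz[:, 2] - ground_z
    return 0.3 < float(heights.max()) < 3.0  # roughly pedestrian- to truck-sized

def keep_detection(ml_confidence: float, points_xyz: np.ndarray) -> bool:
    """Accept a detection if the network is very sure, or if it is moderately
    sure and the geometry backs it up."""
    if ml_confidence > 0.9:
        return True
    return ml_confidence > 0.5 and geometric_check(points_xyz)

# Example: 50 simulated points forming a roughly 1.7-meter-tall object.
points = np.random.uniform([0, 0, 0], [0.5, 0.5, 1.7], size=(50, 3))
print(keep_detection(0.6, points))           # True when the geometry looks sound
```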

For example, in a busy downtown intersection, our vehicle might be identifying a construction zone based on road cones and signs, while also detecting, tracking, and predicting the motion of hundreds of other agents (vehicles, pedestrians, bicyclists, etc.) around it. Once the perception system understands the environment and can predict how surrounding agents will move, the planner uses that information and context to adapt its driving behavior to the dynamic road conditions. The planner normally tries to maintain a certain lateral distance between itself and other vehicles, but it could decide to slightly reduce that distance in order to avoid a cone in the road ahead.
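
A toy cost function can illustrate that trade-off. In the sketch below, a planner prefers a nominal lateral gap to a neighboring vehicle but gives a little of it up to keep clear of a cone; the cost terms, distances, and weights are invented and have nothing to do with Zoox’s planner.

```python
# Toy illustration of the trade-off described above; all values are invented.
def lateral_cost(offset_m: float,
                 gap_to_vehicle_m: float,
                 gap_to_cone_m: float,
                 nominal_gap_m: float = 1.5) -> float:
    """Lower is better: penalize crowding the neighboring vehicle, and penalize
    passing too close to the cone far more heavily."""
    vehicle_gap = gap_to_vehicle_m - offset_m   # shrinks as we shift toward the vehicle
    cone_gap = gap_to_cone_m + offset_m         # grows as we shift away from the cone
    crowding = max(0.0, nominal_gap_m - vehicle_gap) ** 2
    cone_risk = 1000.0 if cone_gap < 0.3 else max(0.0, 0.8 - cone_gap) ** 2
    return crowding + 10.0 * cone_risk

# Evaluate candidate lateral shifts (meters moved toward the adjacent vehicle)
# when the cone sits 0.2 m from the nominal path and the vehicle is 2.0 m away.
candidates = [i / 10 for i in range(11)]        # 0.0 m .. 1.0 m
best = min(candidates,
           key=lambda o: lateral_cost(o, gap_to_vehicle_m=2.0, gap_to_cone_m=0.2))
print(f"chosen lateral shift: {best:.1f} m")    # gives up a little vehicle gap
```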

By designing the hardware and the software together, we are able to reimagine passenger safety. We are confident in our sensors’ abilities to detect activity in the environment around the vehicle, but that has to be validated in a wide range of scenarios. And our vehicle has performed extremely well in crash testing, which is still important, because no matter how sophisticated the AI is, we can’t guarantee that nothing will ever hit us. We’re excited to launch our first commercial driverless service, but we won’t do so until we’re ready to operate on public roads at safety levels that meaningfully surpass those of humans.
