NASA's Orion spacecraft shown splashing down in the Pacific Ocean, west of Baja California, at 9:40 a.m. PST Sunday, Dec. 11.
NASA

The story behind how Amazon integrated Alexa into NASA’s Orion spacecraft

From physical constraints to acoustic challenges, learn how Amazon collaborated with NASA and Lockheed Martin to get Alexa to work in space.

In September 2018, Amazon’s principal solutions architect Philippe Lantin received a call from his manager.

“He said that there was something unique on the horizon, and that their team was being roped into a once-in-a-lifetime opportunity,” says Lantin.

This was no understatement: on the horizon was an opportunity for Amazon to collaborate with Lockheed Martin Space, and integrate Alexa into NASA’s Orion spacecraft. Orion is the first human-rated spacecraft to visit the moon in more than 40 years.

“NASA is trying to engage the public more as we enter this new era of space travel, where we are setting the stage for extra-planetary exploration,” says Lantin. “Given that over 100 million Alexa-enabled devices have already been sold, having Alexa answer questions like 'Alexa, how far to the moon?' and 'Alexa, how fast is Orion going?' is a great way to get people around the world involved in NASA’s missions.”

Setting up an Echo device on Earth is simple: all you need is a Wi-Fi connection and the Alexa app. However, things are far more complicated in space.

“We had several constraints we had to contend with,” says Lantin.
The Alexa team had to operate within a key physical constraint: the shape of the device. The contours of a smart speaker greatly influence its acoustics. To give just one example, the round shape of the Echo Dot offers a full cavity behind the woofer for a better bass response.


However, when it came to NASA’s Orion spacecraft, Alexa’s acoustic engineers had to work with what was provided by Lockheed Martin and NASA.

“We were somewhat limited by the form factor, which was a small briefcase-like enclosure that was 1.5 feet by one foot and about five inches in depth,” says Lantin.

There were other physical constraints. Equipment developed for the mission had to be resilient to extreme shocks and vibrations, be at least minimally resistant to radiation emissions in space, and utilize highly specific and custom-built components such as power and data cables.

Limited Internet connectivity

The team also had to deal with issues related to the lack of Internet connectivity. Typically, Echo devices use on-device keyword spotting designed to detect when a customer says the wake word. This on-device buffer exists in temporary memory. After the wake word is detected, the device streams the audio to the cloud for speech recognition and natural language processing.
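The buffer-then-stream flow described above can be sketched as follows. The frame format, buffer length, and function names here are illustrative assumptions, not Alexa's actual implementation; the point is that pre-wake audio lives only in a small temporary buffer and never leaves the device unless the wake word fires.

```python
from collections import deque

def gate_audio(frames, is_wake_word, buffer_len=3):
    """Return only the frames that would leave the device: the short
    buffered context plus everything after the wake word is detected."""
    ring = deque(maxlen=buffer_len)   # temporary on-device memory
    streamed = []
    awake = False
    for frame in frames:
        if awake:
            streamed.append(frame)    # post-wake audio goes to the cloud
        else:
            ring.append(frame)        # pre-wake audio is discarded as it ages out
            if is_wake_word(frame):   # local keyword spotting
                awake = True
                streamed.extend(ring) # flush the short buffer for context
    return streamed

# Frames before the wake word (beyond the small buffer) never leave the device.
frames = ["a", "b", "c", "alexa", "how", "far", "to", "the", "moon"]
print(gate_audio(frames, lambda f: f == "alexa", buffer_len=2))
# ['c', 'alexa', 'how', 'far', 'to', 'the', 'moon']
```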

Orion components

“However, for the Orion mission, our ability to communicate with the Alexa cloud was severely constrained,” says Lantin. “NASA’s spacecraft uses the Deep Space Network to communicate with earth. The bandwidth available to us on the downlink connection is slightly better than dial-up modem speeds, with latencies of up to five seconds. To further complicate matters, NASA prioritizes traffic for navigation and telemetry as the primary payload; traffic for Alexa was consigned to the secondary payload.”

The team also wanted to demonstrate a fully autonomous experience, one that can be used in future missions where Earth connectivity is no longer a practical option for real-time communications. They used Alexa Local Voice Control to get around the limited Internet connectivity. Alexa Local Voice Control allows select devices to process voice commands locally, rather than sending information to the cloud.
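The local-first routing idea can be sketched minimally as below. The command set, function name, and return values are hypothetical stand-ins; the actual Alexa Local Voice Control behavior is far more sophisticated, but the principle is the same: commands the device understands locally never need the link at all.

```python
# Hypothetical set of commands the device can handle entirely on its own.
LOCAL_COMMANDS = {"lights on", "lights off", "set temperature"}

def route_command(utterance, link_up):
    """Route a recognized utterance locally when possible, to the cloud
    only when a link exists, and degrade gracefully otherwise."""
    if utterance in LOCAL_COMMANDS:
        return "handled-locally"       # no round trip; works offline
    if link_up:
        return "sent-to-cloud"         # full NLU over the (slow) link
    return "unavailable-offline"       # command needs the cloud, link is down

print(route_command("lights on", link_up=False))   # handled-locally
print(route_command("play jazz", link_up=True))    # sent-to-cloud
print(route_command("play jazz", link_up=False))   # unavailable-offline
```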

Lantin says that while the team was motivated by demonstrating technology leadership and scientific innovation in a very challenging environment, the real motivator was making a difference in the lives of millions of customers at home on earth.

“At Amazon, we take pride in delivering customer-focused science,” says Lantin. “That was a huge motivator for us at every step along the way. Consider the innovations we drove to Alexa Local Voice Control. These improvements will allow people on earth to do so much more with Alexa in situations where they have limited or no Internet connectivity. Think about when you are in a car and passing through a tunnel, or driving to a remote camping site. You can do things like tune the radio, turn on the AC and continue to use voice commands, even if you have a feeble signal or no cellular connection.”

Lantin says that the acoustic innovations enabled for Orion will also translate directly into improved listening experiences for people interacting with the mission on earth.


“We are planning to have celebrities, politicians, STEM students and a variety of other personalities interacting with Alexa,” says Lantin. “And so, we also spent a good deal of time thinking about what people might want to ask Alexa about during the mission.”

The nuances of acoustics aboard Orion

Scott Isabelle is a solutions architect at Amazon. Prior to Amazon, Isabelle was a distinguished member of the technical staff at Motorola, where among other projects, he developed systems for enhancing voice quality in mobile devices, methods for generating adaptive ringtones, and a two-microphone system for noise suppression.

“One of the most important things for a voice AI is being in an environment where it is able to pick up your voice,” says Isabelle.


However, this is easier said than done on Orion, where the conical shape of the space capsule and its metallic surfaces result in increased reverberation.

“The voice can keep bouncing around, losing very little energy. This wouldn’t happen in a typical room, where soft materials like curtains and sofa cushions can absorb some of the sound. In the capsule, the reverberations off the metal surfaces can distort the frequencies that are critical to automatic speech recognition. This can make it really difficult for Alexa to pick up wake word invocations.”

Alexa also has to contend with increased noise levels aboard Orion.


The ideal signal-to-noise ratio (SNR) for systems involving intelligent voice assistants is in the range of 20 to 30 decibels (dB). To place this in context, an SNR of 35 dB is what you would find in a face-to-face conversation between two people standing one meter apart in a typical room (higher SNRs are better). However, the SNR onboard the Orion capsule can be much lower than 20 dB, posing an acoustic challenge.
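To make the arithmetic behind those figures concrete: SNR in decibels is ten times the base-10 logarithm of the ratio of signal power to noise power, so each 10 dB step is a tenfold change in that power ratio.

```python
import math

def snr_db(signal_power, noise_power):
    """Signal-to-noise ratio in decibels from two power values."""
    return 10 * math.log10(signal_power / noise_power)

# A 20 dB SNR means the speech carries 100x the noise power; 30 dB means 1000x.
print(snr_db(100, 1))   # 20.0
print(snr_db(1000, 1))  # 30.0

# Below the ideal range, the margin shrinks fast: at 10 dB the speech
# is only 10x the noise power.
print(snr_db(10, 1))    # 10.0
```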

To enhance the comfort of astronauts during crewed missions, NASA would ordinarily place acoustic blankets to damp down the reverberation in the hard-walled cabin, and some of the noise created by engines and pumps.

“However, because this is an uncrewed mission we have to work within an environment with more reverberation and noise than we would like,” says Isabelle.


There’s another challenge that results from the lack of humans on board. For Orion, commands to Alexa have to be sent from ground control. The low-bandwidth connections used for the transmission make it challenging to carry voice across the wide range of frequencies essential for differentiating between sounds.

During a typical phone call, our voice is transmitted in the narrow band, which ranges from 300 Hz to 3,000 Hz. For Alexa to make out individual words in the noisier environment of the space capsule, the voice would have to be transmitted at frequencies up to 8,000 Hz.

“Voice commands from mission control are transmitted to Alexa via a speaker,” says Isabelle. “Flight-qualified speakers are typically designed for narrow-band communications. And so for this mission we were required to use a speaker that could operate in the flight environment.”


The team relied on what Isabelle calls “brute force” to overcome these acoustic challenges.


“We designed the speaker playback system to play at extremely loud volumes, which allowed us to increase the SNR to where we wanted it to be.”

The team also took advantage of the physical form factor of Alexa on board to overcome the challenges presented by the noisy environment. The speakers, the light ring and the microphones in the briefcase-like enclosure for Alexa are close to each other, which allows acoustic engineers to overcome some of the obstacles presented by the background noise and reverberation.

Finally, the team deployed two microphones in combination with an array processing algorithm. The latter combined the signals from the two microphones in a way that helps Alexa make sense of the commands being issued from mission control. Because the speakers and microphones are in fixed positions relative to each other — as opposed to a room, where people can be located in any number of locations — the algorithms could be more easily designed to distinguish between speech and the surrounding noise.
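The fixed-geometry advantage can be illustrated with a toy delay-and-sum beamformer. The actual array-processing algorithm used on Orion is not described here, so this is only a sketch of the general principle: because the inter-microphone delay from a fixed speaker is known in advance, the speech can be aligned and summed coherently while uncorrelated noise partially cancels.

```python
import numpy as np

def delay_and_sum(mic1, mic2, delay_samples):
    """Align mic2 to mic1 by the known inter-mic delay, then average."""
    aligned = np.roll(mic2, -delay_samples)  # undo the known arrival delay
    return 0.5 * (mic1 + aligned)

rng = np.random.default_rng(0)
n, delay = 1000, 5
speech = np.sin(2 * np.pi * 0.01 * np.arange(n))   # the desired signal
mic1 = speech + 0.5 * rng.standard_normal(n)       # speech + independent noise
mic2 = np.roll(speech, delay) + 0.5 * rng.standard_normal(n)

out = delay_and_sum(mic1, mic2, delay)

# The speech adds in phase while the uncorrelated noise partially cancels,
# so the combined output sits closer to the clean signal than either mic alone.
print(np.mean((mic1 - speech) ** 2) > np.mean((out - speech) ** 2))  # True
```

Averaging two independent noise sources roughly halves the noise power, which is why a fixed two-microphone array with a known geometry buys real SNR headroom without any adaptive tracking.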


While the Orion mission will not have any crew members on board, the initial mission will lay the groundwork for Alexa to be integrated into future crewed missions — to the moon, Mars, and beyond. Having Alexa onboard in these future missions would allow crew members to be more efficient in day-to-day tasks, and benefit from the comforts of having Alexa on board, such as the ability to play relaxing music and to keep in touch with family and friends back home.

Future crewed missions would have their own unique set of challenges, where Alexa would have to respond to commands from astronauts, who might (literally) be free-floating at multiple points within the capsule. Isabelle and Lantin are already looking forward to overcoming the challenges posed by crewed missions.

“For someone who grew up watching Star Trek, working on this project has been a dream come true,” says Lantin. “It’s great to be able to build the future. But it’s just as exciting to be able to draw on all of this great work, and be able to enjoy all these new Alexa capabilities during my next vacation, and my day-to-day life right here at home.”

Editor's note

This is a reprint of an article that initially ran on the Alexa Skills Kit Blog. To learn more about the technical innovations that helped get Alexa into space and some inspiring facts about the Artemis I mission, visit the Skills Kit blog.

Research areas

Related content

US, WA, Seattle
The Sponsored Products and Brands (SPB) team at Amazon Ads is re-imagining the advertising landscape through state-of-the-art generative AI technologies, revolutionizing how millions of customers discover products and engage with brands across Amazon.com and beyond. We are at the forefront of re-inventing advertising experiences, bridging human creativity with artificial intelligence to transform every aspect of the advertising lifecycle from ad creation and optimization to performance analysis and customer insights. We are a passionate group of innovators dedicated to developing responsible and intelligent AI technologies that balance the needs of advertisers, enhance the shopping experience, and strengthen the marketplace. If you're energized by solving complex challenges and pushing the boundaries of what's possible with AI, join us in shaping the future of advertising. Curious about our advertising solutions? Discover more about Sponsored Products and Sponsored Brands to see how we’re helping businesses grow on Amazon.com and beyond! Key job responsibilities This role will redesign how ads create personalized, relevant shopping experiences with customer value at the forefront. Key responsibilities include: - Design and develop solutions using GenAI, deep learning, multi-objective optimization and/or reinforcement learning to transform ad retrieval, auctions, whole-page relevance, and shopping experiences. - Partner with scientists, engineers, and product managers to build scalable, production-ready science solutions. - Apply industry advances in GenAI, Large Language Models (LLMs), and related fields to create innovative prototypes and concepts. - Improve the team's scientific and technical capabilities by implementing algorithms, methodologies, and infrastructure that enable rapid experimentation and scaling. - Mentor junior scientists and engineers to build a high-performing, collaborative team. 
A day in the life As an Applied Scientist on the Sponsored Products and Brands Off-Search team, you will contribute to the development in Generative AI (GenAI) and Large Language Models (LLMs) to revolutionize our advertising flow, backend optimization, and frontend shopping experiences. This is a rare opportunity to redefine how ads are retrieved, allocated, and/or experienced—elevating them into personalized, contextually aware, and inspiring components of the customer journey. You will have the opportunity to fundamentally transform areas such as ad retrieval, ad allocation, whole-page relevance, and differentiated recommendations through the lens of GenAI. By building novel generative models grounded in both Amazon’s rich data and the world’s collective knowledge, your work will shape how customers engage with ads, discover products, and make purchasing decisions. If you are passionate about applying frontier AI to real-world problems with massive scale and impact, this is your opportunity to define the next chapter of advertising science. About the team The Off-Search team within Sponsored Products and Brands (SPB) is focused on building delightful ad experiences across various surfaces beyond Search on Amazon—such as product detail pages, the homepage, and store-in-store pages—to drive monetization. Our vision is to deliver highly personalized, context-aware advertising that adapts to individual shopper preferences, scales across diverse page types, remains relevant to seasonal and event-driven moments, and integrates seamlessly with organic recommendations such as new arrivals, basket-building content, and fast-delivery options. To execute this vision, we work in close partnership with Amazon Stores stakeholders to lead the expansion and growth of advertising across Amazon-owned and -operated pages beyond Search. 
We operate full stack—from backend ads-retail edge services, ads retrieval, and ad auctions to shopper-facing experiences—all designed to deliver meaningful value.
US, CA, Palo Alto
The Sponsored Products and Brands (SPB) team at Amazon Ads is re-imagining the advertising landscape through state-of-the-art generative AI technologies, revolutionizing how millions of customers discover products and engage with brands across Amazon.com and beyond. We are at the forefront of re-inventing advertising experiences, bridging human creativity with artificial intelligence to transform every aspect of the advertising lifecycle from ad creation and optimization to performance analysis and customer insights. We are a passionate group of innovators dedicated to developing responsible and intelligent AI technologies that balance the needs of advertisers, enhance the shopping experience, and strengthen the marketplace. If you're energized by solving complex challenges and pushing the boundaries of what's possible with AI, join us in shaping the future of advertising. Curious about our advertising solutions? Discover more about Sponsored Products and Sponsored Brands to see how we’re helping businesses grow on Amazon.com and beyond! Key job responsibilities This role will be pivotal in redesigning how ads contribute to a personalized, relevant, and inspirational shopping experience, with the customer value proposition at the forefront. Key responsibilities include, but are not limited to: - Contribute to the design and development of GenAI, deep learning, multi-objective optimization and/or reinforcement learning empowered solutions to transform ad retrieval, auctions, whole-page relevance, and/or bespoke shopping experiences. - Collaborate cross-functionally with other scientists, engineers, and product managers to bring scalable, production-ready science solutions to life. - Stay abreast of industry trends in GenAI, LLMs, and related disciplines, bringing fresh and innovative concepts, ideas, and prototypes to the organization. 
- Contribute to the enhancement of team’s scientific and technical rigor by identifying and implementing best-in-class algorithms, methodologies, and infrastructure that enable rapid experimentation and scaling. - Mentor and grow junior scientists and engineers, cultivating a high-performing, collaborative, and intellectually curious team. A day in the life As an Applied Scientist on the Sponsored Products and Brands Off-Search team, you will contribute to the development in Generative AI (GenAI) and Large Language Models (LLMs) to revolutionize our advertising flow, backend optimization, and frontend shopping experiences. This is a rare opportunity to redefine how ads are retrieved, allocated, and/or experienced—elevating them into personalized, contextually aware, and inspiring components of the customer journey. You will have the opportunity to fundamentally transform areas such as ad retrieval, ad allocation, whole-page relevance, and differentiated recommendations through the lens of GenAI. By building novel generative models grounded in both Amazon’s rich data and the world’s collective knowledge, your work will shape how customers engage with ads, discover products, and make purchasing decisions. If you are passionate about applying frontier AI to real-world problems with massive scale and impact, this is your opportunity to define the next chapter of advertising science. About the team The Off-Search team within Sponsored Products and Brands (SPB) is focused on building delightful ad experiences across various surfaces beyond Search on Amazon—such as product detail pages, the homepage, and store-in-store pages—to drive monetization. 
Our vision is to deliver highly personalized, context-aware advertising that adapts to individual shopper preferences, scales across diverse page types, remains relevant to seasonal and event-driven moments, and integrates seamlessly with organic recommendations such as new arrivals, basket-building content, and fast-delivery options. To execute this vision, we work in close partnership with Amazon Stores stakeholders to lead the expansion and growth of advertising across Amazon-owned and -operated pages beyond Search. We operate full stack—from backend ads-retail edge services, ads retrieval, and ad auctions to shopper-facing experiences—all designed to deliver meaningful value.
US, CA, Sunnyvale
Industrial Robotics Group is seeking exceptional talent to help develop the next generation of advanced robotics systems that will transform automation at Amazon's scale. We're building revolutionary robotic systems that combine innovative AI, sophisticated control systems, and advanced mechanical design to create adaptable automation solutions capable of working safely alongside humans in dynamic environments. This is a unique opportunity to shape the future of robotics and automation at unprecedented scale, working with world-class teams pushing the boundaries of what's possible in robotic manipulation, locomotion, and human-robot interaction. This role presents an opportunity to shape the future of robotics through innovative applications of deep learning and large language models. We leverage advanced robotics, machine learning, and artificial intelligence to solve complex operational challenges at unprecedented scale. Our fleet of robots operates across hundreds of facilities worldwide, working in sophisticated coordination to fulfill our mission of customer excellence. We are pioneering the development of robotics foundation models that: - Enable unprecedented generalization across diverse tasks - Integrate multi-modal learning capabilities (visual, tactile, linguistic) - Accelerate skill acquisition through demonstration learning - Enhance robotic perception and environmental understanding - Streamline development processes through reusable capabilities The ideal candidate will contribute to research that bridges the gap between theoretical advancement and practical implementation in robotics. You will be part of a team that's revolutionizing how robots learn, adapt, and interact with their environment. Join us in building the next generation of intelligent robotics systems that will transform the future of automation and human-robot collaboration. 
As an Applied Scientist, you will develop and improve machine learning systems that help robots perceive, reason, and act in real-world environments. You will leverage state-of-the-art models (open source and internal research), evaluate them on representative tasks, and adapt/optimize them to meet robustness, safety, and performance needs. You will invent new algorithms where gaps exist. You’ll collaborate closely with research, controls, hardware, and product-facing teams, and your outputs will be used by downstream teams to further customize and deploy on specific robot embodiments. Key job responsibilities As an Applied Scientist in the Foundations Model team, you will: - Leverage state-of-the-art models for targeted tasks, environments, and robot embodiments through fine-tuning and optimization. - Execute rapid, rigorous experimentation with reproducible results and solid engineering practices, closing the gap between sim and real environments. - Build and run capability evaluations/benchmarks to clearly profile performance, generalization, and failure modes. - Contribute to the data and training workflow: collection/curation, dataset quality/provenance, and repeatable training recipes. - Write clean, maintainable, well commented and documented code, contribute to training infrastructure, create tools for model evaluation and testing, and implement necessary APIs - Stay current with latest developments in foundation models and robotics, assist in literature reviews and research documentation, prepare technical reports and presentations, and contribute to research discussions and brainstorming sessions. - Work closely with senior scientists, engineers, and leaders across multiple teams, participate in knowledge sharing, support integration efforts with robotics hardware teams, and help document best practices and methodologies.
IN, KA, Bengaluru
Alexa+ is the world’s best Generative AI powered personal assistant / agent for consumers. We are seeking an experienced Applied Science Manager to build and lead a new team of scientists in India dedicated to Alexa Conversational Ads and Personalization. As the leader of this team, you will shape both the scientific roadmap and the product strategy, working closely with global product stakeholders to ensure your team is delivering high-impact, scalable solutions. Key job responsibilities - Hire, develop, and mentor a high-performing team of applied scientists. - Partner with product management and engineering leadership to define the mid-to-long-term scientific roadmap for conversational ads and personalization. - Manage the execution of complex ML projects, ensuring rigorous experimental design, high modeling standards, and on-time delivery. - Bridge the gap between science, engineering, and product, translating business metrics into scientific goals and vice versa. - Establish best practices for ML lifecycle management, code quality, and technical documentation within the team.
IN, KA, Bengaluru
Alexa+ is the world’s best Generative AI powered personal assistant / agent for consumers. We are looking for a Senior Applied Scientist to provide technical leadership for our Alexa Conversational Ads and Personalization initiatives. You will be responsible for tackling our most ambiguous scientific challenges, setting the technical architecture for new ML systems, and pushing the boundaries of what is possible in voice-based advertising. Key job responsibilities - Define the scientific vision and lead the technical execution for complex, multi-quarter ML projects in conversational ads and personalization. - Architect end-to-end machine learning systems that operate at Alexa's massive scale. - Mentor and guide junior scientists on modeling techniques, experimental design, and best practices. - Partner closely with product and engineering stakeholders to translate ambiguous business requirements into rigorous scientific problem statements. - Contribute to the broader scientific community through internal technical papers and external publications.
IN, KA, Bengaluru
Alexa+ is the world’s best Generative AI powered personal assistant / agent for consumers. We are seeking an Applied Scientist to join our newly expanding team in India focused on Alexa Conversational Ads and Personalization. In this role, you will build machine learning models that seamlessly and naturally integrate relevant advertising into the Alexa experience while deeply personalizing user interactions. You will work closely with other scientists, engineers, and product managers to take models from conception to production. Key job responsibilities - Design, develop, and evaluate innovative machine learning and deep learning models for natural language processing (NLP), recommendation systems, and personalization. - Conduct hands-on data analysis and build scalable ML pipelines. - Design and run A/B experiments to measure the impact of new models on customer experience and ad performance. - Collaborate with software development engineers to deploy models into high-scale, real-time production environments.
US, CA, San Francisco
The Amazon Center for Quantum Computing (CQC) is a multi-disciplinary team of scientists, engineers, and technicians, all working to innovate in quantum computing for the benefit of our customers. We are looking to hire an Applied Scientist to design and model novel superconducting quantum devices (including qubits), readout and control schemes, and advanced quantum processors. The ideal candidate will have a track record of original scientific contributions, strong engineering principles, and/or software development experience. Resourcefulness, as well as strong organizational and communication skills, is essential. About the team About the team The Amazon Center for Quantum Computing (CQC) is a multi-disciplinary team of scientists, engineers, and technicians, on a mission to develop a fault-tolerant quantum computer. Inclusive Team Culture Here at Amazon, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon conferences, inspire us to never stop embracing our uniqueness. Diverse Experiences Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance We value work-life harmony. 
Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Export Control Requirement Due to applicable export control laws and regulations, candidates must be either a U.S. citizen or national, U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum, or be able to obtain a US export license. If you are unsure if you meet these requirements, please apply and Amazon will review your application for eligibility. Export Control Requirement: Due to applicable export control laws and regulations, candidates must be either a U.S. citizen or national, U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum, or be able to obtain a U.S export license. If you are unsure if you meet these requirements, please apply and Amazon will review your application for eligibility.
US, CA, Sunnyvale
Amazon Industrial Robotics Group is seeking exceptional talent to help develop the next generation of advanced robotics systems that will transform automation at Amazon's scale. We're building revolutionary robotic systems that combine innovative AI, sophisticated control systems, and advanced mechanical design to create adaptable automation solutions capable of working safely alongside humans in dynamic environments. This is a unique opportunity to shape the future of robotics and automation at unprecedented scale, working with world-class teams pushing the boundaries of what's possible in robotic manipulation, locomotion, and human-robot interaction. This role presents an opportunity to shape the future of robotics through innovative applications of deep learning and large language models. We leverage advanced robotics, machine learning, and artificial intelligence to solve complex operational challenges at unprecedented scale. Our fleet of robots operates across hundreds of facilities worldwide, working in sophisticated coordination to fulfill our mission of customer excellence. We are pioneering the development of robotics foundation models that: - Enable unprecedented generalization across diverse tasks - Integrate multi-modal learning capabilities (visual, tactile, linguistic) - Accelerate skill acquisition through demonstration learning - Enhance robotic perception and environmental understanding - Streamline development processes through reusable capabilities The ideal candidate will contribute to research that bridges the gap between theoretical advancement and practical implementation in robotics. You will be part of a team that's revolutionizing how robots learn, adapt, and interact with their environment. Join us in building the next generation of intelligent robotics systems that will transform the future of automation and human-robot collaboration. 
As a Senior Applied Scientist, you will lead the development of machine learning systems that help robots perceive, reason, and act in real-world environments. You will set technical direction for adapting and advancing state-of-the-art models (open source and internal research) into robust, safe, and high-performing “robot brain” capabilities for our target tasks, environments, and robot embodiments. You will drive rigorous capability profiling and experimentation, lead targeted innovation where gaps exist, and partner across research, controls, hardware, and product teams to ensure outputs can be further customized and deployed on specific robots. Key job responsibilities - Lead technical initiatives for foundation-model capabilities (e.g., visuomotor / VLA / video-action worldmodel-action policies), from problem definition through validated model deliverables. - Own model readiness for our embodiment class: drive adaptation, fine-tuning, and optimization (latency/throughput/robustness), and define success criteria that downstream teams can build on. - Establish and evolve capability evaluation: define benchmark strategy, metrics, and profiling methodology to quantify performance, generalization, and failure modes; ensure evaluations drive clear roadmap decisions. - Drive the data + training strategy needed to close key capability gaps, including data requirements, collection/curation standards, dataset quality/provenance, and repeatable training recipes (sim + real). - Invent and validate new methods when leveraging SOTA is insufficient—new training schemes, model components, supervision signals, or sim↔real techniques—backed by strong empirical evidence. - Influence cross-team technical decisions by collaborating with controls/WBC, hardware, and product teams on interfaces, constraints, and integration plans; communicate results via design docs and technical reviews. 
- Mentor and raise the bar: guide junior scientists/engineers, set best practices for experimentation and code quality, and drive a culture of rigor and reproducibility.
US, CA, Sunnyvale
US, WA, Seattle
We are looking for a passionate Applied Scientist to help pioneer the next generation of agentic AI applications for Amazon advertisers. In this role, you will design agentic architectures, develop tools and datasets, and contribute to building systems that can reason, plan, and act autonomously across complex advertiser workflows. You will work at the forefront of applied AI, developing methods for fine-tuning, reinforcement learning, and preference optimization, while helping create evaluation frameworks that ensure safety, reliability, and trust at scale. You will work backwards from the needs of advertisers, delivering customer-facing products that directly help them create, optimize, and grow their campaigns. Beyond building models, you will advance the agent ecosystem by experimenting with and applying core primitives such as tool orchestration, multi-step reasoning, and adaptive preference-driven behavior. This role requires working independently on ambiguous technical problems and collaborating closely with scientists, engineers, and product managers to bring innovative solutions into production.

Key job responsibilities
- Design and build agents that guide advertisers in conversational and non-conversational experiences.
- Design and implement advanced model and agent optimization techniques, including supervised fine-tuning, instruction tuning, and preference optimization (e.g., DPO/IPO).
- Curate datasets and tools for MCP.
- Build evaluation pipelines for agent workflows, including automated benchmarks, multi-step reasoning tests, and safety guardrails.
- Develop agentic architectures (e.g., CoT, ToT, ReAct) that integrate planning, tool use, and long-horizon reasoning.
- Prototype and iterate on multi-agent orchestration frameworks and workflows.
- Collaborate with peers across engineering and product to bring scientific innovations into production.
- Stay current with the latest research in LLMs, RL, and agent-based AI, and translate findings into practical applications.

About the team
The Sponsored Products and Brands team at Amazon Ads is re-imagining the advertising landscape through the latest generative AI technologies, revolutionizing how millions of customers discover products and engage with brands across Amazon.com and beyond. We are at the forefront of re-inventing advertising experiences, bridging human creativity with artificial intelligence to transform every aspect of the advertising lifecycle, from ad creation and optimization to performance analysis and customer insights. We are a passionate group of innovators dedicated to developing responsible and intelligent AI technologies that balance the needs of advertisers, enhance the shopping experience, and strengthen the marketplace. If you're energized by solving complex challenges and pushing the boundaries of what's possible with AI, join us in shaping the future of advertising.

The Campaign Strategies team within Sponsored Products and Brands is focused on guiding and supporting 1.6 million advertisers as they create and manage ad campaigns. At this scale, the complexity of diverse advertiser goals, campaign types, and market dynamics creates both a massive technical challenge and a transformative opportunity: even small improvements in guidance systems can have outsized impact on advertiser success and Amazon's retail ecosystem. Our vision is to build a highly personalized, context-aware agentic advertiser guidance system that leverages LLMs together with tools such as auction simulations, ML models, and optimization algorithms. This agentic framework will operate across both chat and non-chat experiences in the ad console, scaling to natural-language queries as well as proactively delivering guidance based on a deep understanding of the advertiser.
To execute this vision, we collaborate closely with stakeholders across Ad Console, Sales, and Marketing to identify opportunities—from high-level product guidance down to granular keyword recommendations—and deliver them through a tailored, personalized experience. Our work is grounded in state-of-the-art agent architectures, tool integration, reasoning frameworks, and model customization approaches (including tuning, MCP, and preference optimization), ensuring our systems are both scalable and adaptive.
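The ReAct-style architectures named above interleave model "thoughts," tool calls, and tool observations until the agent produces an answer. As a rough illustration only (not Amazon's implementation), a minimal loop of this shape, with a stubbed model and a single hypothetical `get_budget` tool, might look like:

```python
from typing import Callable, Dict

def react_loop(model: Callable[[str], str],
               tools: Dict[str, Callable[[str], str]],
               question: str, max_steps: int = 5) -> str:
    """Run a Thought/Action/Observation loop until the model emits a final answer."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = model(transcript)               # model proposes the next Action or answer
        transcript += step + "\n"
        if step.startswith("Final Answer:"):
            return step[len("Final Answer:"):].strip()
        if step.startswith("Action:"):
            name, _, arg = step[len("Action:"):].strip().partition(" ")
            obs = tools.get(name, lambda a: "unknown tool")(arg)
            transcript += f"Observation: {obs}\n"  # feed the tool result back to the model
    return "no answer within step budget"

# Toy stand-in for an LLM: calls a tool once, then answers from the observation.
def toy_model(transcript: str) -> str:
    if "Observation:" in transcript:
        obs = transcript.rsplit("Observation: ", 1)[1].strip()
        return f"Final Answer: {obs}"
    return "Action: get_budget campaign-123"

tools = {"get_budget": lambda cid: f"budget for {cid} is $50/day"}
print(react_loop(toy_model, tools, "What is the budget for campaign-123?"))
# prints: budget for campaign-123 is $50/day
```

In a production agent, `toy_model` would be an LLM call and the tool registry would cover the auction simulations, ML models, and optimization algorithms the posting describes; the loop structure stays the same.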