Anton van den Hengel

Anton van den Hengel’s journey from intellectual property law to computer vision pioneer

Amazon’s director of applied science in Adelaide, Australia, believes the economic value of computer vision has “gone through the roof”.

Anton van den Hengel, an international pioneer in computer vision and its many applications, departed the University of Adelaide in South Australia to join Amazon as director of applied science in April 2020. He is creating a new, world-class machine-learning hub in Adelaide and supporting Amazon’s business through the development and application of state-of-the-art computer vision and scalable machine learning.


In 2018, van den Hengel became the founding director of the Australian Institute for Machine Learning (AIML), Australia’s first institute dedicated to machine learning research. When he left to join Amazon, AIML was 140 people strong and near the top of the world institutional rankings for computer vision research. He remains the part-time director of AIML’s new Centre for Augmented Reasoning, whose mission is to build core artificial intelligence (AI) capability in Australia.

Van den Hengel has authored more than 300 research papers, commercialized eight patents, and been chief investigator on research projects funded by many Fortune 500 companies.

But it could all have been so different. The young van den Hengel first got into computer science simply to support his efforts to become an intellectual property lawyer. In fact, he completed his law degree.

Amazon in Australia
Research teams in Adelaide are developing state-of-the-art, large-scale machine learning methods and applications involving terabytes of data. They work on applying ML, and particularly computer vision, to a wide spectrum of areas.

“I’d bought the suit, tie, and bright white shirt and was all ready to start my first day as an entry level lawyer,” he recalls. “Then, instead, I turned around and went straight back into the University of Adelaide. I spent the next couple of decades there.”

What followed was a master’s degree, then a PhD in computer science and, ultimately, building up the University of Adelaide’s forerunner to AIML, the Australian Centre for Visual Technologies.

The chance to have an impact

What turned van den Hengel around was the chance to study computer vision.

“I saw the opportunity to engage with something that I realized was going to have incredible impact,” he says. Computer vision and its applications are everywhere today, but in the early 1990s, things were very different. “It's hard to believe now but at the time there were maybe 1000 people in the world working on computer vision, at a time when there weren't any digital cameras,” he reminisces. “Most papers in CV were at least half about how people had taken the images.”


Van den Hengel understood that humans are primarily visual animals and he clearly saw the inevitability of computers using vision to sense, and ultimately interact with, the world. “But back then, having a computer that could actually either measure or impact upon the real world was virtually unbelievable,” he says.

Since then, he says, computer vision has transformed from a heavily mathematical field with 300 people at every conference who all knew each other, to conferences of many thousands of people and auditoria full of companies trying to attract staff and sell things.

“The economic value of computer vision has gone through the roof,” he says.

Computer vision is a fundamental technology, van den Hengel says, because it relates the real world to symbols. “Humans reason about things in terms of symbols, so ‘cat’, ‘sky’, ‘car’, ‘road’, and ‘fish’ are all symbols, right? Computer vision takes visual signals from the real world and relates those signals to symbols,” he says.
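The signals-to-symbols idea can be sketched as a toy nearest-prototype classifier. This is purely illustrative (the prototype vectors and labels are invented for this sketch, not drawn from any real vision system), but it captures the mapping van den Hengel describes: a numeric visual signal in, a symbol out.

```python
import math

# Toy illustration: computer vision as a mapping from visual signals
# (here, tiny 4-value "pixel" vectors) to symbols (labels).
# These prototype vectors are invented for demonstration only.
PROTOTYPES = {
    "sky": [0.9, 0.9, 0.9, 0.9],   # bright, uniform region
    "road": [0.3, 0.3, 0.3, 0.3],  # dark, uniform region
    "cat": [0.8, 0.2, 0.7, 0.1],   # high-contrast texture
}

def signal_to_symbol(pixels):
    """Return the symbol whose prototype is nearest to the pixel vector."""
    def distance(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(PROTOTYPES, key=lambda s: distance(pixels, PROTOTYPES[s]))

print(signal_to_symbol([0.85, 0.9, 0.95, 0.9]))  # -> sky
```

Real systems replace the hand-picked prototypes with features learned by deep neural networks, but the essential step is the same: relating a raw signal to a symbol a human can reason about.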

“That's been the critical missing piece of the puzzle. For decades it was predicted that by the year 2000 we would have robots doing the housework and many other ‘magical’ things, but we came up short because there's an infinite variation of things out there in the real world and it's much harder to get a computer to reason about our physical environment than anybody imagined.”

Looking for answers

This missing piece is tackled by a subfield of computer vision known as visual question answering (VQA). The idea is to enable computers not only to understand the content of an image (or video/livestream) in a more semantic, human-like way, but also to answer questions posed in natural language about that image. For example, “Where was this photo taken?”, “Does it look like the person on the picnic blanket is expecting someone?”, “What’s the color of the dog nearest the stop sign?”.

Van den Hengel is the world’s most-cited researcher in VQA by an enormous margin, with close to 22,000 citations.


“I got into it very early because I saw it as a threshold change in the way that artificial intelligence works,” van den Hengel says. “What's interesting about VQA is that you ask the question at run-time and need the answer immediately, so it needs to be very flexible, unlike current machine learning applications, which are often fixed, single-purpose solutions to specific problems.”

In other words, it needs to be closer to true artificial intelligence – often referred to as artificial general intelligence.

In that vein, imagine a robot that could follow natural-language instructions, based on a greater understanding of what it sees around itself. It’s a sci-fi dream, but for how much longer?

In 2018, using a vision-and-language process similar to VQA, van den Hengel and a team of colleagues from across Australia developed a simulator that uses imagery taken from the inside of real buildings to teach virtual agents to successfully navigate using visually grounded instructions, such as: “Head upstairs and walk past the piano through an archway directly in front. Turn right when the hallway ends at pictures and table. Wait by the moose antlers hanging on the wall.” It is only a matter of time before we can talk to our self-driving cars in a similar manner when necessary, says van den Hengel.

The power of neural networks

Rapid developments in machine learning are behind the recent supercharging of computer vision research.

“In the last 10 years of computer vision, we have essentially trained deep-learning neural networks to replace all of these lovely computer-vision algorithms that we'd previously come up with for solving a whole bunch of problems,” he says. “In fact, neural networks are so much better at it, they went from being just an interesting solution to a puzzle to being a practical solution to some of the core challenges we face.”

While at the University of Adelaide, van den Hengel applied advances in ML and computer vision to make the world better in a variety of ways. These included working with Adelaide-based medical technology company LBT Innovations to create an automated pathology machine called APAS (Automated Plate Assessment System) Independence, which can screen and interpret high volumes of pathology plates.

“There's a shortage of trained pathologists, partly because it's not a lot of fun sitting all day doing chemistry and looking at samples. APAS does the drudge work of the visual inspection process,” he says. The device was FDA approved in 2019.

Beyond computer vision, van den Hengel is currently the chief investigator for the Australian National Health and Medical Research Council’s Centre of Research Excellence in Healthy Housing, which is using ML to help deliver better outcomes within the Australian housing system, not only in terms of housing, but also in terms of health.

“People who are homeless suffer diseases and injuries, which put them into hospital, and homelessness can see people spiral into a set of difficult conditions that are very expensive for society to address,” he says. “It's actually cheaper to house somebody than to fix the impact of homelessness. So where can we intervene in the housing process in a way that benefits everybody and also saves money?”

Not all of van den Hengel’s work is quite so serious, however.

“The paper I'm most happy about but that gets the least recognition is one that tells you how to build real Lego models of objects in images,” he says. “It’s got brilliant maths in it; some of my favorite maths. And it incorporates gravity, structural considerations and, you know, fantastic maths.” And did he mention the maths?

Van den Hengel has even used ML to design an IPA beer.

“Collecting the data was a real trauma: we had to drink, and rate, a lot of beer,” he laments. He named the resulting ale The Rodney, in homage to the Australian AI researcher and roboticist Rodney Brooks, whose work resulted in the Roomba vacuum cleaner.

Joining Amazon

Always an advocate for Australia on the world stage, van den Hengel was keen to play a leading role in Amazon’s research push into the country. “It was a fantastic opportunity to start a new group in Australia for a company like Amazon.”

Typically, when academics transition to Amazon, they talk about the increase in pace from academia to industry. Van den Hengel bucks that trend.

“I was running a group with 140 people, trying to make enough money to pay them, keep the doors open, deliver on projects for tens of millions of dollars, doing PR, you name it,” he says. “Here, I've got about 25 world-class people with PhDs who work for me and 12 interns.”

Van den Hengel noted that Amazon is a results-focused environment. “At Amazon you are expected to deliver, but you do it with an engineering team and support systems all geared towards delivering customer benefit.”

So what is van den Hengel delivering on? One current project applies visual inspection methods to help ensure that Amazon customers get the best fresh produce possible.


“Visual inspection is a magnificent challenge and a core problem in computer vision,” he says, “and addressing it means we can make sure that when a customer receives a delivery of, say, tomatoes, they are as perfect as can be.”

Another key project uses computer vision and ML to build a deeper understanding of the hundreds of millions of items in the ever-changing Amazon catalogue. The catalogue holds a trove of information, both in the word-based product descriptions and in the images supplied by sellers.

“Making the most of the information contained in these two sources of information – which is essentially what humans do – is an interesting challenge, because it relies on the relationships between visual signals and symbols,” he explains, adding that cracking this challenge will help customers who are using Amazon search find the product that best matches their need “even if they're not entirely sure how best to specify it themselves.”

Despite the considerable demands of managing a growing team, van den Hengel is determined to remain hands-on with his own research. “Amazon's an innovative company, and really, truly innovating in a way that's going to provide something of value to customers that nobody else can means that you need managers who deeply understand where the technology can go,” he says.

So where is the technology going?

“I think the whole retail field is moving towards a better understanding of the nature of objects in the world and how humans relate to those objects, or products,” he says. “And that's something that computer vision is particularly well-placed to deliver.”

Browse through the open science positions in Amazon's Australia offices.

Research areas

Related content

US, WA, Seattle
The Sponsored Products and Brands (SPB) team at Amazon Ads is re-imagining the advertising landscape through state-of-the-art generative AI technologies, revolutionizing how millions of customers discover products and engage with brands across Amazon.com and beyond. We are at the forefront of re-inventing advertising experiences, bridging human creativity with artificial intelligence to transform every aspect of the advertising lifecycle from ad creation and optimization to performance analysis and customer insights. We are a passionate group of innovators dedicated to developing responsible and intelligent AI technologies that balance the needs of advertisers, enhance the shopping experience, and strengthen the marketplace. If you're energized by solving complex challenges and pushing the boundaries of what's possible with AI, join us in shaping the future of advertising. Curious about our advertising solutions? Discover more about Sponsored Products and Sponsored Brands to see how we’re helping businesses grow on Amazon.com and beyond! Key job responsibilities This role will redesign how ads create personalized, relevant shopping experiences with customer value at the forefront. Key responsibilities include: - Design and develop solutions using GenAI, deep learning, multi-objective optimization and/or reinforcement learning to transform ad retrieval, auctions, whole-page relevance, and shopping experiences. - Partner with scientists, engineers, and product managers to build scalable, production-ready science solutions. - Apply industry advances in GenAI, Large Language Models (LLMs), and related fields to create innovative prototypes and concepts. - Improve the team's scientific and technical capabilities by implementing algorithms, methodologies, and infrastructure that enable rapid experimentation and scaling. - Mentor junior scientists and engineers to build a high-performing, collaborative team. 
A day in the life As an Applied Scientist on the Sponsored Products and Brands Off-Search team, you will contribute to the development in Generative AI (GenAI) and Large Language Models (LLMs) to revolutionize our advertising flow, backend optimization, and frontend shopping experiences. This is a rare opportunity to redefine how ads are retrieved, allocated, and/or experienced—elevating them into personalized, contextually aware, and inspiring components of the customer journey. You will have the opportunity to fundamentally transform areas such as ad retrieval, ad allocation, whole-page relevance, and differentiated recommendations through the lens of GenAI. By building novel generative models grounded in both Amazon’s rich data and the world’s collective knowledge, your work will shape how customers engage with ads, discover products, and make purchasing decisions. If you are passionate about applying frontier AI to real-world problems with massive scale and impact, this is your opportunity to define the next chapter of advertising science. About the team The Off-Search team within Sponsored Products and Brands (SPB) is focused on building delightful ad experiences across various surfaces beyond Search on Amazon—such as product detail pages, the homepage, and store-in-store pages—to drive monetization. Our vision is to deliver highly personalized, context-aware advertising that adapts to individual shopper preferences, scales across diverse page types, remains relevant to seasonal and event-driven moments, and integrates seamlessly with organic recommendations such as new arrivals, basket-building content, and fast-delivery options. To execute this vision, we work in close partnership with Amazon Stores stakeholders to lead the expansion and growth of advertising across Amazon-owned and -operated pages beyond Search. 
We operate full stack—from backend ads-retail edge services, ads retrieval, and ad auctions to shopper-facing experiences—all designed to deliver meaningful value.
US, CA, Palo Alto
The Sponsored Products and Brands (SPB) team at Amazon Ads is re-imagining the advertising landscape through state-of-the-art generative AI technologies, revolutionizing how millions of customers discover products and engage with brands across Amazon.com and beyond. We are at the forefront of re-inventing advertising experiences, bridging human creativity with artificial intelligence to transform every aspect of the advertising lifecycle from ad creation and optimization to performance analysis and customer insights. We are a passionate group of innovators dedicated to developing responsible and intelligent AI technologies that balance the needs of advertisers, enhance the shopping experience, and strengthen the marketplace. If you're energized by solving complex challenges and pushing the boundaries of what's possible with AI, join us in shaping the future of advertising. Curious about our advertising solutions? Discover more about Sponsored Products and Sponsored Brands to see how we’re helping businesses grow on Amazon.com and beyond! Key job responsibilities This role will be pivotal in redesigning how ads contribute to a personalized, relevant, and inspirational shopping experience, with the customer value proposition at the forefront. Key responsibilities include, but are not limited to: - Contribute to the design and development of GenAI, deep learning, multi-objective optimization and/or reinforcement learning empowered solutions to transform ad retrieval, auctions, whole-page relevance, and/or bespoke shopping experiences. - Collaborate cross-functionally with other scientists, engineers, and product managers to bring scalable, production-ready science solutions to life. - Stay abreast of industry trends in GenAI, LLMs, and related disciplines, bringing fresh and innovative concepts, ideas, and prototypes to the organization. 
- Contribute to the enhancement of team’s scientific and technical rigor by identifying and implementing best-in-class algorithms, methodologies, and infrastructure that enable rapid experimentation and scaling. - Mentor and grow junior scientists and engineers, cultivating a high-performing, collaborative, and intellectually curious team. A day in the life As an Applied Scientist on the Sponsored Products and Brands Off-Search team, you will contribute to the development in Generative AI (GenAI) and Large Language Models (LLMs) to revolutionize our advertising flow, backend optimization, and frontend shopping experiences. This is a rare opportunity to redefine how ads are retrieved, allocated, and/or experienced—elevating them into personalized, contextually aware, and inspiring components of the customer journey. You will have the opportunity to fundamentally transform areas such as ad retrieval, ad allocation, whole-page relevance, and differentiated recommendations through the lens of GenAI. By building novel generative models grounded in both Amazon’s rich data and the world’s collective knowledge, your work will shape how customers engage with ads, discover products, and make purchasing decisions. If you are passionate about applying frontier AI to real-world problems with massive scale and impact, this is your opportunity to define the next chapter of advertising science. About the team The Off-Search team within Sponsored Products and Brands (SPB) is focused on building delightful ad experiences across various surfaces beyond Search on Amazon—such as product detail pages, the homepage, and store-in-store pages—to drive monetization. 
Our vision is to deliver highly personalized, context-aware advertising that adapts to individual shopper preferences, scales across diverse page types, remains relevant to seasonal and event-driven moments, and integrates seamlessly with organic recommendations such as new arrivals, basket-building content, and fast-delivery options. To execute this vision, we work in close partnership with Amazon Stores stakeholders to lead the expansion and growth of advertising across Amazon-owned and -operated pages beyond Search. We operate full stack—from backend ads-retail edge services, ads retrieval, and ad auctions to shopper-facing experiences—all designed to deliver meaningful value.
US, CA, Sunnyvale
Industrial Robotics Group is seeking exceptional talent to help develop the next generation of advanced robotics systems that will transform automation at Amazon's scale. We're building revolutionary robotic systems that combine innovative AI, sophisticated control systems, and advanced mechanical design to create adaptable automation solutions capable of working safely alongside humans in dynamic environments. This is a unique opportunity to shape the future of robotics and automation at unprecedented scale, working with world-class teams pushing the boundaries of what's possible in robotic manipulation, locomotion, and human-robot interaction. This role presents an opportunity to shape the future of robotics through innovative applications of deep learning and large language models. We leverage advanced robotics, machine learning, and artificial intelligence to solve complex operational challenges at unprecedented scale. Our fleet of robots operates across hundreds of facilities worldwide, working in sophisticated coordination to fulfill our mission of customer excellence. We are pioneering the development of robotics foundation models that: - Enable unprecedented generalization across diverse tasks - Integrate multi-modal learning capabilities (visual, tactile, linguistic) - Accelerate skill acquisition through demonstration learning - Enhance robotic perception and environmental understanding - Streamline development processes through reusable capabilities The ideal candidate will contribute to research that bridges the gap between theoretical advancement and practical implementation in robotics. You will be part of a team that's revolutionizing how robots learn, adapt, and interact with their environment. Join us in building the next generation of intelligent robotics systems that will transform the future of automation and human-robot collaboration. 
As an Applied Scientist, you will develop and improve machine learning systems that help robots perceive, reason, and act in real-world environments. You will leverage state-of-the-art models (open source and internal research), evaluate them on representative tasks, and adapt/optimize them to meet robustness, safety, and performance needs. You will invent new algorithms where gaps exist. You’ll collaborate closely with research, controls, hardware, and product-facing teams, and your outputs will be used by downstream teams to further customize and deploy on specific robot embodiments. Key job responsibilities As an Applied Scientist in the Foundations Model team, you will: - Leverage state-of-the-art models for targeted tasks, environments, and robot embodiments through fine-tuning and optimization. - Execute rapid, rigorous experimentation with reproducible results and solid engineering practices, closing the gap between sim and real environments. - Build and run capability evaluations/benchmarks to clearly profile performance, generalization, and failure modes. - Contribute to the data and training workflow: collection/curation, dataset quality/provenance, and repeatable training recipes. - Write clean, maintainable, well commented and documented code, contribute to training infrastructure, create tools for model evaluation and testing, and implement necessary APIs - Stay current with latest developments in foundation models and robotics, assist in literature reviews and research documentation, prepare technical reports and presentations, and contribute to research discussions and brainstorming sessions. - Work closely with senior scientists, engineers, and leaders across multiple teams, participate in knowledge sharing, support integration efforts with robotics hardware teams, and help document best practices and methodologies.
IN, KA, Bengaluru
Alexa+ is the world’s best Generative AI powered personal assistant / agent for consumers. We are seeking an experienced Applied Science Manager to build and lead a new team of scientists in India dedicated to Alexa Conversational Ads and Personalization. As the leader of this team, you will shape both the scientific roadmap and the product strategy, working closely with global product stakeholders to ensure your team is delivering high-impact, scalable solutions. Key job responsibilities - Hire, develop, and mentor a high-performing team of applied scientists. - Partner with product management and engineering leadership to define the mid-to-long-term scientific roadmap for conversational ads and personalization. - Manage the execution of complex ML projects, ensuring rigorous experimental design, high modeling standards, and on-time delivery. - Bridge the gap between science, engineering, and product, translating business metrics into scientific goals and vice versa. - Establish best practices for ML lifecycle management, code quality, and technical documentation within the team.
IN, KA, Bengaluru
Alexa+ is the world’s best Generative AI powered personal assistant / agent for consumers. We are looking for a Senior Applied Scientist to provide technical leadership for our Alexa Conversational Ads and Personalization initiatives. You will be responsible for tackling our most ambiguous scientific challenges, setting the technical architecture for new ML systems, and pushing the boundaries of what is possible in voice-based advertising. Key job responsibilities - Define the scientific vision and lead the technical execution for complex, multi-quarter ML projects in conversational ads and personalization. - Architect end-to-end machine learning systems that operate at Alexa's massive scale. - Mentor and guide junior scientists on modeling techniques, experimental design, and best practices. - Partner closely with product and engineering stakeholders to translate ambiguous business requirements into rigorous scientific problem statements. - Contribute to the broader scientific community through internal technical papers and external publications.
IN, KA, Bengaluru
Alexa+ is the world’s best Generative AI powered personal assistant / agent for consumers. We are seeking an Applied Scientist to join our newly expanding team in India focused on Alexa Conversational Ads and Personalization. In this role, you will build machine learning models that seamlessly and naturally integrate relevant advertising into the Alexa experience while deeply personalizing user interactions. You will work closely with other scientists, engineers, and product managers to take models from conception to production. Key job responsibilities - Design, develop, and evaluate innovative machine learning and deep learning models for natural language processing (NLP), recommendation systems, and personalization. - Conduct hands-on data analysis and build scalable ML pipelines. - Design and run A/B experiments to measure the impact of new models on customer experience and ad performance. - Collaborate with software development engineers to deploy models into high-scale, real-time production environments.
US, CA, San Francisco
The Amazon Center for Quantum Computing (CQC) is a multi-disciplinary team of scientists, engineers, and technicians, all working to innovate in quantum computing for the benefit of our customers. We are looking to hire an Applied Scientist to design and model novel superconducting quantum devices (including qubits), readout and control schemes, and advanced quantum processors. The ideal candidate will have a track record of original scientific contributions, strong engineering principles, and/or software development experience. Resourcefulness, as well as strong organizational and communication skills, is essential. About the team About the team The Amazon Center for Quantum Computing (CQC) is a multi-disciplinary team of scientists, engineers, and technicians, on a mission to develop a fault-tolerant quantum computer. Inclusive Team Culture Here at Amazon, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon conferences, inspire us to never stop embracing our uniqueness. Diverse Experiences Amazon values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. Work/Life Balance We value work-life harmony. 
Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Export Control Requirement Due to applicable export control laws and regulations, candidates must be either a U.S. citizen or national, U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum, or be able to obtain a US export license. If you are unsure if you meet these requirements, please apply and Amazon will review your application for eligibility. Export Control Requirement: Due to applicable export control laws and regulations, candidates must be either a U.S. citizen or national, U.S. permanent resident (i.e., current Green Card holder), or lawfully admitted into the U.S. as a refugee or granted asylum, or be able to obtain a U.S export license. If you are unsure if you meet these requirements, please apply and Amazon will review your application for eligibility.
US, CA, Sunnyvale
Amazon Industrial Robotics Group is seeking exceptional talent to help develop the next generation of advanced robotics systems that will transform automation at Amazon's scale. We're building revolutionary robotic systems that combine innovative AI, sophisticated control systems, and advanced mechanical design to create adaptable automation solutions capable of working safely alongside humans in dynamic environments. This is a unique opportunity to shape the future of robotics and automation at unprecedented scale, working with world-class teams pushing the boundaries of what's possible in robotic manipulation, locomotion, and human-robot interaction. This role presents an opportunity to shape the future of robotics through innovative applications of deep learning and large language models. We leverage advanced robotics, machine learning, and artificial intelligence to solve complex operational challenges at unprecedented scale. Our fleet of robots operates across hundreds of facilities worldwide, working in sophisticated coordination to fulfill our mission of customer excellence. We are pioneering the development of robotics foundation models that: - Enable unprecedented generalization across diverse tasks - Integrate multi-modal learning capabilities (visual, tactile, linguistic) - Accelerate skill acquisition through demonstration learning - Enhance robotic perception and environmental understanding - Streamline development processes through reusable capabilities The ideal candidate will contribute to research that bridges the gap between theoretical advancement and practical implementation in robotics. You will be part of a team that's revolutionizing how robots learn, adapt, and interact with their environment. Join us in building the next generation of intelligent robotics systems that will transform the future of automation and human-robot collaboration. 
As a Senior Applied Scientist, you will lead the development of machine learning systems that help robots perceive, reason, and act in real-world environments. You will set technical direction for adapting and advancing state-of-the-art models (open source and internal research) into robust, safe, and high-performing “robot brain” capabilities for our target tasks, environments, and robot embodiments. You will drive rigorous capability profiling and experimentation, lead targeted innovation where gaps exist, and partner across research, controls, hardware, and product teams to ensure outputs can be further customized and deployed on specific robots.

Key job responsibilities
- Lead technical initiatives for foundation-model capabilities (e.g., visuomotor / VLA / video-action / world-model-action policies), from problem definition through validated model deliverables.
- Own model readiness for our embodiment class: drive adaptation, fine-tuning, and optimization (latency/throughput/robustness), and define success criteria that downstream teams can build on.
- Establish and evolve capability evaluation: define benchmark strategy, metrics, and profiling methodology to quantify performance, generalization, and failure modes; ensure evaluations drive clear roadmap decisions.
- Drive the data and training strategy needed to close key capability gaps, including data requirements, collection/curation standards, dataset quality/provenance, and repeatable training recipes (sim + real).
- Invent and validate new methods when leveraging SOTA is insufficient (new training schemes, model components, supervision signals, or sim↔real techniques), backed by strong empirical evidence.
- Influence cross-team technical decisions by collaborating with controls/WBC, hardware, and product teams on interfaces, constraints, and integration plans; communicate results via design docs and technical reviews.
- Mentor and raise the bar: guide junior scientists/engineers, set best practices for experimentation and code quality, and drive a culture of rigor and reproducibility.
US, WA, Seattle
We are looking for a passionate Applied Scientist to help pioneer the next generation of agentic AI applications for Amazon advertisers. In this role, you will design agentic architectures, develop tools and datasets, and contribute to building systems that can reason, plan, and act autonomously across complex advertiser workflows. You will work at the forefront of applied AI, developing methods for fine-tuning, reinforcement learning, and preference optimization, while helping create evaluation frameworks that ensure safety, reliability, and trust at scale. You will work backwards from the needs of advertisers, delivering customer-facing products that directly help them create, optimize, and grow their campaigns. Beyond building models, you will advance the agent ecosystem by experimenting with and applying core primitives such as tool orchestration, multi-step reasoning, and adaptive preference-driven behavior. This role requires working independently on ambiguous technical problems and collaborating closely with scientists, engineers, and product managers to bring innovative solutions into production.

Key job responsibilities
- Design and build agents that guide advertisers in conversational and non-conversational experiences.
- Design and implement advanced model and agent optimization techniques, including supervised fine-tuning, instruction tuning, and preference optimization (e.g., DPO/IPO).
- Curate datasets and tools for MCP.
- Build evaluation pipelines for agent workflows, including automated benchmarks, multi-step reasoning tests, and safety guardrails.
- Develop agentic architectures (e.g., CoT, ToT, ReAct) that integrate planning, tool use, and long-horizon reasoning.
- Prototype and iterate on multi-agent orchestration frameworks and workflows.
- Collaborate with peers across engineering and product to bring scientific innovations into production.
- Stay current with the latest research in LLMs, RL, and agent-based AI, and translate findings into practical applications.

About the team
The Sponsored Products and Brands team at Amazon Ads is re-imagining the advertising landscape through the latest generative AI technologies, revolutionizing how millions of customers discover products and engage with brands across Amazon.com and beyond. We are at the forefront of re-inventing advertising experiences, bridging human creativity with artificial intelligence to transform every aspect of the advertising lifecycle, from ad creation and optimization to performance analysis and customer insights. We are a passionate group of innovators dedicated to developing responsible and intelligent AI technologies that balance the needs of advertisers, enhance the shopping experience, and strengthen the marketplace. If you're energized by solving complex challenges and pushing the boundaries of what's possible with AI, join us in shaping the future of advertising.

The Campaign Strategies team within Sponsored Products and Brands is focused on guiding and supporting 1.6MM advertisers as they create and manage ad campaigns. At this scale, the complexity of diverse advertiser goals, campaign types, and market dynamics creates both a massive technical challenge and a transformative opportunity: even small improvements in guidance systems can have outsized impact on advertiser success and Amazon’s retail ecosystem. Our vision is to build a highly personalized, context-aware agentic advertiser guidance system that leverages LLMs together with tools such as auction simulations, ML models, and optimization algorithms. This agentic framework will operate across both chat and non-chat experiences in the ad console, scaling to natural language queries as well as proactively delivering guidance based on deep understanding of the advertiser.
To execute this vision, we collaborate closely with stakeholders across Ad Console, Sales, and Marketing to identify opportunities, from high-level product guidance down to granular keyword recommendations, and deliver them through a tailored, personalized experience. Our work is grounded in state-of-the-art agent architectures, tool integration, reasoning frameworks, and model customization approaches (including tuning, MCP, and preference optimization), ensuring our systems are both scalable and adaptive.