zooxsensors.png
State-of-the-art sensors placed on each corner of the Zoox robotaxi enable it to ‘see’ in all directions simultaneously.

How the Zoox robotaxi predicts everything, everywhere, all at once

A combination of cutting-edge hardware, sensor technology, and bespoke machine learning approaches can predict the trajectories of vehicles, people, and even animals as far as 8 seconds into the future.

We humans often lament that we cannot predict the future, but perhaps we don’t give ourselves quite enough credit. With sufficient practice, our short-term predictive skills become truly remarkable.

Driving is a good example, particularly in urban environments. Navigating through a city, you become aware of a colossal number of dynamic aspects in your surroundings. The other cars — some moving, some stationary — pedestrians, cyclists, traffic lights changing. As you drive, your mind is generating predictions of how the universe around you is likely to manifest: “that car looks likely to pull out in front of me”; “that pedestrian is about to sleepwalk off the sidewalk – be ready to hit the brake”; “the front wheels of that parked car have just turned, so it’s about to move”.


Your power of prediction and anticipation throws a protective buffer zone around you, your passengers, and everyone in your vicinity as you travel from A to B. It is a broad yet very nuanced power, making it incredibly hard to recreate in real-world robotics applications.

Nevertheless, the teams at Zoox have achieved noteworthy success.

The integration of cutting-edge hardware, sensor technology, and bespoke machine learning (ML) approaches has resulted in an autonomous robotaxi that can predict the trajectories of vehicles, people, and even animals in its surroundings, as far as 8 seconds into the future — more than enough to enable the vehicle to make sensible and safe driving decisions.

“Predicting the future — the intentions and movements of other agents in the scene — is a core component of safe, autonomous driving,” says Kai Wang, director of the Zoox Prediction team.

Perceiving, predicting, planning

The AI stack at the center of the Zoox driving system broadly consists of three processes, which occur in order: perception, prediction, and planning. These equate to seeing the world and how everything around the vehicle is currently moving, predicting how everything will move next, and deciding how to move from A to B given those predictions.

The Perception team gathers high-resolution data from the vehicle’s dozens of sensors, which include visual cameras, LiDAR, radar, and longwave-infrared cameras. These sensors, positioned high on the four corners of the vehicle, provide an overlapping, 360-degree field of view that can extend for over a hundred meters. To borrow a popular phrase, this vehicle can see everything, everywhere, all at once.


The robotaxi already contains a detailed semantic map of its environment, called the Zoox Road Network (ZRN), which means it understands everything about local infrastructure, road rules, speed limits, intersection layouts, locations of traffic signals, and so on.

Perception quickly identifies and classifies the other cars, pedestrians, and cyclists in the scene, which are dubbed “agents.” And crucially, it tracks each agent’s velocity and current trajectory. These data are then combined with the ZRN to provide the Zoox vehicle with an incredibly detailed understanding of its environment.

Before these combined data are passed to Prediction, they are instantly boiled down to their essentials, into a format optimized for machine learning. To this end, what Prediction ultimately operates on is a top-down, spatially accurate graphical depiction of the vehicle and all the relevant dynamic and static aspects of its environment: a machine-readable, bird's-eye representation of the scene with the robotaxi at the center.

“We draw everything into a 2D image and present it to a convolutional neural network [CNN], which in turn determines what distances matter, what relationships between agents matter, and so on,” says Wang.

Learning from data-rich images

While a human can get the gist of this map, such as the relative positions of all the vehicles (represented by boxes) and pedestrians (different, smaller boxes) in the scene, it is not designed for human consumption, explains Andres Morales, staff software engineer.

zoonsceneprediction.png
A complex scene is converted into an image with many layers, each representing different semantic information. The result is fed into a convolutional neural network to generate predictions.

“This is not an RGB image. It’s got about 60 channels, or layers, which also include semantic information,” he notes. “For example, because someone holding a smartphone tends to behave differently, we might have one channel that represents a pedestrian holding their phone as a ‘1’ and a pedestrian with no phone as a ‘0’.”
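The idea of drawing semantic flags into dedicated channels can be sketched roughly as follows. This is a minimal illustration, not Zoox's actual format: the channel assignments, grid size, and agent dictionary are all assumptions made for the example.

```python
import numpy as np

# Illustrative channel layout (assumed): 0 = vehicle occupancy,
# 1 = pedestrian occupancy, 2 = pedestrian-holding-phone flag.
H, W = 256, 256          # top-down grid with the robotaxi at the center
N_CHANNELS = 60          # the article mentions roughly 60 channels

def rasterize_scene(agents):
    """Draw agents into a top-down, multi-channel image."""
    img = np.zeros((N_CHANNELS, H, W), dtype=np.float32)
    for agent in agents:
        r0, r1, c0, c1 = agent["box"]          # box extents in grid cells
        if agent["kind"] == "vehicle":
            img[0, r0:r1, c0:c1] = 1.0         # vehicle-occupancy channel
        elif agent["kind"] == "pedestrian":
            img[1, r0:r1, c0:c1] = 1.0         # pedestrian-occupancy channel
            # semantic flag channel: 1 if the pedestrian holds a phone
            img[2, r0:r1, c0:c1] = float(agent.get("on_phone", False))
    return img
```

A convolutional network consuming this tensor can then learn which spatial relationships and flags matter, as Wang describes.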

From this data-rich image, the ML system produces a probability distribution of potential trajectories for each and every dynamic agent in the scene, from trucks right down to that pet dog milling around near the crosswalk.

These predictions consider not only the current trajectory of each agent, but also include factors such as how cars are expected to behave on given road layouts, what the traffic lights are doing, the workings of crosswalks, and so on.
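A common way to represent such a distribution over futures, and plausibly what a system like this produces, is a fixed set of candidate trajectories per agent with a likelihood attached to each. The sketch below assumes K modes over an 8-second horizon at 10 Hz; the shapes and decoding scheme are illustrative, not Zoox's.

```python
import numpy as np

K = 5            # assumed number of trajectory modes per agent
T = 80           # 8 s horizon at 10 Hz -> 80 future timesteps

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def decode_predictions(head_output):
    """Split a flat network output into K trajectories plus probabilities.

    head_output: array of shape (K * T * 2 + K,) -- K (x, y) trajectories
    followed by K unnormalized mode scores.
    """
    trajs = head_output[: K * T * 2].reshape(K, T, 2)   # (x, y) per timestep
    probs = softmax(head_output[K * T * 2:])            # mode likelihoods
    return trajs, probs
```

Each mode corresponds to one possible path, like the green boxes in the truck example below, weighted by how likely the model considers it.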

zooxtruckpredictions.png
An example of a set of predictions for a truck navigating a 3-way intersection. The green boxes represent where the agent could be up to 6 seconds into the future, while the blue box represents where the agent actually went. Each path is a possible future generated by the Prediction system, with an associated likelihood.

These predictions typically extend up to 8 seconds into the future, but they are recalculated every tenth of a second as new information arrives from Perception.

These weighted predictions are delivered to the Planner component of the AI stack — the vehicle’s executive decision-maker — which uses them to decide how the Zoox vehicle will operate safely.

From perception through to planning, the whole process runs in real time; this robotaxi has lightning-quick reactions, should it need them.


The team can be confident of its predictions because it has a vast pool of data with which to train its ML algorithms — millions of road miles of high-resolution sensor data collected by the Zoox test fleet: Toyota Highlanders retrofitted with a sensor architecture nearly identical to the robotaxi’s, mapping and driving autonomously in San Francisco, Seattle, and Las Vegas.

An example of a Zoox vehicle negotiating a busy intersection in Las Vegas at night. The green boxes show the most likely prediction for each agent in the scene as far as 8 seconds into the future.

Zoox has a further advantage.

“We don’t need to label any data by hand, because our data show where things actually moved into the future,” says Wang. “My team doesn’t have a data problem. Our main challenge is that the future is inherently uncertain. Even humans cannot do this task perfectly.”

Utilizing graph neural networks

While perfect prediction is, by its nature, impossible, Wang’s team is taking steps on several fronts to raise the vehicle’s prediction capabilities to the next level, starting with a graph neural network (GNN) approach.

“Think of the GNN as a message-passing system by which all the agents and static elements in the scene are interconnected,” says Mahsa Ghafarianzadeh, senior software engineer on the Prediction team.

“What this enables is the explicit encoding of the relationships between all the agents in the scene, as well as the Zoox vehicle, and how these relationships might develop into the future.”
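The message-passing idea Ghafarianzadeh describes can be sketched in a few lines. This is a generic, minimal message-passing step over an agent graph, assuming simple linear transforms and a dense adjacency matrix; it is not Zoox's architecture.

```python
import numpy as np

def message_passing_step(node_feats, adj, w_msg, w_update):
    """One round of message passing between agents.

    node_feats: (N, D) per-agent feature vectors
    adj:        (N, N) adjacency matrix, 1 where agents can influence
                each other, 0 otherwise
    w_msg, w_update: (D, D) learned weight matrices (random here)
    """
    # each agent aggregates transformed features from its neighbors
    messages = adj @ (node_feats @ w_msg)
    # each agent updates its own state using its features plus messages
    return np.tanh(node_feats @ w_update + messages)
```

Stacking several such rounds lets information about one agent's likely movement propagate to every agent it might interact with, which is what makes the relationships in the scene explicit.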

A Zoox test vehicle navigating Las Vegas autonomously.

To give an everyday example, imagine yourself walking down the middle of a long corridor and seeing a stranger walking toward you, also in the middle of the corridor. That act of seeing each other is effectively the passing of a tacit message that would likely cause you both to alter your course slightly, so that by the time you reach each other, you won’t collide or require a sharp course-correction. That’s human nature.

This shows the output of Zoox models on the same initial scene, conditioned on different future actions the vehicle (green) is considering. Zoox can predict different yielding behavior from other cars depending on when its vehicle enters the intersection. The center animation even shows a predicted collision if the vehicle were to take that particular action.

This GNN approach yields more natural predicted behaviors for everyone around the Zoox vehicle, because the algorithm, trained on Zoox’s vast pool of real-world road data, better models how agents, on foot or in cars, affect each other’s behavior in the real world.


Another way the Prediction team is improving accuracy is by embracing the fact that what you do as a driver affects other drivers, which in turn affects you. For example, if you get into your parked car and pull out just a little into busy traffic, a driver coming up the road behind you may slow down or stop to let you out, or they may drive straight past, obliging you to wait for a better opportunity.

“Prediction doesn’t happen in a vacuum. Other people’s behaviors are dependent on how their world is changing. If you’re not capturing that within prediction, you’re limiting yourself,” says Wang.

Next steps

Work is now underway to integrate Prediction even more deeply with Planner, creating a feedback loop. Instead of simply receiving predictions and making a decision on how to proceed, the Planner can now interact with Prediction along these lines: “If I perform action X, or Y, or Z, how are the agents in my vicinity likely to adjust their own behavior in each case?”
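That feedback loop amounts to planner-conditioned prediction: score each candidate ego action under the agent reactions it would induce. The sketch below shows the shape of that interaction; `predict_fn` and `score_fn` are hypothetical placeholders standing in for the Prediction query and the Planner's cost evaluation, not Zoox APIs.

```python
def choose_action(candidate_actions, scene, predict_fn, score_fn):
    """Pick the ego action that scores best given predicted reactions.

    predict_fn(scene, ego_action=...) -> agent futures conditioned on
        the ego vehicle committing to that action
    score_fn(action, agent_futures) -> scalar, e.g. safety plus progress
    """
    best_action, best_score = None, float("-inf")
    for action in candidate_actions:
        # "If I perform action X, how will nearby agents adjust?"
        agent_futures = predict_fn(scene, ego_action=action)
        score = score_fn(action, agent_futures)
        if score > best_score:
            best_action, best_score = action, score
    return best_action
```

The key difference from a one-shot pipeline is that Prediction is queried once per candidate action rather than once per cycle, so the plan accounts for how other agents would respond to it.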


In this way, the Zoox robotaxi will become even more naturalistic and adept at negotiating with other vehicles, while also creating a smoother-flowing ride for its customers.

“The team and I started to work on this new mode a couple years ago, just as a research project,” says Morales, “and now we’re focused on its integration, ironing everything out, reducing latency, and generally making it production-ready.”

The ever-increasing sophistication of the Zoox robotaxi’s predictive abilities is a clear source of pride for the team dedicated to it.

“I’ve been in this team for over five years. I’ve seen Prediction grow from being just three source code files implementing basic heuristics to predict trajectories to where it is now, at the cutting edge of deep learning. It’s incredible how fast everything is evolving,” says Ghafarianzadeh.

Indeed, at this rate, the Zoox robotaxi may ultimately become the most prescient vehicle on the road. Though that prediction comes with the usual caveat: Nobody can perfectly predict the future.
