Swarm robotics involves scores of individual mobile robots that mimic the collective behavior demonstrated by animals. Certain robots, like the Bluebot pictured here, perform some of the same behaviors as a school of fish, such as aggregation, dispersion, and searching.
Courtesy of Radhika Nagpal, Harvard University

Schooling robots to behave like fish

Radhika Nagpal has created robots that can build towers without anyone in charge. Now she’s turned her focus to fulfillment center robots.

When Radhika Nagpal was starting graduate school in 1994, she and her future husband went snorkeling in the Caribbean. Nagpal, who grew up in a landlocked region of India, had never swum in the ocean before. It blew her away.

“The reef was super healthy and colorful, like being in a National Geographic television show,” she recalled. “As soon as I put my face in the water, this whole swarm of fish came towards me and then swerved to the right.”

Meet the Blueswarm
Blueswarm comprises seven identical miniature Bluebots that combine autonomous 3D multi-fin locomotion with 3D camera-based visual perception.

The fish fascinated her. As she watched, large schools of fish would suddenly stop or switch direction as if they were guided by a single mind. A series of questions occurred to her. How did they communicate with one another? What rules — think of them as algorithms — produced such complex group behaviors? What environmental prompts triggered their actions? And most importantly, what made collectives so much smarter and more successful than their individual members?

Radhika Nagpal is a professor of computer science at Harvard University’s Wyss Institute for Biologically Inspired Engineering and an Amazon Scholar.

Since then, Nagpal, a professor of computer science at Harvard University’s Wyss Institute for Biologically Inspired Engineering and an Amazon Scholar, has gone on to build swarming robots. Swarm robotics involves scores of individual mobile robots that mimic the collective behavior demonstrated by animals, such as flocks of birds or schools of fish moving together to achieve some end. The robots act as if they, too, were guided by a single mind, or, more precisely, a single computer. Yet they are not.

Instead, they follow a relatively simple set of behavioral rules. Without any external orders or directions, Nagpal’s swarms organize themselves to carry out surprisingly complex tasks, like spontaneously synchronizing their behavior, creating patterns, and even building a tower.

More recently, her lab developed swimming robots that performed some of the same behaviors as a school of fish, such as aggregation, dispersion, and searching. All without a leader.

Nagpal’s work demonstrates both how far we have come in creating self-organizing robot swarms that can perform tasks — and how far we still must go to emulate the complex tapestries woven by nature. It is a gap that Nagpal hopes to close by uncovering the secrets of swarm intelligence to make swarm robots far more useful.

Amorphous computing

The Caribbean fish sparked Nagpal’s imagination because she was already interested in distributed computing, where multiple computers collaborate to solve problems or transfer information without any single computer running the show. At MIT, where she had begun her PhD program, she was drawn to an offshoot of the field called amorphous computing. It investigates how limited, unreliable individuals — from cells to ants to fish — organize themselves to perform often complex tasks consistently without any hierarchies.

Amorphous computing was “hardware agnostic”: it sought rules that guide such behavior in both living organisms and computer systems. It asked, for example, how identical cells in an embryo form all the organs of an animal, how ants find the most direct route to food, or how fish coordinate their movements. By studying nature, these computer scientists hoped to build computer networks that operated on the same principles.

I got excited about how nature makes these complicated, distributed, mobile networks. Those multi-robot systems became a new direction of my research
Radhika Nagpal

After completing her doctoral work on self-folding materials inspired by how cells form tissues, Nagpal began teaching at Harvard. While there, she was visited by her friend James McLurkin, a pioneer in swarm robotics at MIT and iRobot.

“James is the one that got me into robot swarms by introducing me to all the things that ant and termite colonies do,” Nagpal said. “I got excited about how nature makes these complicated, distributed, mobile networks. James was developing robots that used similar principles to move around and work together. Those multi-robot systems became a new direction of my research.”

She was particularly taken by Namibian termites, which build large-scale nest mounds with multiple chambers and complex ventilation systems, often as high as 8 feet tall.

“As far as we know, there isn’t a blueprint or an a priori distribution between who’s doing the building and who is not. We know the queen does not set the agenda,” she explained. “These colonies start with hundreds of termites and expand their structure as they grow.”

The question fascinated her. “I have no idea how that works,” she said. “I mean, how do you create systems that are so adaptive?”

Finding the rules

Researchers have spent decades answering that question. One way, they found, is to act locally. Take, for example, a flock of geese at a pond. If one or two birds on the outside of the flock see a predator, they grow agitated and fly off, alerting the next nearest birds. The message percolates through the flock. Once a certain number of birds have “voted” to fly off, the rest follow without any hesitation. They are not following a leader, only reacting to the birds next to them.
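That kind of local, quorum-based reaction can be captured in a few lines of code. The sketch below is an illustration rather than a model of real geese: it assumes each agent sees only its immediate neighbors and takes off once a fixed fraction of them has done so, and the threshold and line-of-ten geometry are arbitrary.

```python
# Minimal sketch of quorum-based startle propagation. Each agent reacts only
# to its nearest neighbors; once enough of them have taken flight, it takes
# flight too. All names and values here are illustrative.
QUORUM_FRACTION = 0.3   # fraction of neighbors that must already be airborne

def step(flock, neighbors_of):
    """One synchronous update in which agents consult only their neighbors."""
    next_state = {}
    for agent, airborne in flock.items():
        if airborne:
            next_state[agent] = True   # already flying
            continue
        nbrs = neighbors_of[agent]
        airborne_nbrs = sum(flock[n] for n in nbrs)
        next_state[agent] = airborne_nbrs >= QUORUM_FRACTION * len(nbrs)
    return next_state

# Toy example: ten agents in a line, each seeing its one or two immediate neighbors.
flock = {i: False for i in range(10)}
flock[0] = True   # the bird on the edge spots the predator
neighbors_of = {i: [j for j in (i - 1, i + 1) if 0 <= j < 10] for i in range(10)}
for _ in range(10):
    flock = step(flock, neighbors_of)
print(flock)   # the startle has percolated through the whole line
```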

How dynamic circle formation works

The same type of local behaviors could be used to make driverless vehicles safer. An autonomous vehicle, Nagpal explains, does not have to reason about all the other cars on the road, only the ones around it. By focusing on nearby vehicles, these distributed systems use less processing power without losing the ability to react to changes very quickly.

Such systems are highly scalable. “Instead of having to reason about everybody, your car only has to reason about its five neighbors,” Nagpal said. “I can make the system very large, but each individual’s reasoning space remains constant. That’s a traditional notion of scalable — the amount of processing per vehicle stays constant, but we’re allowed to increase the size of the system.”
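Here is a toy illustration of that scalability argument, assuming each vehicle can query the positions of others nearby. The neighbor count of five and the brute-force lookup are placeholders (a real system would use a sensing radius or a spatial index), but the point stands: each vehicle's reasoning is bounded by its neighborhood, not by the size of the fleet.

```python
# Toy sketch of constant-per-agent reasoning: however large the fleet grows,
# each vehicle only considers a fixed number of nearby neighbors.
import math

K_NEIGHBORS = 5

def local_neighborhood(me, positions, k=K_NEIGHBORS):
    """Return the k vehicles closest to `me` (illustrative brute-force lookup)."""
    others = [v for v in positions if v != me]
    others.sort(key=lambda v: math.dist(positions[me], positions[v]))
    return others[:k]

def plan_step(me, positions):
    # Collision checks and path adjustments would only consider this short
    # list, never the whole fleet.
    return local_neighborhood(me, positions)

positions = {f"car{i}": (i * 3.0, (i % 7) * 1.5) for i in range(100)}
print(plan_step("car0", positions))   # only the 5 nearest cars, not all 99
```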

Another key to swarm behavior involves embodied intelligence, the idea that brains interact with the world through bodies that can see, hear, touch, smell, and taste. This is a type of intelligence, too, Nagpal argues.

It’s almost like each individual fish acts like a distributed sensor. Instead of me doing all the work, somebody on the left can say, ‘Hey, I saw something.’ When the group divides the labor so that some of us look out for predators while the rest of us eat, it costs less in terms of energy and resources.
Radhika Nagpal

“When you think of an ant, there is not a concentrated set of neurons there,” she said, referring to the ant’s 20-microgram brain. “Instead, there is a huge amount of awareness in the body itself. I may wonder how an ant solves a problem, but I have to realize that somehow having a physical body full of sensors makes that easier. We do not really understand how to think about that still.”

Local actions, scalable behavior, and embodied intelligence are among the factors that make swarms successful. In fact, researchers have shown that the larger a school of fish, the more successful it is at evading predators, finding food, and not getting lost.

“It’s almost like each individual fish acts like a distributed sensor,” Nagpal said. “Instead of me doing all the work, somebody on the left can say, ‘Hey, I saw something.’ When the group divides the labor so that some of us look out for predators while the rest of us eat, it costs less in terms of energy and resources than trying to eat and look out for predators all by yourself.

“What’s really interesting about large insect colonies and fish schools is that they do really complicated things in a decentralized way, whereas people have a tendency to build hierarchies as soon as we have to work together,” she continued. “There is a cost to that, and if we try to do that with robots, we replicate the whole management structure and cost of a hierarchy.”

So Nagpal set out to build robot swarms that worked without top-down organization.

Animal behavior

A typical process in Nagpal’s group starts by identifying an interesting natural behavior and trying to discover the rules that generate those actions. Sometimes, they are surprisingly simple.

Take, for example, some behaviors exhibited by Nagpal’s colony of 1,000 interactive robots, each the size of a quarter and each communicating wirelessly with its nearest neighbors. The robots will self-assemble into a simple line with a repeating color pattern based on only two rules: a motion rule that allows them to move around any stationary robots, and a pattern rule that tells them to take on the color of their two nearest neighbors.
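A stripped-down, one-dimensional version of those two rules shows how a repeating pattern can emerge from purely local decisions. In this sketch the motion rule is abstracted away (each new robot is assumed to have edge-followed the line until it reached the free end), and the three-color cycle is an illustrative simplification of what the real robots do.

```python
# Toy 1-D sketch of the two local rules: a motion rule (keep moving along the
# stationary robots until you reach the end of the line) and a pattern rule
# (pick your color from the neighbor you stop next to). Illustrative only.
COLORS = ["red", "green", "blue"]

def pattern_rule(neighbor_color):
    """Take the next color in the repeating sequence after your neighbor's."""
    return COLORS[(COLORS.index(neighbor_color) + 1) % len(COLORS)]

def build_line(num_robots):
    line = ["red"]   # a seed robot anchors the pattern
    for _ in range(num_robots - 1):
        # Motion rule (abstracted): the newcomer edge-follows the stationary
        # robots and stops at the free end of the line.
        line.append(pattern_rule(line[-1]))
    return line

print(build_line(9))
# ['red', 'green', 'blue', 'red', 'green', 'blue', 'red', 'green', 'blue']
```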

Other combinations of simple rules spontaneously synchronize the blinking of robot lights, guide migrations, and get the robots to form the letter “K.”

Most impressively, Nagpal and her lab used a behavior found in termites, called stigmergy, to prompt self-organized robot swarms to build a tower. Stigmergy involves leaving a mark on the environment that triggers a specific behavior by another member of the group.

Stigmergy plays a role in how termites build their huge nests. One termite may sense that a spot would make a good place to build, so it puts down its equivalent of a mud brick. When a second termite comes along, the brick triggers it to place its brick there. As the number of bricks increases, the trigger grows stronger, and other termites begin building pillars nearby. When the pillars grow high enough, something triggers the termites to begin connecting them with roofs.

“The building environment has become a physical memory of what should happen next,” Nagpal said.
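A simple way to see how marks on the environment act as memory is a deposit rule in which the probability of building at a site grows with the amount of material already nearby. The grid, neighborhood, and probabilities below are illustrative assumptions, not a model of real termites or of the lab's robots, but they reproduce the basic effect: early deposits amplify later ones, and pillars emerge.

```python
# Minimal sketch of a stigmergic deposit rule: the more material already at or
# near a site, the more likely the next agent is to build there.
import random

SIZE = 20
grid = [[0] * SIZE for _ in range(SIZE)]   # brick count at each site

def local_bricks(x, y):
    """Bricks in the 3x3 neighborhood around (x, y)."""
    return sum(grid[i][j]
               for i in range(max(0, x - 1), min(SIZE, x + 2))
               for j in range(max(0, y - 1), min(SIZE, y + 2)))

def termite_step():
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    # Deposit probability grows with what is already there: the environment
    # itself is the memory of what should happen next.
    p = min(1.0, 0.05 + 0.2 * local_bricks(x, y))
    if random.random() < p:
        grid[x][y] += 1

for _ in range(20000):
    termite_step()
# After many steps the bricks cluster into a few tall piles rather than
# spreading uniformly, because existing deposits recruit new ones.
```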

Nagpal used that type of structural memory to prompt her robotic swarm to build a ziggurat tower. The instructions included a motion rule about how to move through the tower and a pattern rule about where to place the blocks. She then built small, block-carrying robots that assembled a smaller but no less impressive structure.

Her lab developed a compiler that could generate algorithms that would enable the robots to build specific types of structures — perhaps towers with minarets — by interacting with stigmergic physical memories. One day, algorithm-driven robots could move sandbags to shore up a levee in a hurricane or buttress a collapsed building. They could even monitor coral reefs, underwater infrastructure, and pipelines — if they could swim.

Schooling robofish

From the start, Nagpal wanted to build her own school of robotic fish, but the hardware was simply too clunky to make them practical. That changed with the advent of smartphones, with their low-cost, low-power processors, sensors, and batteries.

In 2018, she got her chance when she received an Amazon Machine Learning Research Award. This allowed her to build Blueswarm, a group of robotic fish that performed tasks like those she observed in the Caribbean years ago.

Each Bluebot is just four inches long, but it packs a small Raspberry Pi computer, two fish-eye cameras, and three blue LED lights. It also has a tail (caudal) fin for thrust, a dorsal fin to move up or down, and side fins (pectoral fins) to turn, stop, or swim backward.

Bluebots do not rely on Wi-Fi, GPS, or external cameras to share their exact positions. Instead, Nagpal wants to explore what behaviors are possible relying only on onboard cameras and local perception of one’s schoolmates.

How multi-behavior search works

Researchers, she explained, find it difficult to rely only upon local perception. That makes it hard to tackle fundamental questions, such as how a robot visually detects other members of the swarm, how it parses that information, and what happens when one member moves in front of another. Limiting Bluebot sensing to local perception forces Nagpal and her team to think more deeply about what robots really need to know about their neighbors, especially when data is limited and imprecise.

Bluebots can mimic several fish school behaviors by tracking the LED lights on the neighboring robots around them. Using their cameras and simple algorithms, they estimate distance from the apparent spacing between the lights on a neighboring fish. (The closer together the lights appear, the farther away the fish.)
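That distance cue follows directly from the pinhole-camera relationship between an object's real size and its apparent size in the image. The sketch below shows the idea; the focal length and LED spacing are assumed placeholder values, not Bluebot specifications.

```python
# Sketch of monocular distance estimation from the apparent separation of two
# LEDs on a neighboring robot, using the standard pinhole-camera relation:
# distance = focal_length * real_separation / apparent_separation.
FOCAL_LENGTH_PX = 800.0   # camera focal length in pixels (assumed)
LED_SPACING_M = 0.05      # physical spacing between the LEDs (assumed)

def estimate_distance(pixel_separation):
    """The closer together the LEDs appear, the farther away the neighbor."""
    if pixel_separation <= 0:
        raise ValueError("the LEDs must be resolved as distinct points")
    return FOCAL_LENGTH_PX * LED_SPACING_M / pixel_separation

print(estimate_distance(40.0))   # 1.0 m
print(estimate_distance(80.0))   # 0.5 m: double the apparent spacing, half the range
```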

Nagpal’s seven Bluebots form a circle (called milling) by turning right if there is a robot in front of them. If there is no robot, they turn left. After a few moments, the school will be swimming in a circle, a formation fish use to trap prey.
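The milling behavior reduces to a single conditional per control step. The sketch below captures that rule; the turn rate and field-of-view values are illustrative assumptions rather than the Bluebots' actual parameters.

```python
# Minimal sketch of the milling rule: turn one way if a neighbor is visible in
# the forward cone, the other way if not. Parameters are illustrative.
import math

TURN_RATE = 0.15   # radians per control step (assumed)
HALF_FOV = 0.6     # half-angle of the "robot ahead of me" cone, radians (assumed)

def sees_robot_ahead(my_pos, my_heading, neighbor_positions):
    """True if any neighbor falls inside the forward cone."""
    for nx, ny in neighbor_positions:
        bearing = math.atan2(ny - my_pos[1], nx - my_pos[0])
        relative = (bearing - my_heading + math.pi) % (2 * math.pi) - math.pi
        if abs(relative) < HALF_FOV:
            return True
    return False

def milling_turn(my_pos, my_heading, neighbor_positions):
    """Turn right if a robot is visible ahead, otherwise turn left."""
    if sees_robot_ahead(my_pos, my_heading, neighbor_positions):
        return -TURN_RATE   # turn right
    return TURN_RATE        # turn left

# Applied every control step by every robot, this single local rule settles
# the school into a circle, each robot following the one ahead of it.
```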

They can also search for a target: a flashing red light. First, the school disperses within the tank. When a Bluebot finds the red LED, it begins to flash its own lights. This signals the nearest Bluebots to aggregate, followed by the rest. If a single robot had to conduct a similar search by itself, it would take significantly longer.
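In code, that two-phase strategy amounts to switching between a dispersion rule and an aggregation rule depending on whether anyone is flashing. The simulation below reduces the robots to points in a square tank; the tank size, step length, and detection radius are made-up values for illustration, not the actual Bluebot controller.

```python
# Sketch of the disperse-then-aggregate search: robots spread out until one
# sees the red target and flashes, then the rest converge on the flasher.
import math
import random

TANK = 2.0    # tank modeled as a 2 m square (assumed)
STEP = 0.05   # distance moved per control step (assumed)

def search_step(robots, target, detect_radius=0.3):
    """robots: list of dicts with a 'pos' (x, y) tuple and a 'flashing' flag."""
    for r in robots:
        if math.dist(r["pos"], target) < detect_radius:
            r["flashing"] = True   # found the red light: signal the school
    flashers = [r for r in robots if r["flashing"]]
    for r in robots:
        if r["flashing"]:
            continue   # hold position near the target
        if flashers:
            # Aggregation: head toward the nearest flashing robot.
            goal = min(flashers, key=lambda f: math.dist(r["pos"], f["pos"]))["pos"]
        else:
            # Dispersion: head directly away from the nearest neighbor.
            nearest = min((o for o in robots if o is not r),
                          key=lambda o: math.dist(r["pos"], o["pos"]))
            goal = (2 * r["pos"][0] - nearest["pos"][0],
                    2 * r["pos"][1] - nearest["pos"][1])
        dx, dy = goal[0] - r["pos"][0], goal[1] - r["pos"][1]
        norm = math.hypot(dx, dy) or 1.0
        r["pos"] = (min(TANK, max(0.0, r["pos"][0] + STEP * dx / norm)),
                    min(TANK, max(0.0, r["pos"][1] + STEP * dy / norm)))

robots = [{"pos": (random.uniform(0, TANK), random.uniform(0, TANK)), "flashing": False}
          for _ in range(7)]
for _ in range(200):
    search_step(robots, target=(1.7, 1.7))
```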

These behaviors are impressive for robots, but represent a small subset of fish school behaviors. They also take place in a static fish tank populated by only one school of robot fish. To go further, Nagpal wants to improve their sensors and perhaps use machine learning to discover new rules that could be combined to produce the aquatic equivalent of a tower.

In the end, though, Nagpal does not want to build a better fish. Instead, she wants to apply the lessons she has learned to real-world robots. She is doing just that during a sabbatical working at Amazon, which operates the largest fleet of robots — more than 200,000 units — in the world.

Practical uses

Nagpal had little previous experience working in industry, but she jumped at the chance to work with Amazon.

“There are few others with hundreds of robots moving around safely in a facility space,” she said. “And the opportunity to work on algorithms in a deployed system was very exciting."

There are few others [like Amazon] with hundreds of robots moving around safely in a facility space. And the opportunity to work on algorithms in a deployed system was very exciting.
Radhika Nagpal

“The other factor is that Amazon’s robots do a mix of centralized and decentralized decision-making," she continued. "The robots plan their own paths, but they also use the cloud to know more. That lets us ask: Is it better to know everything about all your neighbors all the time? Or is it better to only know about the neighbors that are closer to you?”

Her current focus is on sortation centers, where robots help route packages to shipping stations sorted by ZIP code. Not surprisingly, robots setting out from multiple points to dozens of different locations require a degree of coordination. Amazon’s robots are already aware of other robots; if they see one, they will choose an alternate route. But which path should they take? Nagpal wants to make sure those robots are making the most effective choices possible.

Cities already manage this. They limit access to some roads, change speed limits, and add one-way streets. Computer networks do it as well, rerouting traffic when packet delivery slows down.

Some of those concepts, such as one-way travel lanes, also work in sortation centers. They could act as stigmergic signals to guide robot behavior. She also believes there might be a way to create simple swarm behaviors that enable robots to react to advanced data about incoming packages.

Once her sabbatical is over, Nagpal plans to return to the lab. She wants to keep working on her Bluebots, improving their vision, and turning them loose in environments that look more like the coral reef she went snorkeling in 25 years ago.

She is also dreaming of swarms of bigger robots for use in construction or trash collection.

“Maybe we could do what Amazon is doing, but do it outside,” she said. “We could have swarms of robots that actually do some sort of practical task. At Amazon, that task is delivery. But given Boston’s snowstorms, I think shoveling the sidewalks would be nice.”

The Customer Engagement Technology team leads AI/LLM-driven customer experience transformation using task-oriented dialogue systems. We develop multi-modal, multi-turn, goal-oriented dialog systems that can handle customer issues at Amazon scale across multiple languages. These systems are designed to adapt to changing company policies and invoke correct APIs to automate solutions to customer problems. Additionally, we enhance associate productivity through response/action recommendation, summarization to capture conversation context succinctly, retrieving precise information from documents to provide useful information to the agent, and machine translation to facilitate smoother conversations when the customer and agent speak different languages. Key job responsibilities Research and development of LLM-based chatbots and conversational AI systems for customer service applications. Design and implement state-of-the-art NLP and ML models for tasks such as language understanding, dialogue management, and response generation. Collaborate with cross-functional teams, including data scientists, software engineers, and product managers, to integrate LLM-based solutions into Amazon's customer service platforms. 4. Develop and implement strategies for data collection, annotation, and model training to ensure high-quality and robust performance of the chatbots. Conduct experiments and evaluations to measure the performance of the developed models and systems, and identify areas for improvement. Stay up-to-date with the latest advancements in NLP, LLMs, and conversational AI, and explore opportunities to incorporate new techniques and technologies into Amazon's customer service solutions. Collaborate with internal and external research communities, participate in conferences and publications, and contribute to the advancement of the field. A day in the life Amazon offers a full range of benefits that support you and eligible family members, including domestic partners and their children. Benefits can vary by location, the number of regularly scheduled hours you work, length of employment, and job status such as seasonal or temporary employment. The benefits that generally apply to regular, full-time employees include: 1. Medical, Dental, and Vision Coverage 2. Maternity and Parental Leave Options 3. Paid Time Off (PTO) 4. 401(k) Plan If you are not sure that every qualification on the list above describes you exactly, we'd still love to hear from you! At Amazon, we value people with unique backgrounds, experiences, and skillsets. If you’re passionate about this role and want to make an impact on a global scale, please apply!