De'Aira Bryant, who has done two internships at Amazon and is a fourth-year computer science PhD student at the Georgia Institute of Technology, is seen posing in front of a wall with transportation logos and Amazon Web Services written on it
De'Aira Bryant, who has done two internships at Amazon, is a fourth-year computer science PhD student at the Georgia Institute of Technology, where her research focuses on the application of robotics in health care and rehabilitation.
Courtesy of De'Aira Bryant

How De’Aira Bryant found her path into robotics

The computer scientist recently finished her second internship at Amazon, where she worked on a new way to estimate the expression on faces in images.

Growing up in Estill, South Carolina, De’Aira Bryant didn’t know she was interested in computer science until she was persuaded to explore the field by her mother, who noted that computer scientists have good career prospects and get to do interesting work.

“I was handy with making flyers and doing the programs for church, that type of thing,” Bryant says. “She somehow convinced me that was computer science and I had no way to know better.”

In her first class as a computer science major at the University of South Carolina (UofSC), she realized that she didn’t really know what computer science entailed. “I was completely out of my league, coming from a small town with no computer science or robotics background at all.”

De'Aira Bryant is seen standing on a stage with a screen elevated above her in the background showing robots at her TEDx Talk
At her TEDx talk, De'Aira Bryant discussed how lessons from society's technological past can shed light on embracing a future with social robots.
Courtesy of De'Aira Bryant

Bryant immediately wanted to change her major, but Karina Liles — the graduate teaching assistant and the only female TA in the program at that time — convinced her to stay. “We were doing that ‘Hello, World!’ program and I was like: Do you want me to type it on Word? What do you mean, I'm writing a program?” Bryant remembers Liles looked at her in astonishment and set out to help her.

After the initial shock, Bryant started to thrive.

“It actually worked out for me, because I've always been really good at math, I also got a minor in math. And later I realized that what I actually like is logic, which was perfect for a computer science student at UofSC, because a lot of courses focused on the principles of logic.”

It turned out her mother was right after all.

Today, she’s a fourth-year computer science PhD student at the Georgia Institute of Technology, where her research focuses on the application of robotics in health care and rehabilitation. Over the years, Bryant has received research awards, given a TEDx Talk, and even programmed a robot that starred in a movie. Having recently completed her second internship at Amazon Web Services (AWS), she still finds time to think about fun and exciting ways to make computer science more accessible to diverse populations.

Making robots dance (and act)

Right after her first class, Bryant was invited by Liles, the TA, to do an internship at the Assistive Robotics and Technology Lab (ART lab), headed by Jenay Beer, who was Liles’ advisor at the time and also played a crucial role in Bryant’s education at UofSC. (Currently, Liles is a professor at Claflin University and Beer is a professor at the University of Georgia.) Bryant didn’t think twice before accepting.

“I have my own desk, and I’m getting paid? Sign me up! What better job could there be?” she remembers thinking. She worked on designing systems for children in schools that did not have computer science curriculums, using robots as a method of engagement and exposure.

Initially, she would prepare the robots for studies, take them into the field, and watch kids interact with them. Later, she got to take crash courses to learn how to program them. “I don't think I was interested in robotics until I got to see how they were used, their application in the real world,” she says. The fact that she loved seeing them in action made her want to learn how to make them work.

As an undergrad, she started to program these robots to do short dance moves. She posted those clips to her social media, which piqued the curiosity of kids who followed her.

An unexpected journey: De'Aira Bryant

“I thought, ‘I'm going to trick them into asking more questions and I'm going to recruit more computer scientists by posting robots dancing,’” she says. “That kind of turned into a thing. Now I have a whole social media presence on making robots dance and do cool stuff.”

Bryant is deeply interested in changing the way computer science is taught.

From a culturally relevant perspective, a lot of the ways that we teach these concepts can miss the mark with a lot of students, especially students who come from minority backgrounds.
De'Aira Bryant

“From a culturally relevant perspective, a lot of the ways that we teach these concepts can miss the mark with a lot of students, especially students who come from minority backgrounds.” She says that throughout her computer science curriculum, a lot of the examples and problems proposed by the professors were not relevant to her. “I would completely rewrite the problem and that was how I was able to make it through my undergrad and graduate education.”

Currently, her main research at the Georgia Institute of Technology focuses on applications of robotics in rehabilitation for children who have motor and cognitive disabilities.

“That kind of attracted me and now we have more robots and more resources and we’re linked with rehabilitative therapy centers in Atlanta and getting to work in those places as well,” she said.

Bryant still uses the expertise she acquired with the dancing robots. When HBO Max was filming the movie Superintelligence on Georgia Tech’s campus in 2019 and wanted to add cool futuristic robot scenes, Bryant’s adviser, Ayanna Howard, who today is dean and professor in the College of Engineering at Ohio State University, said she would be the right person for the job.

She had two weeks to prepare.

By the time she got to the set, the script had changed and she ended up having to redo the work on the set. “I was programming in real-time. And I think the movie people were so excited about that. They were standing over my shoulders saying, 'You’re actually coding.'” Bryant got to meet Melissa McCarthy, the star of the movie, and teach her kids how to make the robot move. “They all wanted pictures with the robot. I felt like my robot was the biggest star on the set.”

Interning at Amazon

Bryant then met Nashlie Sephus, a machine learning technology evangelist for AWS, at the National GEM Consortium Fellowship conference in 2019 (Bryant is a current GEM fellow and Sephus is an alum). After Bryant presented her research during a competition, Sephus approached her. “She said, ‘The work you're doing is very similar to what my team is doing at Amazon, and I think it would be really awesome if you came to work with us’,” Bryant recalls.

Sephus focuses on fairness and identifying biases in artificial intelligence, areas that Bryant was beginning to explore. She applied to the 2020 summer internship, went through the interview process, and got to work directly with Sephus.

During Bryant’s first AWS internship, she worked on bias auditing of services that estimate the expression on faces in images, an active area of research within academia and industry. In Bryant’s robotics health care research at Georgia Tech, the robots utilize emotion estimation to help identify what the patient they're working with is feeling, which informs what they should do or say next.

This summer, during her second AWS internship, Bryant researched how to potentially improve the way the emotion being expressed on a person’s face is estimated. As with other emotion-estimation research within Amazon, this entails making a determination about the physical appearance of a person’s face; it is not a determination of the person’s internal emotional state. Currently, researchers generally train machine learning models for that type of estimation by annotating numerous face images, labeling each image with a single emotion: happiness, sadness, surprise, disgust, or anger.

“We see that a lot of people disagree in their interpretations of the expressions on some faces. And what normally happens if a face has too many people disagreeing on the emotion it is expressing is that we throw it out of the dataset. We say it's not a good way to teach our models about emotion,” Bryant says. She thinks that maybe that’s exactly what the system should be learning. “We should be teaching it ambiguity just as much as we are teaching it about things of which we are absolutely sure.”

To that end, the team she was on explored letting people rate a series of emotions on a scale for each image, instead of labeling it with a single emotion. “Instead of throwing out the images, we can model that into a distribution that tells us: most people see this image as happy, but there is a significant amount of people who also see it as surprise.”
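The article does not describe how the team implemented this, but the underlying idea (often called soft labeling or label distribution learning) can be sketched briefly. In the hypothetical Python example below, the annotator ratings, emotion list, feature vector, and toy model are illustrative assumptions rather than AWS code: annotators' per-emotion ratings are pooled into a probability distribution, and the model is trained against that full distribution instead of a single label.

```python
# Hypothetical sketch (not the team's code): pool several annotators' per-emotion
# ratings into a soft label and train against the full distribution, rather than
# discarding faces whose expression people disagree about.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMOTIONS = ["happiness", "sadness", "surprise", "disgust", "anger"]  # assumed label set

def ratings_to_distribution(ratings):
    """Sum ratings across annotators and normalize into a probability distribution."""
    totals = torch.tensor(ratings, dtype=torch.float32).sum(dim=0)
    return totals / totals.sum()

# Example: three annotators each rate one face on a 0-5 scale per emotion.
annotator_ratings = [
    [5, 0, 3, 0, 0],  # reads the face as mostly happy, somewhat surprised
    [4, 0, 4, 0, 0],
    [5, 1, 2, 0, 0],
]
target = ratings_to_distribution(annotator_ratings)  # roughly [0.58, 0.04, 0.38, 0.0, 0.0]

# Toy classifier over precomputed image features (stand-ins for a real vision model).
model = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, len(EMOTIONS)))
features = torch.randn(1, 512)
logits = model(features)

# KL divergence between the predicted distribution and the annotators' distribution:
# the model is penalized for ignoring ambiguity, not for missing a single "true" label.
loss = F.kl_div(F.log_softmax(logits, dim=-1), target.unsqueeze(0), reduction="batchmean")
loss.backward()
```

Training against the distribution preserves the ambiguity Bryant describes: a face that most people read as happy, but that many also read as surprised, contributes a target split between those two classes rather than being thrown out of the dataset.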

Even after the end of her internship, Bryant continues to work with her team on a paper describing some of the work they did over the last two summers.

“It's been a big project, but we have enough now that we're ready to put out a paper. So, I'm excited about that.”

Bryant recently got a return offer to come back to Amazon next summer, possibly to work on a partnership between Sephus’s team and the robotics team. “I haven't done anything with robotics at Amazon yet so I would actually love to see what they're doing over there, so the offer is very appealing.”

What robots should look like

Another area of research for Bryant is understanding how people conceptualize a robot based on its perceived abilities. There is an ongoing debate in robotics circles about whether developing humanoid robots is a good thing. Among other aspects, the controversy has to do with the fact that they are expensive to build and deploy.

“A lot of people are questioning: 'Do we even really need to be designing humanoids?’,” she says.

Bryant, along with colleagues at Georgia Tech who are interested in robots that are capable of perceiving emotions, designed an experiment to investigate how people imagine a robot’s appearance based on what it can do. The study’s participants worked on an emotion annotation activity with the assistance of an expert artificial intelligence system that followed a set of rules. The participants were told that “a robot is available to assist you in completing each task using its newly developed computer vision algorithm.”

De'Aira Bryant is seen from behind, typing on an open laptop; to the right of the laptop, a humanoid robot with a display tablet on its chest looks at her
De'Aira Bryant and her colleagues at Georgia Tech designed an experiment to investigate how people imagine a robot’s appearance based on what it can do.
Courtesy of De'Aira Bryant

But the researchers did not tell them what the robot looked like. The robot’s predictions were provided via text. At the end of the study, participants were asked to describe how they envisioned it in their heads. Half of the people envisioned the robot with human-like qualities, with a head, arms, legs and the ability to walk, for example.

For that work, described in the paper “The Effect of Conceptual Embodiment on Human-Robot Trust During a Youth Emotion Classification Task,” Bryant and her colleagues won the best paper award at the IEEE International Conference on Advanced Robotics and its Social Impacts (ARSO 2021).

The goal of the research: investigate factors that influence human-robot trust when the embodiment of the robot is left for the user to conceptualize.

“In that paper, we presented the method of trying to gauge how humans expect a robot to look based on what it can do. That was one of the contributions,” says Bryant. The other contribution: demonstrate that it can be beneficial for a robot to look a certain way depending on its function. The study found that the participants who imagined the robot with human-like characteristics reported higher levels of trust than those who did not.

“For the robots that are emotionally perceptive, if we fail to meet the expectations of most people, then we could already be losing some of the effect that we intend to have,” says Bryant. “People expect that a robot that can perceive emotions will be human-like and if we don't design robots in that way, people could be less willing to depend on that robot.”

Future career plans

Bryant says that her long-term career plans are constantly changing. She was set on being a professor, but her experience at Amazon has redefined what industry research is for her. “On the last team I was on, I was actually working with a lot of professors. And I think it’s so cool to have the ability to bridge that gap.”

When she was about to start her first AWS internship, she expected she would be given a project, a few tasks, a deadline to complete them, and wouldn’t have a lot of say in that. “But when I first got there I actually did have a lot of say. They were interested in what I was doing at Georgia Tech, they wanted to know more about my research and made a strong effort to make the internship experience mine,” she says.

One of her ideas for a perfect job is being an Amazon Scholar. “I would get to work with students in a university and still work with Amazon. That is the perfect goal.”
