"This technology will be transformative in ways we can barely comprehend"

A judge and some of the finalists from the Alexa Prize Grand Challenge 3 talk about the competition, the role of COVID-19, and the future of socialbots.

Human beings are social creatures, and conversations are what connect us—they enable us to share everything from the prosaic to the profound with the people that matter to us. Living through an era marked by pandemic-induced isolation means many of those conversations have shifted online, but the connection they provide remains essential.

So what happens when you replace one of the human participants in a conversation with a socialbot? What does it mean to have an engaging conversation with an AI assistant? How can that kind of conversation prove to be valuable, and can it provide its own kind of connection?

Application period for next Alexa Prize challenge opens

The Amazon Alexa Prize team encourages all interested teams to apply for the Grand Challenge 4 by 11:59 p.m. PST on October 6, 2020.

The participants in this year’s Alexa Prize contest are driven by those questions. Amazon recently announced that a team from Emory University has won the 2020 Alexa Prize. We talked to that team, along with a judge from this year’s competition, as well as representatives from the other finalist teams at Czech Technical University, Stanford University, University of California, Davis, and University of California, Santa Cruz. We wanted to learn what drives them to participate, how COVID-19 has influenced their work, and what they see as the possibilities and challenges for socialbots moving forward.

Winners of the Alexa Prize SocialBot Grand Challenge 3 discuss their research

Q: What inspired you to participate in this year’s competition?

Sarah Fillwock, team leader, Emora, Emory University: We had a group of students who were interested in dialogue system research, some of whom had actually participated in the Alexa Prize in its previous years, and we all knew that the Alexa Prize offers a really unique opportunity for anyone interested in this type of work. It is really exciting to use the Alexa device platform to launch a socialbot, because we are able to get hundreds of conversations a day between our socialbot and human users, which really allows for quick turnaround time when assessing whether or not our hypotheses and strategies are improving the performance of our dialogue system.

Marilyn Walker, faculty advisor, Athena, University of California, Santa Cruz: In our Natural Language and Dialogue Systems lab, our main research focus is dialogue management and language generation. Conversational AI is a very challenging problem, and we felt like we could have a research impact in this area. The field has been developing extremely quickly recently, and the Alexa Prize offers an opportunity to try out cutting-edge technologies in dialogue management and language generation on a large Alexa user population.

Amazon Alexa Prize Finalists 2020
The five Alexa Prize finalist teams: Czech Technical University in Prague; Emory University; Stanford University; the University of California, Davis; and the University of California, Santa Cruz.

Vrindavan (Davan) Harrison, team leader, Athena, UCSC: As academics, our primary focus is on research. This year’s competition was more research-oriented, allowing the teams to spend more time developing new ideas.

Kai-Hui Liang, team lead, Gunrock, University of California, Davis: Our experience in last year’s competition motivated us to join again, as we realized there is still significant room for improvement. I’m especially interested in how to find topics that engage users the most, including trying different ways to elicit and reason about users’ interests. How can we retrieve content that is relevant and interesting, and make the dialog flow more naturally?

Jan Pichl, team leader, Alquist, Czech Technical University: Since the first year of the Alexa Prize competition, we have been developing Alquist to deliver a wide range of topics with a closer focus on the most popular ones. The first Alquist guided a user through the conversation quite strictly. We learned quickly that we needed to introduce more flexibility and let the user be "in charge". With that in mind, we have been pushing Alquist in that direction. Moreover, we want Alquist to manage dialogue utilizing the knowledge graph, and suggest relevant information based on the previously discussed topics and entities.

Christopher D. Manning, faculty advisor, Chirpy Cardinal, Stanford University: It was our first time doing the Alexa Prize, and the team really hadn’t done advance preparation, so it’s all been a wild ride—by which I mean a lot of work and stress for everyone on the team. But it was super exciting that we were largely able to catch up with other leading teams who have been doing the competition for several years.

Hugh Howey, judge and science fiction author: Artificial intelligence is a passionate interest of mine. As a science fiction author, I have the freedom to write about most anything, but the one topic I keep coming back to is the impact that thinking machines already have on our lives and how that impact will only expand in the future. So any chance to be involved with those doing work and research in the field is a no-brainer for me. I leapt at the chance like a Boston Dynamics dog.

Q: What excites you about the potential of socialbots?

Hugh Howey (Judge): This technology will be transformative in ways we can barely comprehend. Right now, the human/computer interface is a bottleneck. It takes a long time for us to tell our computers what we want them to do, and they'll generally only do that thing the one time and forget what it learned. In the future, more and more of the trivial will be automated. This will free up human capital to tackle larger problems. It will also bring us together by removing language barriers, by helping those with disabilities, and eventually this technology will be available to anyone who needs it.

Jinho D. Choi, faculty advisor, Emory: It has been reported that more than 44 million adults in the US have mental health issues such as anxiety or depression. We believe that developing an innovative socialbot that comforts people can really help those with mental health conditions, who are often afraid of talking to other human beings. You may wonder how artificial intelligence can convey a human emotion such as caring. However, humans have long used their own creations, such as art and music, to comfort themselves. It is our vision to advance AI, the greatest invention of humankind, to help individuals learn more about their inner selves so they can feel more positive about themselves, and have a bigger impact in the world.

Ashwin Paranjape, co-team leader, Stanford: As socialbots become more sophisticated and prevalent, increasing numbers of people are chatting with them regularly. As the name suggests, socialbots have the potential to fulfill social needs, such as chit-chatting about everyday life, or providing support to a person struggling with mental health difficulties. Furthermore, socialbots could become a primary user interface through which we engage with the world—for example, chatting about the news, or discussing a book.

Sarah Fillwock, Emory: Our experience in this competition has really solidified the idea that socialbots can be valuable to people who need support and are in troubling situations. I think that the most compelling role for socialbots in global challenges is to provide a supportive environment that allows people to express themselves and explore their feelings about whatever dramatic event is going on. This is especially important for vulnerable populations, such as those who do not have a strong social circle or have reduced social contact with others, which prevents them from feeling valued and understood.

Q: What are the main challenges to realizing that potential?

Abigail See, co-team leader, Stanford: Currently, socialbots struggle to make sense of long, involved conversations, and this limits their ability to talk about any topic in depth. To do this better, socialbots will need to understand what a particular user wants—not only in terms of discussion topics, but also what kind of conversation they want to have. Another important challenge is to allow users to take more initiative, and drive the conversation themselves. Currently, socialbots tend to take more initiative, to ensure the conversation stays within their capabilities. If we can make our socialbots more flexible, they will be much more useful and engaging to people.

Sarah Fillwock, Emory: One major challenge facing the field of dialogue system research is establishing a best practice for evaluation of the performance of dialogue approaches. There is currently a diverse set of evaluation strategies that the research community uses to determine how well their new dialogue approach performs. Another challenge is that dialogues are more than just a pattern-matching problem. Having a back-and-forth dialogue on any topic with another agent tends to involve planning towards achieving specific goals during the conversation as new information about your speaking partner is revealed. Dialogues also rely a lot on having a foundation of general world knowledge that you use to fully understand the implications of what the other person is saying.

Amazon releases Topical Chat dataset

The text-based collection of more than 235,000 utterances will help support high-quality, repeatable research in the field of dialogue systems.

Marilyn Walker, UCSC: There’s a shortage of large annotated conversational corpora for the task of open-domain conversation. For example, progress in NLU has been supported by large annotated corpora, such as the Penn Treebank; however, there are currently no such publicly available corpora for open-domain conversation. Also, a rich model of individual users would enable much more natural conversations, but privacy issues currently make it difficult to build such models.

Hugh Howey (Judge): The challenge will be for our ethics and morality to keep up with our gizmos. We will be far more powerful in the future. I only hope we'll be more responsible as well.

Q: What role has the COVID-19 pandemic played in your work?

Juraj Juraska, team member, UCSC: The most immediate effect the onset of the pandemic had on our socialbot was, of course, that it could not simply ignore this new, dynamic situation. Our socialbot had to acknowledge this new development, as that was what most people were talking about at that point. We would thus have Athena bring up the topic at the beginning of the conversation, sympathizing with the users' current situation, but avoiding wallowing in its negative aspects. In the feedback that some users left, there were a number of expressions of gratitude for the ability to have a fun interaction with a socialbot at a time when direct social interaction with friends and family was greatly restricted.

Kai-Hui Liang, UC Davis: We noticed an evident difference in the way Alexa users reacted to popular topics. For example, before COVID-19, many users gave engaged responses when discussing their favorite sports to watch, their travel experiences, or events they planned for the weekend. After the outbreak of COVID-19, more users replied saying there were no sports games to watch or that they were not able to travel. Therefore, we adapted our topics to better fit the situation. We added discussion of their life experiences during the quarantine (e.g., how their diet had changed, or whether they walked outside daily to stay healthy). We also observed more users expressing negative feelings, potentially due to the quarantine. For instance, some users said they felt lonely and missed their friends or family. In response, we enhanced our comforting module, which expresses empathy through active listening.

Abigail See, Stanford: As the pandemic unfolded, we saw in real time how users changed their expectations of our socialbot. Not only did they want our bot to deliver up-to-date information, they also wanted it to show emotional understanding for the situation they were in.

Sarah Fillwock, Emory: When COVID became a significant societal issue, we tried two things: an experience-oriented COVID topic, where our bot discussed with people how they felt about COVID in a sympathetic and reassuring atmosphere, and a fact-oriented COVID topic that gave objective information. What we observed was that people had a much stronger positive reaction to the experience-oriented approach than to the fact-oriented one, and seemed to prefer it in conversation. This gave us empirical evidence that social agents have strong potential to be helpful in times of turmoil by giving people a safe and caring space to talk about major events in their lives.

Q: Lastly, are there any particular advancements in the fields of NLU, dialogue management, conversational AI, etc., that you find promising?

Jan Pichl, Czech Technical University: It is exciting to see the capabilities of the Transformer-based models these days. They are able to generate large articles or even whole stories that are coherent. However, they demand a lot of computation power during the training phase and even during the runtime. Additionally, it is still challenging to use them in a socialbot when you need to work with constantly changing information about the world.

Abigail See, Stanford: As NLP researchers, we are amazed by the incredible pace of progress in the field. Since the last Alexa Prize in 2018, there have been game-changing advancements, particularly in the use of large pretrained language models to understand and generate language. The Alexa Prize offers a unique opportunity for us to apply these techniques, which so far have mostly been tested only on neat, well-defined tasks, and put them in front of real people, with all the messiness that entails! In particular, we were excited to explore the possibility of using neural generative models to chat with people. As recently as the 2018 Alexa Prize, these models generally performed poorly, and so were not used by any of the finalist teams. However, this year, these systems became an important backbone of our system.

Sarah Fillwock, Emory: The work people have been putting into incorporating common sense knowledge and common sense reasoning into dialogue systems is one of the most interesting directions in the current conversational AI field. A lot of the common sense knowledge we use is not explicitly detailed in any data set, as people have learned it through physical experience or inference over time, so there isn’t currently any convenient way to accomplish this goal. There have been many attempts to see how far a language modeling approach to dialogue agents can go, but even huge dialogue data sets and highly complex models still yield hit-and-miss success with common sense information. I am really looking forward to dialogue approaches and resources that more explicitly try to model this type of common sense knowledge.

Research areas

Latest news

The latest updates, stories, and more about Alexa Prize.
US, WA, Seattle
Amazon is looking for a passionate, talented, and inventive Senior Applied Scientist with a strong machine learning background to help build industry-leading language technology. Our mission is to provide a delightful experience to Amazon’s customers by pushing the envelope in Natural Language Processing (NLP), Generative AI, Large Language Model (LLM), Natural Language Understanding (NLU), Machine Learning (ML), Retrieval-Augmented Generation, Responsible AI, Agent, Evaluation, and Model Adaptation. As part of our AI team in Amazon AWS, you will work alongside internationally recognized experts to develop novel algorithms and modeling techniques to advance the state-of-the-art in human language technology. Your work will directly impact millions of our customers in the form of products and services, as well as contributing to the wider research community. You will gain hands on experience with Amazon’s heterogeneous text and structured data sources, and large-scale computing resources to accelerate advances in language understanding. The Science team at AWS Bedrock builds science foundations of Bedrock, which is a fully managed service that makes high-performing foundation models available for use through a unified API. We are adamant about continuously learning state-of-the-art NLP/ML/LLM technology and exploring creative ways to delight our customers. In our daily job we are exposed to large scale NLP needs and we apply rigorous research methods to respond to them with efficient and scalable innovative solutions. At AWS Bedrock, you’ll experience the benefits of working in a dynamic, entrepreneurial environment, while leveraging AWS resources, one of the world’s leading cloud companies and you’ll be able to publish your work in top tier conferences and journals. We are building a brand new team to help develop a new NLP service for AWS. You will have the opportunity to conduct novel research and influence the science roadmap and direction of the team. 
Come join this greenfield opportunity! Amazon Bedrock team is part of Utility Computing (UC) About the team AWS Utility Computing (UC) provides product innovations — from foundational services such as Amazon’s Simple Storage Service (S3) and Amazon Elastic Compute Cloud (EC2), to consistently released new product innovations that continue to set AWS’s services and features apart in the industry. As a member of the UC organization, you’ll support the development and management of Compute, Database, Storage, Internet of Things (Iot), Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services. Diverse Experiences AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying. Why AWS? Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses. Inclusive Team Culture Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness. Mentorship & Career Growth We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional. 
Work/Life Balance We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud. Hybrid Work We value innovation and recognize this sometimes requires uninterrupted time to focus on a build. We also value in-person collaboration and time spent face-to-face. Our team affords employees options to work in the office every day or in a flexible, hybrid work model near one of our U.S. Amazon offices. We are open to hiring candidates to work out of one of the following locations: Seattle, WA, USA
US, WA, Seattle
Alexa Personality Fundamentals is chartered with infusing Alexa's trustworthy, reliable, considerate, smart, and playful personality. Come join us in creating the future of personality forward AI here at Alexa. Key job responsibilities As a Data Scientist with Alexa Personality, your work will involve machine learning, Large Language Model (LLM) and other generative technologies. You will partner with engineers, applied scientists, voice designers, and quality assurance to ensure that Alexa can sing, joke, and delight our customers in every interaction. You will take a central role in defining our experimental roadmap, sourcing training data, authoring annotation criteria and building automated benchmarks to track the improvement of our Alexa's personality. We are open to hiring candidates to work out of one of the following locations: Bellevue, WA, USA | Seattle, WA, USA
US, CA, Palo Alto
The Amazon Search Mission Understanding (SMU) team is at the forefront of revolutionizing the online shopping experience through the Amazon search page. Our ambition extends beyond facilitating a seamless shopping journey; we are committed to creating the next generation of intelligent shopping assistants. Leveraging cutting-edge Large Language Models (LLMs), we aim to redefine navigation and decision-making in e-commerce by deeply understanding our users' shopping missions, preferences, and goals. By developing responsive and scalable solutions, we not only accomplish the shopping mission but also foster unparalleled trust among our customers. Through our advanced technology, we generate valuable insights, providing a guided navigation system into various search missions, ensuring a comprehensive and holistic shopping experience. Our dedication to continuous improvement through constant measurement and enhancement of the shopper experience is crucial, as we strategically navigate the balance between immediate results and long-term business growth. We are seeking an Applied Scientist who is not just adept in the theoretical aspects of Machine Learning (ML), Artificial Intelligence (AI), and Large Language Models (LLMs) but also possesses a pragmatic, hands-on approach to navigating the complexities of innovation. The ideal candidate will have a profound expertise in developing, deploying, and contributing to the next-generation shopping search engine, including but not limited to Retrieval-Augmented Generation (RAG) models, specifically tailored towards enhancing the Rufus application—an integral part of our mission to revolutionize shopping assistance. You will take the lead in conceptualizing, building, and launching groundbreaking models that significantly improve our understanding of and capabilities in enhancing the search experience. 
A successful applicant will display a comprehensive skill set across machine learning model development, implementation, and optimization. This includes a strong foundation in data management, software engineering best practices, and a keen awareness of the latest developments in distributed systems technology. We are looking for individuals who are determined, analytically rigorous, passionate about applied sciences, creative, and possess strong logical reasoning abilities. Join the Search Mission Understanding team, a group of pioneering ML scientists and engineers dedicated to building core ML models and developing the infrastructure for model innovation. As part of Amazon Search, you will experience the dynamic, innovative culture of a startup, backed by the extensive resources of Amazon.com (AMZN), a global leader in internet services. Our collaborative, customer-centric work environment spans across our offices in Palo Alto, CA, and Seattle, WA, offering a unique blend of opportunities for professional growth and innovation. Key job responsibilities Collaborate with cross-functional teams to identify requirements for ML model development, focusing on enhancing mission understanding through innovative AI techniques, including retrieval-Augmented Generation or LLM in general. Design and implement scalable ML models capable of processing and analyzing large datasets to improve search and shopping experiences. Must have a strong background in machine learning, AI, or computational sciences. Lead the management and experiments of ML models at scale, applying advanced ML techniques to optimize science solution. Serve as a technical lead and liaison for ML projects, facilitating collaboration across teams and addressing technical challenges. Requires strong leadership and communication skills, with a PhD in Computer Science, Machine Learning, or a related field. 
We are open to hiring candidates to work out of one of the following locations: Palo Alto, CA, USA | Seattle, WA, USA
US, WA, Bellevue
The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Science Manager with a strong deep learning background, to lead the development of industry-leading technology with multimodal systems. Key job responsibilities As an Applied Science Manager with the AGI team, you will lead the development of novel algorithms and modeling techniques to advance the state of the art with multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (GenAI) in Computer Vision. About the team The AGI team has a mission to push the envelope with multimodal LLMs and GenAI in Computer Vision, in order to provide the best-possible experience for our customers. We are open to hiring candidates to work out of one of the following locations: Bellevue, WA, USA | Seattle, WA, USA | Sunnyvale, CA, USA
US, MA, Boston
The Artificial General Intelligence (AGI) - Automations team is developing AI technologies to automate workflows, processes for browser automation, developers and ops teams. As part of this, we are developing services and inference engine for these automation agents, and techniques for reasoning, planning, and modeling workflows. If you are interested in a startup mode team in Amazon to build the next level of agents then come join us. Scientists in AGI - Automations will develop cutting edge multimodal LLMs to observe, model and derive insights from manual workflows to automate them. You will get to work in a joint scrum with engineers for rapid invention, develop cutting edge automation agent systems, and take them to launch for millions of customers. Key job responsibilities - Build automation agents by developing novel multimodal LLMs. A day in the life An Applied Scientist with the AGI team will support the science solution design, run experiments, research new algorithms, and find new ways of optimizing the customer experience.; while setting examples for the team on good science practice and standards. Besides theoretical analysis and innovation, an Applied Scientist will also work closely with talented engineers and scientists to put algorithms and models into practice. We are open to hiring candidates to work out of one of the following locations: Boston, MA, USA
US, MA, Boston
The Artificial General Intelligence (AGI) - Automations team is developing AI technologies to automate workflows, processes for browser automation, developers and ops teams. As part of this, we are developing services and inference engine for these automation agents, and techniques for reasoning, planning, and modeling workflows. If you are interested in a startup mode team in Amazon to build the next level of agents then come join us. Scientists in AGI - Automations will develop cutting edge multimodal LLMs to observe, model and derive insights from manual workflows to automate them. You will get to work in a joint scrum with engineers for rapid invention, develop cutting edge automation agent systems, and take them to launch for millions of customers. Key job responsibilities - Build automation agents by developing novel multimodal LLMs. A day in the life An Applied Scientist with the AGI team will support the science solution design, run experiments, research new algorithms, and find new ways of optimizing the customer experience.; while setting examples for the team on good science practice and standards. Besides theoretical analysis and innovation, an Applied Scientist will also work closely with talented engineers and scientists to put algorithms and models into practice. We are open to hiring candidates to work out of one of the following locations: Boston, MA, USA
US, CA, Palo Alto
The Amazon Search Mission Understanding (SMU) team is at the forefront of revolutionizing the online shopping experience through the Amazon search page. Our ambition extends beyond facilitating a seamless shopping journey; we are committed to creating the next generation of intelligent shopping assistants. Leveraging cutting-edge Large Language Models (LLMs), we aim to redefine navigation and decision-making in e-commerce by deeply understanding our users' shopping missions, preferences, and goals. By developing responsive and scalable solutions, we not only accomplish the shopping mission but also foster unparalleled trust among our customers. Through our advanced technology, we generate valuable insights, providing a guided navigation system into various search missions, ensuring a comprehensive and holistic shopping experience. Our dedication to continuous improvement through constant measurement and enhancement of the shopper experience is crucial, as we strategically navigate the balance between immediate results and long-term business growth. We are seeking an Applied Scientist who is not just adept in the theoretical aspects of Machine Learning (ML), Artificial Intelligence (AI), and Large Language Models (LLMs) but also possesses a pragmatic, hands-on approach to navigating the complexities of innovation. The ideal candidate will have a profound expertise in developing, deploying, and contributing to the next-generation shopping search engine, including but not limited to Retrieval-Augmented Generation (RAG) models, specifically tailored towards enhancing the Rufus application—an integral part of our mission to revolutionize shopping assistance. You will take the lead in conceptualizing, building, and launching groundbreaking models that significantly improve our understanding of and capabilities in enhancing the search experience. 
A successful applicant will display a comprehensive skill set across machine learning model development, implementation, and optimization. This includes a strong foundation in data management, software engineering best practices, and a keen awareness of the latest developments in distributed systems technology. We are looking for individuals who are determined, analytically rigorous, passionate about applied sciences, creative, and possess strong logical reasoning abilities. Join the Search Mission Understanding team, a group of pioneering ML scientists and engineers dedicated to building core ML models and developing the infrastructure for model innovation. As part of Amazon Search, you will experience the dynamic, innovative culture of a startup, backed by the extensive resources of Amazon.com (AMZN), a global leader in internet services. Our collaborative, customer-centric work environment spans across our offices in Palo Alto, CA, and Seattle, WA, offering a unique blend of opportunities for professional growth and innovation. Key job responsibilities Collaborate with cross-functional teams to identify requirements for ML model development, focusing on enhancing mission understanding through innovative AI techniques, including retrieval-Augmented Generation or LLM in general. Design and implement scalable ML models capable of processing and analyzing large datasets to improve search and shopping experiences. Must have a strong background in machine learning, AI, or computational sciences. Lead the management and experiments of ML models at scale, applying advanced ML techniques to optimize science solution. Serve as a technical lead and liaison for ML projects, facilitating collaboration across teams and addressing technical challenges. Requires strong leadership and communication skills, with a PhD in Computer Science, Machine Learning, or a related field. 
We are open to hiring candidates to work out of one of the following locations: Palo Alto, CA, USA | Seattle, WA, USA
US, WA, Seattle
The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Senior Applied Scientist with a strong deep learning background to lead the development of industry-leading technology with multimodal systems.

Key job responsibilities
As a Senior Applied Scientist on the AGI team, you will lead the development of novel algorithms and modeling techniques to advance the state of the art in multimodal systems. Your work will directly impact our customers in the form of products and services that make use of vision and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multimodal Large Language Models (LLMs) and Generative Artificial Intelligence (GenAI) in Computer Vision.

About the team
The AGI team’s mission is to push the envelope with multimodal LLMs and GenAI in Computer Vision in order to provide the best possible experience for our customers.

We are open to hiring candidates to work out of one of the following locations: Cambridge, MA, USA | New York, NY, USA | Seattle, WA, USA | Sunnyvale, CA, USA
US, WA, Bellevue
The Artificial General Intelligence (AGI) team seeks an Applied Scientist with a strong background in machine learning and production-level software engineering to spearhead the advancement and deployment of cutting-edge ML systems. As part of this team, you will collaborate with talented peers to create scalable solutions for an innovative conversational assistant, aiming to revolutionize user experiences for millions of Alexa customers. The ideal candidate possesses a solid understanding of machine learning fundamentals and has experience writing high-quality software in a production setting. The candidate is self-motivated, thrives in ambiguous and fast-paced environments, has the drive to tackle complex challenges, and excels at swiftly delivering impactful solutions while iterating based on user feedback. Join us in our mission to redefine industry standards and provide unparalleled experiences for our customers.

Key job responsibilities
You will be expected to:
· Analyze, understand, and model customer behavior and the customer experience based on large-scale data
· Build and measure novel online and offline metrics for personal digital assistants and customer scenarios, on diverse devices and endpoints
· Create, innovate, and deliver deep learning, policy-based learning, and/or machine learning algorithms that deliver customer-impacting results
· Build and deploy automated model training and evaluation pipelines
· Perform model/data analysis and monitor metrics through online A/B testing
· Research and implement novel machine learning and deep learning algorithms and models

We are open to hiring candidates to work out of one of the following locations: Bellevue, WA, USA | Boston, MA, USA
ZA, Cape Town
We are a new team in AWS' Kumo organisation - a combination of software engineers and AI/ML experts. Kumo is the software engineering organization that scales AWS’ support capabilities. Amazon’s mission is to be earth’s most customer-centric company, and this also applies when it comes to helping our own Amazon employees with their everyday IT support needs. Our team innovates for the Amazonian, making the interaction with IT support as smooth as possible. We achieve this through multiple mechanisms that eliminate root causes altogether, automate issue resolution, or point customers toward the optimal troubleshooting steps for their situation. We deliver the support solutions along with end-user content and instructions that help customers self-serve. We employ machine learning solutions at multiple points to understand our customers' behavior, predict customer intent, deliver personalized content, and automate issue resolution through chatbots.

As an applied scientist on our team, you will help build the next generation of case routing, using artificial intelligence to ensure that the right case gets worked by the right agent within the right time limit while meeting the target business success metric. You will develop machine learning models and pipelines, harness and explain rich data at Amazon scale, and provide automated insights that improve case routing for millions of customers every day. You will be a pragmatic technical leader, comfortable with ambiguity and capable of summarizing complex data and models through clear visual and written explanations.

About AWS
Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.

Why AWS?
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture
Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.

Mentorship & Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of your life at home, which is why flexible work hours and arrangements are part of our culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.

Sales, Marketing and Global Services (SMGS)
AWS Sales, Marketing, and Global Services (SMGS) is responsible for driving revenue, adoption, and growth from the largest and fastest-growing small- and mid-market accounts to enterprise-level customers, including the public sector.

Amazon knows that a diverse, inclusive culture empowers us all to deliver the best results for our customers. We celebrate diversity in our workforce and in the ways we work. As part of our inclusive culture, we offer accommodations during the interview and onboarding process. If you’d like to discuss your accommodation options, please contact your recruiter, who will connect you with the Applicant-Candidate Accommodation Team (ACAT).
You may also contact ACAT directly by emailing acat-africa@amazon.com. We want all Amazonians to have the best possible Day 1 experience. If you’ve already completed the interview process, you can contact ACAT for accommodation support before you start, to ensure all your needs are met on Day 1.

Key job responsibilities
· Deliver real-world production systems at AWS scale
· Work closely with the business to understand the problem space, identify opportunities, and formulate the problems
· Use machine learning, data mining, statistical techniques, Generative AI, and other methods to create actionable, meaningful, and scalable solutions to business problems
· Establish scalable, efficient, automated processes for large-scale data analysis, model development, model validation, and model implementation
· Analyze complex support-case datasets and metrics to drive insight
· Design, build, and deploy effective and innovative ML solutions to optimize case routing
· Evaluate proposed solutions via offline benchmark tests as well as online A/B tests in production
· Drive collaborative research and creative problem solving across the science and software engineering teams
· Propose and validate hypotheses to deliver on and direct our product roadmap
· Work with engineers to deliver low-latency model predictions in production

We are open to hiring candidates to work out of one of the following locations: Cape Town, ZAF