Science innovations power Alexa Conversations dialogue management

A dialogue simulator and a conversations-first modeling architecture enable customers to interact with Alexa in a natural and conversational manner.

Today we announced the public beta launch of Alexa Conversations dialogue management. Alexa developers can now leverage a state-of-the-art dialogue manager powered by deep learning to create complex, nonlinear experiences — conversations that go well beyond today's typical one-shot interactions, such as "Alexa, what's the weather forecast for today?" or "Alexa, set a ten-minute pasta timer".

Alexa’s natural-language-understanding models classify requests according to domain, or the particular service that should handle the intent that the customer wants executed. The models also identify the slot types of the entities named in the requests, or the roles those entities play in fulfilling the request. In the request “Play ‘Rise Up’ by Andra Day”, the domain is Music, the intent is PlayMusic, and the names “Rise Up” and “Andra Day” fill the slots SongName and ArtistName.
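
For concreteness, one way to picture that structured interpretation is as a small data object like the sketch below; the class and field names are illustrative only and are not Alexa's actual internal schema.

```python
from dataclasses import dataclass, field

# Hypothetical container for a single NLU interpretation (illustrative names only).
@dataclass
class NluResult:
    domain: str
    intent: str
    slots: dict = field(default_factory=dict)

# The parse of "Play 'Rise Up' by Andra Day" described above.
parse = NluResult(
    domain="Music",
    intent="PlayMusic",
    slots={"SongName": "Rise Up", "ArtistName": "Andra Day"},
)
print(parse)
```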

Natural conversations, however, don’t follow predetermined dialogue paths and often include anaphoric references (such as referring to a previously mentioned song by saying “play it”), contextual carryover of entities, customer revisions of earlier requests, and many other types of interactions.

Alexa Conversations enables customers to interact with Alexa in a natural and conversational manner. At the same time, it relieves developers of the effort they would typically need to expend in authoring complex dialogue management rules, which are hard to maintain and often result in brittle customer experiences. Our dialogue augmentation algorithms and deep-learning models address the challenge of designing flexible and robust conversational experiences.

Dialogue management for Alexa Conversations is powered by two major science innovations: a dialogue simulator for data augmentation that generalizes a small number of sample dialogues provided by a developer into tens of thousands of annotated dialogues, and a conversations-first modeling architecture that leverages the generated dialogues to train deep-learning-based models to support dialogues beyond just the happy paths provided by the sample dialogues.

The Alexa Conversations dialogue simulator

Building high-performing deep-learning models requires large and diverse data sets, which are costly to acquire. With Alexa Conversations, the dialogue simulator automatically generates diversity from a few developer-provided sample dialogues that cover skill functionality, and it also generates difficult or uncommon exchanges that could occur.

The inputs to the dialogue simulator include developer-provided application programming interfaces (APIs), slots and their associated catalogues of slot values (e.g., city, state), and response templates (Alexa’s responses in different situations, such as requesting a slot value from the customer). Together with their input arguments and output values, these inputs define the skill-specific schema of actions and slots that the dialogue manager will predict.
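
As a rough sketch, these build-time assets might be organized along the following lines; the structure and names below are hypothetical and do not reflect the actual Alexa Conversations authoring format.

```python
# Hypothetical sketch of developer-provided build-time assets (illustrative only).
skill_schema = {
    "slots": {
        "city": {"catalogue": ["Seattle", "Boston", "San Diego"]},
        "date": {"catalogue": ["today", "tomorrow", "next Friday"]},
    },
    "apis": {
        # Each API is declared with its input arguments and output type.
        "SearchFlight": {"args": ["city", "date"], "returns": "FlightList"},
        "BookFlight": {"args": ["flightId"], "returns": "Confirmation"},
    },
    "response_templates": {
        # Alexa responses used, for example, to request a missing slot value.
        "RequestCity": "Which city are you flying to?",
        "OfferFlights": "I found {count} flights to {city} on {date}.",
    },
}
```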

The Alexa Conversations dialogue simulator generates tens of thousands of annotated dialogue examples that are used to train conversational models.

The dialogue simulator uses these inputs to generate additional sample dialogues in two steps.

In the first step, the simulator generates dialogue variations that represent different paths a conversation can take, such as different sequences of slot values and divergent paths that arise when a customer changes her mind.

More specifically, we conceive a conversation as a collaborative, goal-oriented interaction between two agents, a customer and Alexa. In this setting, the customer has a goal she wants to achieve, such as booking an airplane flight, and Alexa has access to resources, such as APIs for searching flight information or booking flights, that can help the customer reach her goal.

The simulated dialogues are generated through the interaction of two agent simulators, one for the customer, the other for Alexa. From the sample dialogues provided by the developer, the simulator first samples several plausible goals that customers interacting with the skill may want to achieve.

Conditioned on a sample goal, we generate synthetic interactions between the two simulator agents. The customer agent progressively reveals its goal to the Alexa agent, while the Alexa agent gathers the customer agent’s information, confirms information, and asks follow-up questions about missing information, guiding the interaction toward goal completion.
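
The toy sketch below illustrates the shape of this goal-conditioned, two-agent interaction loop. All functions, goals, and action names here are simplified stand-ins; the real simulator is considerably more sophisticated.

```python
import random

# Toy sketch of the two-agent simulation loop (illustrative stand-ins only).

def sample_goal():
    """Sample a plausible customer goal: target values for the skill's slots."""
    return {"city": random.choice(["Seattle", "Boston"]),
            "date": random.choice(["today", "tomorrow"])}

def customer_turn(goal, revealed):
    """Customer agent progressively reveals one not-yet-mentioned part of its goal."""
    slot = [s for s in goal if s not in revealed][0]
    return slot, goal[slot]

def alexa_turn(goal, revealed):
    """Alexa agent asks for missing information or moves to complete the goal."""
    missing = [s for s in goal if s not in revealed]
    return f"Request({missing[0]})" if missing else "CallAPI(SearchFlight)"

def simulate_dialogue():
    goal, revealed, turns = sample_goal(), {}, []
    # The customer opens the conversation by revealing part of the goal.
    slot, value = customer_turn(goal, revealed)
    revealed[slot] = value
    turns.append(("Customer", f"{slot}={value}"))
    while True:
        action = alexa_turn(goal, revealed)
        turns.append(("Alexa", action))
        if action.startswith("CallAPI"):
            break  # goal reached; stop this simulated dialogue
        slot, value = customer_turn(goal, revealed)
        revealed[slot] = value
        turns.append(("Customer", f"{slot}={value}"))
    return turns

print(simulate_dialogue())
```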

In the second step, the simulator injects language variations into the dialogue paths. The variations include alternate expressions of the same customer intention, such as “recommend me a movie” versus “I want to watch a movie”. Some of these alternatives are provided by the sample conversations and Alexa response templates, while others are generated through paraphrasing.

The variations also include alternate slot values (such as “Andra Day” or “Alicia Keys” for the slot ArtistName), which are sampled from slot catalogues provided by the developer. Through these two steps, the simulator generates tens of thousands of annotated dialogue examples that are used for training the conversational models.
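
A minimal sketch of this second step, assuming hand-listed paraphrases and a small slot catalogue, might look like the following; in the real system, paraphrases can also be generated automatically.

```python
import random

# Minimal sketch of the language-variation step (hypothetical data, illustrative only).
paraphrases = {
    "PlayMusic": ["play {ArtistName}",
                  "put on some {ArtistName}",
                  "I want to hear {ArtistName}"],
}
catalogues = {"ArtistName": ["Andra Day", "Alicia Keys"]}

def inject_variation(intent):
    """Sample a surface form and a slot value for one simulated customer turn."""
    template = random.choice(paraphrases[intent])
    value = random.choice(catalogues["ArtistName"])
    # The sampled slot value is kept as an annotation for model training.
    return template.format(ArtistName=value), {"ArtistName": value}

print(inject_variation("PlayMusic"))
```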

The Alexa Conversations modeling architecture

A natural conversational experience could follow any one of a wide range of nonlinear dialogue patterns. Our conversations-first modeling architecture leverages dialogue-simulator and conversational-modeling components to support dialogue patterns that include carryover of entities, anaphora, confirmation of slots and APIs, and proactively offering related functionality, as well as robust support for a customer changing her mind midway through a conversation.

We follow an end-to-end dialogue-modeling approach, where the models take into account the current customer utterance and context from the entire conversation history to predict the optimal next actions for Alexa. Those actions might include calling a developer-provided API to retrieve information and relaying that information to the customer; asking for more information from the customer; or any number of other possibilities.

The modeling architecture is built using state-of-the-art deep-learning technology and consists of three models: a named-entity-recognition (NER) model, an action prediction (AP) model, and an argument-filling (AF) model. The models are built by combining supervised training techniques on the annotated synthetic dialogues generated by the dialogue simulator and unsupervised pretraining of large Transformer-based components on text corpora.

The Alexa Conversations modeling architecture consists of three models: a named-entity-recognition model, an action prediction model, and an argument-filling model.

First, the NER model identifies slots in each customer utterance, selecting from the slot types the developer defined as part of the build-time assets (date, city, etc.). For example, for the request “search for flights to Seattle tomorrow”, the NER model will identify “Seattle” as a city slot and “tomorrow” as a date slot.

The NER model is a sequence-tagging model built using a bidirectional LSTM layer on top of a Transformer-based pretrained sentence encoder. In addition to the current sentence, NER also takes dialogue context as input, which is encoded through a hierarchical LSTM architecture that captures the conversational history, including past slots and Alexa actions.
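
A highly simplified PyTorch sketch of such a tagger appears below. The dimensions and tag inventory are illustrative, a stand-in embedding layer replaces the pretrained Transformer encoder, and the hierarchical dialogue-context encoder is reduced to a single precomputed context vector.

```python
import torch
import torch.nn as nn

# Simplified NER sketch: BiLSTM tagger over token embeddings plus dialogue context.
class NerTagger(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=128, hidden_dim=64,
                 context_dim=32, num_tags=7):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)      # stand-in for BERT
        self.bilstm = nn.LSTM(embed_dim + context_dim, hidden_dim,
                              batch_first=True, bidirectional=True)
        self.tag_head = nn.Linear(2 * hidden_dim, num_tags)   # per-token slot tags

    def forward(self, token_ids, context_vec):
        # token_ids: (batch, seq_len); context_vec: (batch, context_dim)
        tokens = self.embed(token_ids)
        # Broadcast the dialogue-context encoding to every token position.
        ctx = context_vec.unsqueeze(1).expand(-1, tokens.size(1), -1)
        hidden, _ = self.bilstm(torch.cat([tokens, ctx], dim=-1))
        return self.tag_head(hidden)                           # tag logits per token

model = NerTagger()
logits = model(torch.randint(0, 1000, (2, 10)), torch.randn(2, 32))
print(logits.shape)  # torch.Size([2, 10, 7])
```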

Next, the AP model predicts the optimal next action for Alexa to take, such as calling an API or responding to the customer to either elicit more information or complete a request. The action space is defined by the APIs and Alexa response templates that the developer provides during the skill-authoring process.

The AP model is a classification model that, like the NER model, uses a hierarchical LSTM architecture to encode the current utterance and past dialogue context, which ultimately passes to a feed-forward network to generate the action prediction.
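
A simplified sketch of this classifier, assuming precomputed per-turn encodings, might look like the following; the dimensions and action inventory are illustrative only.

```python
import torch
import torch.nn as nn

# Simplified AP sketch: turn-level LSTM over dialogue history plus a feed-forward head.
class ActionPredictor(nn.Module):
    def __init__(self, turn_dim=128, hidden_dim=64, num_actions=5):
        super().__init__()
        # Turn-level LSTM: the upper level of the hierarchical dialogue encoder.
        self.dialogue_lstm = nn.LSTM(turn_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_actions),
        )

    def forward(self, turn_encodings):
        # turn_encodings: (batch, num_turns, turn_dim), one vector per past turn.
        _, (state, _) = self.dialogue_lstm(turn_encodings)
        return self.classifier(state[-1])   # logits over APIs and response templates

model = ActionPredictor()
print(model(torch.randn(2, 6, 128)).shape)  # torch.Size([2, 5])
```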

Finally, the AF model fills in the argument values for the API and response templates by looking at the entire dialogue for context. Using an attention-based pointing mechanism over the dialogue context, the AF model selects compatible slots from all slot values that the NER model recognized earlier.

For example, suppose slot values “Seattle” and “tomorrow” exist in the dialogue context for city and date slots respectively, and the AP model predicted the SearchFlight API as the optimal next action. The AF model will fill in the API arguments with the appropriate values, generating a complete API call: SearchFlight(city="Seattle", date="tomorrow").
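
The pointing mechanism can be sketched as a simple attention over encoded candidate slot mentions, as below; this is an illustrative simplification with made-up dimensions, not the production model.

```python
import torch
import torch.nn as nn

# Simplified AF sketch: point to the candidate slot mention most compatible
# with the API argument being filled.
class ArgumentFiller(nn.Module):
    def __init__(self, arg_dim=64, slot_dim=64):
        super().__init__()
        self.project = nn.Linear(arg_dim, slot_dim)  # map argument query into slot space

    def forward(self, arg_query, slot_encodings):
        # arg_query: (batch, arg_dim); slot_encodings: (batch, num_candidates, slot_dim)
        query = self.project(arg_query).unsqueeze(-1)           # (batch, slot_dim, 1)
        scores = torch.bmm(slot_encodings, query).squeeze(-1)   # (batch, num_candidates)
        return torch.softmax(scores, dim=-1)  # pointer distribution over candidates

filler = ArgumentFiller()
probs = filler(torch.randn(2, 64), torch.randn(2, 3, 64))
print(probs.shape)  # torch.Size([2, 3]); each row sums to 1
```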

The AP and AF models may also predict and generate more than one action after a customer utterance. For example, they may decide to first call an API to retrieve flight information and then call an Alexa response template to communicate this information to the customer. Therefore, the AP and AF models can make sequential predictions of actions, including the decision to stop predicting more actions and wait for the next customer request.
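
The loop over actions can be sketched as follows. The two helper functions are trivial stand-ins for the AP and AF models, included purely to make the control flow concrete.

```python
# Sketch of sequential action prediction with an explicit STOP action (illustrative only).

def predict_next_action(dialogue_state):
    # Stand-in AP model: call the API once, announce results, then stop.
    taken = [a for a, _ in dialogue_state if a != "CustomerRequest"]
    plan = ["SearchFlight", "OfferFlights", "STOP"]
    return plan[len(taken)]

def fill_arguments(action, dialogue_state):
    # Stand-in AF model: reuse every slot value mentioned so far in the conversation.
    return {k: v for _, slots in dialogue_state for k, v in slots.items()}

def predict_turn_actions(dialogue_state, max_actions=4):
    """Predict a sequence of Alexa actions, stopping at the STOP action."""
    actions = []
    for _ in range(max_actions):
        action = predict_next_action(dialogue_state)
        if action == "STOP":
            break  # wait for the next customer request
        args = fill_arguments(action, dialogue_state)
        actions.append((action, args))
        dialogue_state = dialogue_state + [(action, args)]  # actions feed back into context
    return actions

state = [("CustomerRequest", {"city": "Seattle", "date": "tomorrow"})]
print(predict_turn_actions(state))
```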

The finer points

Consistency-check logic ensures that the resulting predictions are all valid actions, consistent with developer-provided information about their APIs. For example, the system will not generate an API call with an empty input argument if the developer has marked that argument as required.
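
A minimal version of such a check, assuming a hypothetical API specification format, might look like this; the production logic covers many more validity constraints.

```python
# Hypothetical API specification used only for this sketch.
api_specs = {"SearchFlight": {"required": ["city", "date"], "optional": []}}

def is_valid_call(api_name, args):
    """Reject predicted API calls whose required input arguments are missing or empty."""
    spec = api_specs.get(api_name)
    if spec is None:
        return False  # unknown action: not part of the developer-defined schema
    return all(args.get(arg) not in (None, "") for arg in spec["required"])

print(is_valid_call("SearchFlight", {"city": "Seattle", "date": "tomorrow"}))  # True
print(is_valid_call("SearchFlight", {"city": "Seattle"}))                      # False
```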

Because the inputs include the entire dialogue history as well as the latest customer request, the resulting model predictions are contextual, relevant, and not repetitive. For example, if a customer has already provided the date of a trip while searching for a flight, Alexa will not ask for the date again when booking the flight. Instead, the date provided earlier will contextually carry over and pass to the appropriate API.

We leveraged large pretrained Transformer components (BERT) to encode the current and past requests in the conversation. To keep model build times and runtime latency low, we performed inference-architecture optimizations such as accelerating embedding computation on GPUs, implementing efficient caching, and leveraging both data- and model-level parallelism.
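
As a toy illustration of the caching idea only, the sketch below memoizes encoder outputs for repeated utterances; encode_utterance is a hypothetical stand-in for the BERT-based encoder, and the production optimizations are considerably more involved.

```python
from functools import lru_cache

@lru_cache(maxsize=10_000)
def encode_utterance(utterance: str) -> tuple:
    # Stand-in "encoding"; in practice this would be a GPU forward pass through BERT.
    return tuple(float(ord(c)) for c in utterance[:8])

encode_utterance("book a flight to Seattle")  # computed once
encode_utterance("book a flight to Seattle")  # served from the cache
print(encode_utterance.cache_info())
```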

We are excited about the advances that enable Alexa developers to build flexible and robust conversational experiences that allow customers to have natural interactions with their devices. Developers interested in learning more about the "how" of building these conversational experiences should read our accompanying developer blog.

For more information about the technical advances behind Alexa Conversations, see our related publications on dialogue systems, dialogue state tracking, and data augmentation.

Acknowledgments: We thank the entire Alexa Conversations team for making the innovations highlighted here possible.
