How Voice and Graphics Working Together Enhance the Alexa Experience

Last week, Amazon announced the release of both a redesigned Echo Show with a bigger screen and the Alexa Presentation Language, which enables third-party developers to build “multimodal” skills that coordinate Alexa’s natural-language-understanding systems with on-screen graphics.

Echo in use

One way that multimodal interaction can improve Alexa customers’ experiences is by helping resolve ambiguous requests. If a customer says, “Alexa, play Harry Potter,” the Echo Show screen could display separate graphics representing a Harry Potter audiobook, a movie, and a soundtrack. If the customer follows up by saying “the last one,” the system must determine whether that means the last item in the on-screen list, the last Harry Potter movie, or something else.

Alexa’s ability to handle these types of interactions derives in part from research that my colleagues and I presented earlier this year at the annual meeting of the Association for the Advancement of Artificial Intelligence. In our paper, we consider three different neural-network designs that treat query resolution as an integrated problem involving both on-screen data and natural-language understanding.

We find that they consistently outperform a natural-language-understanding network that uses hand-coded rules to factor in on-screen data. And on inputs that consist of voice only, their performance is comparable to that of a system trained exclusively on speech inputs. That means that extending the network to consider on-screen data does not degrade accuracy for voice-only inputs.

The other models we investigated are derivatives of the voice-only model, so I’ll describe it first.

All of our networks were trained to classify utterances according to two criteria, intent and slot. An intent is the action that the customer wants Alexa to perform, such as PlayAction<Movie>. Slot values designate the entities on which the intents act, such as ‘Harry Potter’->Movie.name. We have found, empirically, that training a single network to perform both classifications works better than training a separate network for each.
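To make the two label types concrete, a single (hypothetical) training example might pair one intent label for the whole utterance with one slot tag per token, roughly as in the sketch below; the actual label inventory and annotation format are more elaborate than this.

```python
# Illustrative only: a hypothetical joint intent/slot training example.
example = {
    "utterance": ["play", "harry", "potter"],
    "intent": "PlayAction<Movie>",                    # one intent label per utterance
    "slots":  ["O", "B-Movie.name", "I-Movie.name"],  # one slot tag per token (BIO scheme assumed)
}
```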

As inputs to the network, we use two different embeddings of each utterance. Embeddings represent words as points in a geometric space, such that strings with similar meanings (or functional roles) are clustered together. Our network learns one embedding from the data on which it is trained, so it is specifically tailored to typical Alexa commands. We also use a standard embedding, based on a much larger corpus of texts, which groups words together according to the words they co-occur with.

The embeddings pass to a bidirectional long short-term memory network. A long short-term memory (LSTM) network processes inputs in order, and its judgment about any given input reflects its judgments about the preceding inputs. LSTMs are widely used in both speech recognition and natural-language processing because they can use context to resolve ambiguities. A bidirectional LSTM (bi-LSTM) is a pair of LSTMs that process an input utterance both backward and forward.

Intent classification is based on the final outputs of the forward and backward LSTMs, since the networks’ confidence in their intent classifications should increase the more of the utterance they see. Slot classification is based on the total output of the LSTMs, since the relevant slot values can occur anywhere in the utterance.
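Putting those pieces together, here is a minimal PyTorch sketch of a joint intent-and-slot model of this kind. The framework, layer sizes, and variable names are illustrative assumptions, not our exact configuration.

```python
# Minimal sketch of the voice-only model described above: two word embeddings
# (one learned from scratch, one pretrained and frozen) are concatenated, fed
# to a bi-LSTM, and two heads predict the intent (from the final forward and
# backward states) and a slot tag for every token.
import torch
import torch.nn as nn

class JointIntentSlotModel(nn.Module):
    def __init__(self, vocab_size, pretrained_vectors, n_intents, n_slots,
                 learned_dim=100, hidden_dim=200):
        super().__init__()
        self.learned_emb = nn.Embedding(vocab_size, learned_dim)
        # Pretrained embedding trained on a much larger corpus, kept frozen.
        self.pretrained_emb = nn.Embedding.from_pretrained(pretrained_vectors, freeze=True)
        input_dim = learned_dim + pretrained_vectors.size(1)
        self.bilstm = nn.LSTM(input_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.intent_head = nn.Linear(2 * hidden_dim, n_intents)  # final fwd + bwd states
        self.slot_head = nn.Linear(2 * hidden_dim, n_slots)      # applied at every time step

    def forward(self, token_ids):
        x = torch.cat([self.learned_emb(token_ids), self.pretrained_emb(token_ids)], dim=-1)
        outputs, (h_n, _) = self.bilstm(x)                    # outputs: (batch, seq, 2*hidden)
        final_states = torch.cat([h_n[0], h_n[1]], dim=-1)    # final forward and backward states
        intent_logits = self.intent_head(final_states)        # (batch, n_intents)
        slot_logits = self.slot_head(outputs)                 # (batch, seq, n_slots)
        return intent_logits, slot_logits
```

The intent head sees only the concatenated final states, while the slot head is applied to the LSTM output at every time step, mirroring the division of labor described above.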

A diagram describing the architectures of all four neural models we evaluated. The baseline system, which doesn’t use screen information, received only the (a) inputs. The three multimodal neural systems received, respectively, (a) and (b); (a), (b), and (c); and (a), (b), and (d).

The data on which we trained all our networks was annotated using the Alexa Meaning Representation Language, a formal language that captures more sophisticated relationships between the parts of an input sentence than earlier methods did. A team of Amazon researchers presented a paper describing the language earlier this year at the annual meeting of the North American chapter of the Association for Computational Linguistics.

The other four models we investigated factored in on-screen content in various ways. The first was a benchmark system that modifies the outputs of the voice-only network according to hand-coded rules.

If, for instance, a customer says, “Play Harry Potter,” the voice-only classifier, absent any other information, might estimate a 50% probability that the customer means the audiobook, a 40% probability that she means the movie, and a 10% probability that she means the soundtrack. If, however, the screen is displaying only movies, our rules would boost the probability that the customer wants the movie.

The factors by which our rules increase or decrease probabilities were determined by a “grid search” on a subset of the training data, in which an algorithm automatically swept through a range of possible modifications to find those that yielded the most accurate results.
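As a rough sketch of how such a reranking rule and grid search could be wired up (the actual rules, boost factors, and search grid aren’t spelled out in this post, so everything below is an illustrative assumption):

```python
# Hedged sketch of the rule-based benchmark: boost the classifier's
# probabilities for hypotheses whose type matches what is on screen, then
# renormalize. The boost values and scoring function are placeholders.

def rerank(probs, onscreen_types, boost):
    """probs: dict mapping hypothesis type (e.g. 'Movie') to probability."""
    rescored = {t: p * (boost if t in onscreen_types else 1.0)
                for t, p in probs.items()}
    total = sum(rescored.values())
    return {t: p / total for t, p in rescored.items()}

def grid_search(dev_examples, candidate_boosts=(1.0, 1.5, 2.0, 3.0, 5.0)):
    """Pick the boost factor that yields the most correct top-1 predictions."""
    def accuracy(boost):
        correct = 0
        for probs, onscreen_types, gold in dev_examples:
            reranked = rerank(probs, onscreen_types, boost)
            if max(reranked, key=reranked.get) == gold:
                correct += 1
        return correct / len(dev_examples)
    return max(candidate_boosts, key=accuracy)

# Example from the text: with only movies on screen, the movie hypothesis
# gets boosted relative to the audiobook and the soundtrack.
print(rerank({"Book": 0.5, "Movie": 0.4, "Soundtrack": 0.1}, {"Movie"}, boost=2.0))
```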

The first of our experimental neural models takes as input both the embeddings of the customer’s utterances and a vector representing the types of data displayed on-screen, such as Onscreen_Movie or Onscreen_Book. We assume a fixed number of data types, so the input is a “one-hot” vector, with a bit for each type. If data of a particular type is currently displayed on-screen, its bit is set to 1; otherwise, its bit is set to 0.
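In code, the screen-type input might look like the sketch below; the type inventory and the way the vector is combined with the utterance representation are assumptions for illustration.

```python
# Sketch of encoding on-screen data types as a fixed-length binary vector;
# the specific type inventory here is a placeholder.
import torch

SCREEN_TYPES = ["Onscreen_Movie", "Onscreen_Book", "Onscreen_Soundtrack"]  # assumed inventory

def encode_screen_types(visible_types):
    bits = [1.0 if t in visible_types else 0.0 for t in SCREEN_TYPES]
    return torch.tensor(bits)

screen_vec = encode_screen_types({"Onscreen_Movie"})   # tensor([1., 0., 0.])
# This vector is then fed to the network alongside the utterance embeddings,
# e.g., concatenated with the bi-LSTM's final states before the intent head.
```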

The next neural model takes as additional input not only the type of data displayed on-screen but the specific name of each data item — so not just Onscreen_Movie but also ‘Harry Potter’ or ‘The Black Panther’. Those names, too, undergo an embedding, which the network learns to perform during training.
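One simple way to turn a variable number of on-screen names into a fixed-size input, sketched below, is to embed each name’s tokens and average; the post doesn’t specify the aggregation, so treat this as one plausible reading rather than the method we used.

```python
# Sketch of representing on-screen item names: each name's tokens are embedded
# (with an embedding learned during training) and averaged into a name vector,
# and the name vectors are averaged into one screen-content vector. The simple
# averaging is an assumption for illustration.
import torch
import torch.nn as nn

name_emb = nn.Embedding(num_embeddings=50_000, embedding_dim=100)

def encode_names(name_token_ids):
    """name_token_ids: list of 1-D LongTensors, one per on-screen name."""
    name_vectors = [name_emb(ids).mean(dim=0) for ids in name_token_ids]
    return torch.stack(name_vectors).mean(dim=0)   # one vector for the whole screen
```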

Our third and final neural model factors in the names of on-screen data items as well, but in a more complex way. During training, it uses convolutional filters to, essentially, identify the separate contribution that each name on the screen makes toward the accuracy of the final classification. During operation, it thus bases each of its classifications on the single most relevant name on-screen, rather than all the names at once.
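A sketch of that convolutional variant appears below: a filter bank scores each on-screen name, and max-pooling over the names keeps only the strongest responses, so the classification effectively leans on the single most relevant name. Filter sizes and pooling details are illustrative assumptions.

```python
# Sketch of the convolutional variant: a 1-D convolution produces features for
# each on-screen name, and max-pooling over names keeps the most relevant one.
import torch
import torch.nn as nn

class NameRelevanceEncoder(nn.Module):
    def __init__(self, emb_dim=100, n_filters=64):
        super().__init__()
        # One convolutional filter bank applied across the on-screen names.
        self.conv = nn.Conv1d(in_channels=emb_dim, out_channels=n_filters, kernel_size=1)

    def forward(self, name_vectors):
        """name_vectors: (num_names, emb_dim) — one embedded vector per on-screen name."""
        x = name_vectors.t().unsqueeze(0)        # (1, emb_dim, num_names)
        features = torch.relu(self.conv(x))      # (1, n_filters, num_names)
        pooled, _ = features.max(dim=2)          # max over names: keep the most relevant
        return pooled.squeeze(0)                 # (n_filters,) screen-content feature
```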

So, in all, we built, trained, and evaluated five different networks: the voice-only network; the voice-only network with hand-coded rules; the voice-and-data-type network; the voice, data type, and data name network; and the voice, data type, and convolutional-filter network.

We tested each of the five networks on four different test sets: slot classification with and without screen information, and intent classification with and without screen information.

We evaluated performance according to two different metrics, micro-F1 and macro-F1. Macro-F1 scores the networks’ performance separately on each intent and slot, then averages the results. Micro-F1, by contrast, pools the scores across intents and slots and then averages. Macro-F1 therefore gives more weight to intents and slots that are underrepresented in the data, micro-F1 less.
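For readers who want to see the difference, here is a tiny, made-up illustration using scikit-learn: when a rare class is always missed, the macro average drops far more than the micro average.

```python
# Invented labels, for illustration only.
from sklearn.metrics import f1_score

y_true = ["PlayMovie"] * 8 + ["PlayBook"] * 2
y_pred = ["PlayMovie"] * 8 + ["PlayMovie"] * 2   # the rare class is always missed

print(f1_score(y_true, y_pred, average="micro", zero_division=0))  # 0.80 — dominated by the frequent class
print(f1_score(y_true, y_pred, average="macro", zero_division=0))  # ~0.44 — the missed rare class counts equally
```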

According to micro-F1, all three multimodal neural nets outperformed both the voice-only and the rule-based system across the board. The difference was dramatic on the test sets that included screen information, as might be expected, but the neural nets even had a slight edge on voice-only test sets. On all four test sets, the voice, data type, and data name network achieved the best results.

According to macro-F1, the neural nets generally outperformed the baseline systems, although the voice, data type, and data name network lagged slightly behind the baselines on voice-only slot classification. There was more variation in the top-performing system, too, with each of the three neural nets achieving the highest score on at least one test. Again, however, the neural nets dramatically outperformed the baseline systems on test sets that included screen information.

Acknowledgments: Angeliki Metallinou, Rahul Goel
