The science behind Echo Show 10

A combination of audio and visual signals guides the device’s movement, so the screen is always in view.

The first Echo Show represented an entirely new way to interact with Alexa; she could show you things on a screen controlled by voice. Being able to easily see your favorite recipe, watch your flash briefing, or video call with a friend is delightful — but we thought we could add even more to the experience. Our screens are stationary, but we are not. So with Echo Show 10, we asked ourselves: how can we keep the screen in view, no matter where you are in the room? The answer: it has to move.

Creating a device that can move intelligently in a way that improves the Alexa experience and is not distracting was no easy task. We had to consider when, where, and how to incorporate motion into Echo Show to make it feel like a natural extension of how customers experience Alexa.

Combining audio and computer vision algorithms

When you say “Alexa” to any Echo Show device today, you’ll see a blue light bar on screen. The lighter part of that blue light bar approximates the direction on which the device chooses to focus; we call this beam selection. Echo devices try to select the beam that gives the best accuracy for recognizing what was said.
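As a toy illustration, beam selection can be thought of as picking the candidate beam whose speech signal scores highest. The function, the per-beam scores, and the scoring itself are hypothetical; the actual scoring used on Echo devices is not described in this article.

```python
# Hypothetical sketch of beam selection: given per-beam speech scores
# (e.g., signal-to-noise estimates), pick the beam most likely to
# contain clean speech. Names and values are illustrative only.

def select_beam(beam_scores):
    """Return the index of the beam with the highest score."""
    best_index = 0
    for i, score in enumerate(beam_scores):
        if score > beam_scores[best_index]:
            best_index = i
    return best_index

# Eight beams spaced 45 degrees apart; beam 2 carries the strongest speech.
scores = [0.1, 0.4, 0.9, 0.3, 0.2, 0.1, 0.05, 0.2]
print(select_beam(scores))  # -> 2
```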

A cutaway view of Echo Show 10's motor (brass disc at bottom).

However, what works for beam selection doesn’t work best for guiding motion. Noises, multiple speakers, or sound reflections from walls and other surfaces can prevent these algorithms from selecting the beam that best represents the direction of the talker. And with audio-only output, it doesn’t matter if Echo’s input system has selected a different beam: the user still hears Alexa’s response. But a screen that’s constantly moving around to avoid these echoes and noises would be a severe distraction.

With Echo Show 10, we solve this problem by combining sound source localization (SSL) with computer vision (CV). Our implementation of SSL uses acoustic-wave-decomposition and machine-learning techniques to determine the direction in which the user is most probably located. Then, the raw SSL measurements are fused with our CV algorithms.
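A minimal sketch of the kind of fusion described above, under stated assumptions: SSL yields candidate directions with probabilities, CV yields angles at which people were detected, and candidates near a detected person are preferred. The function name, down-weighting factor, and angular tolerance are all illustrative assumptions, not the product's actual algorithm.

```python
def fuse_ssl_with_cv(ssl_candidates, person_angles, tolerance_deg=20.0):
    """Among SSL direction candidates (angle_deg, probability), prefer
    those that lie near a person detected by computer vision.
    Thresholds and weights here are illustrative assumptions."""
    best_angle, best_score = None, -1.0
    for angle, prob in ssl_candidates:
        # Smallest angular difference, accounting for 0/360 wraparound
        near_person = any(
            abs((angle - p + 180) % 360 - 180) <= tolerance_deg
            for p in person_angles
        )
        # Heavily down-weight directions with no person in view
        score = prob * (1.0 if near_person else 0.2)
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle

# SSL's strongest candidate (150 deg) is a wall reflection; CV sees a
# person at 30 deg, so the fused estimate picks the nearby 32-deg beam.
print(fuse_ssl_with_cv([(150, 0.6), (32, 0.5)], [30]))  # -> 32
```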

The intersection of design and science

Learn how a team of designers, scientists, and engineers worked together to overcome challenges and create Echo Show 10.

The CV algorithms can identify objects and humans in the field of view, enabling the device to differentiate between sounds coming from people and those coming from other sources and reflections off walls. Sometimes audio can reflect from behind the device, so we added a setup step in which customers set the device’s range of motion. If the device can ignore sounds originating outside its range of motion, it’s better able to avoid reflections and narrow down the direction of the wake word.
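The range-of-motion check described above can be sketched as a simple angular-interval test. The function below is a hypothetical illustration; note that a real range of motion may wrap past the 0/360-degree boundary, which the sketch handles explicitly.

```python
def within_motion_range(angle_deg, range_start_deg, range_end_deg):
    """True if a sound direction falls inside the user-configured range
    of motion. Handles ranges that wrap past 0/360 degrees.
    Illustrative sketch only."""
    a = angle_deg % 360
    s, e = range_start_deg % 360, range_end_deg % 360
    if s <= e:
        return s <= a <= e
    # Wrapped range, e.g., 300 deg through 60 deg
    return a >= s or a <= e

# A range set from 300 deg to 60 deg covers the front of the device;
# a sound at 180 deg (behind it) is ignored as a likely reflection.
print(within_motion_range(20, 300, 60))   # -> True
print(within_motion_range(180, 300, 60))  # -> False
```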

The CV algorithms turn the camera image into hundreds of data points representing shapes, edges, facial landmarks, and general coloring; then the image is permanently deleted. These data points cannot be reverse-engineered to recover the original image, and no facial-recognition technology is used. All of this processing happens in a matter of milliseconds, entirely on-device.

A visualization of the non-reversible process Echo Show 10 uses to convert images into a higher-level abstraction to support motion.

The device’s computer vision service (CVS) can dynamically vary the frame rate (the number of frames per second), and it operates with over 95% precision at distances of up to 10 feet. The CVS uses spatiotemporal filtering to suppress ephemeral false positives caused by camera motion and blur. In a multiuser environment, engagement detection — determining which user is facing the device — helps us further target the screen to the relevant user or users.
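Spatiotemporal filtering can take many forms; the sketch below shows only its simplest temporal component, a persistence check that confirms a detection only after it has appeared in most recent frames. The class name, window size, and threshold are illustrative assumptions, not the CVS's actual parameters.

```python
from collections import deque

class PersistenceFilter:
    """Suppress ephemeral false positives (e.g., from camera motion or
    blur) by confirming a detection only once it has appeared in at
    least `required` of the last `window` frames. Illustrative sketch."""

    def __init__(self, window=5, required=4):
        self.history = deque(maxlen=window)  # rolling record of recent frames
        self.required = required

    def update(self, detected: bool) -> bool:
        self.history.append(detected)
        return sum(self.history) >= self.required

# A one-frame dropout does not break tracking; a one-frame glitch
# never triggers a confirmed detection.
f = PersistenceFilter(window=5, required=4)
results = [f.update(d) for d in [True, True, False, True, True]]
print(results)  # -> [False, False, False, False, True]
```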

Defining the experience

With our algorithms built, the next step was to orchestrate the ideal customer experience. We started by capturing data from internal beta participants and product teams. Amazon employees tested Echo Show 10 in their homes, and before the hardware was even ready, we used virtual reality to gather early input on what movements felt most natural, preferred speed of motion, and so on. What we learned was invaluable.

First, knowing when not to move is just as important as knowing when to move. We wanted customers to be able to manually redirect the screen. But that meant distinguishing between the pressure applied by someone scrolling through a recipe while making dinner and someone physically trying to move the device. The device also needed to know that if it turned in one direction and hit something — a wall, cabinet, etc. — it should not continue to go in that direction.

This required a motor resistance — or “back drive” — that could kick in, or not, depending on the user’s movement. A lot of fine-tuning went into getting that distinction and timing right.
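One way to picture that distinction is as a classification over the force on the screen and how long it is applied: a light touch is ignored, a sustained push lets the user redirect the screen, and a brief bump holds position. Every threshold below is a made-up placeholder, not a product value; the real tuning involved far more signals than this.

```python
def classify_force(torque_nm, duration_s,
                   touch_limit_nm=0.05, push_limit_s=0.15):
    """Distinguish a light screen touch (scrolling) from a deliberate
    physical push that should redirect the screen. All thresholds are
    illustrative assumptions, not product values."""
    if torque_nm < touch_limit_nm:
        return "touch"      # ignore: user is tapping or scrolling
    if duration_s >= push_limit_s:
        return "redirect"   # sustained force: let the user re-aim the screen
    return "transient"      # brief bump: hold position

print(classify_force(0.02, 0.30))  # -> 'touch'
print(classify_force(0.20, 0.30))  # -> 'redirect'
print(classify_force(0.20, 0.05))  # -> 'transient'
```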

We also had to determine a speed and acceleration that felt natural. The motor allows us to accelerate at up to 360 degrees/second² to a speed of up to 180 degrees/second. However, at that speed, in a typical in-home environment, you risk knocking over a glass or a picture frame that might be near the device. Move too slowly, on the other hand, and you might try the customer’s patience — and even risk spurious stall detection. We settled on a speed that was quick but also allowed the device to stop short if it bumped an object.
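Using the two motor limits quoted above, a standard trapezoidal velocity profile (accelerate, cruise, decelerate) gives the minimum time for a rotation. The profile choice and function are an illustrative assumption; only the acceleration and speed limits come from the text, and the shipping device deliberately moves more slowly than this minimum.

```python
import math

MAX_ACCEL = 360.0  # degrees/second^2, the motor's limit quoted above
MAX_SPEED = 180.0  # degrees/second

def min_rotation_time(angle_deg):
    """Minimum time to rotate through `angle_deg` with a trapezoidal
    velocity profile at the motor's limits. Illustrative sketch."""
    angle = abs(angle_deg)
    # Angle swept while ramping from 0 to MAX_SPEED (and back down)
    ramp = MAX_SPEED ** 2 / (2 * MAX_ACCEL)  # 45 degrees
    if angle <= 2 * ramp:
        # Triangular profile: the move ends before max speed is reached
        return 2 * math.sqrt(angle / MAX_ACCEL)
    cruise = angle - 2 * ramp
    return 2 * (MAX_SPEED / MAX_ACCEL) + cruise / MAX_SPEED

# A half turn (180 degrees): 0.5 s ramp up, 0.5 s cruise, 0.5 s ramp down.
print(min_rotation_time(180))  # -> 1.5
```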

Lastly, we needed to define the types of movements that Echo Show 10 will make. As humans, we have an innate ability to know when to respond with our eyes versus a full move of the head. Echo Show 10, while not quite as adaptive as a human, tries to approximate this distinction with three zones of perception, defined by the camera’s field of view.

Within the “dead” zone, the center of the field of view, the device doesn’t move, even if the customers do. Within the “holding” zone, the regions of the field of view outside the center, the device turns only if the customer settles into a new position for long enough. And when the customer enters the “motion” zone, the edges of the field of view, the device moves, ensuring that the screen always remains visible.
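The three zones above can be sketched as a classification over where a person sits in the camera's field of view. The zone boundaries and field-of-view width below are illustrative placeholders; as the next paragraph notes, the real ranges also depend on distance and were tuned over thousands of hours of testing.

```python
def perception_zone(offset_deg, fov_deg=110.0):
    """Classify a person's position in the camera's field of view into
    the three zones described above. Boundary fractions and the default
    field of view are illustrative assumptions, not product tuning."""
    half_fov = fov_deg / 2
    frac = min(abs(offset_deg) / half_fov, 1.0)  # 0 = dead center, 1 = edge
    if frac <= 0.3:
        return "dead"     # device stays still, even if the person moves
    if frac <= 0.8:
        return "holding"  # turn only if the person settles here
    return "motion"       # track immediately to keep the screen visible

print(perception_zone(0))   # -> 'dead'
print(perception_zone(30))  # -> 'holding'
print(perception_zone(50))  # -> 'motion'
```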

The range of these zones, their dependency on your distance from the device, and the device’s speed and acceleration are tuned based on thousands of hours of lab and user testing. There are also certain situations where Echo Show 10 will not move — for instance, if the built-in camera shutter is closed or if SSL cannot differentiate between sounds in two very different directions.

Applications

Echo Show stationed on a kitchen counter.

After solving these scientific challenges came the fun part: what are some of the first features that will use motion? Video calling is a hugely popular feature for Echo Show customers, so the use of auto-framing and motion in calling was obvious. Customers also tend to place Echo Show devices in kitchens and use Alexa for recipes, so not requiring a busy cook to strain to see a recipe on-screen was also top of mind.

And because customers love Alexa Guard for helping keep their homes safe while they are away, remote access to the camera was high on the list as well. When Away Mode is turned on, Echo Show 10 will periodically pan the room and send a Smart Alert if someone is detected in its field of view. You can also remotely check in on your home for added peace of mind if you are on a trip or to see if your dog has snuck onto the couch while you’re at the grocery store.

In developing Echo Show 10, I have come to appreciate how complex, evolved, and adaptive we are as a species; the things we communicate with nonverbal cues are incredibly complex yet somehow globally understood. We believe that the potential of motion as a response modality is enormous, and we’re just scratching the surface of all the ways we can delight customers with Echo Show 10. For that reason, we’re inviting developers to build experiences for Echo Show 10, with motion APIs that they can use to unleash their creativity. To learn more about these new APIs, visit our developer blog.
