A team of designers, engineers, software developers, and scientists spent many months hypothesizing, experimenting, learning, iterating, and ultimately creating Echo Show 10, which was released Thursday.

The intersection of design and science

How a team of designers, scientists, developers, and engineers worked together to create a truly unique device in Echo Show 10.

During the prototyping stages of the journey that brought Echo Show 10 to life, the design, engineering, and science teams behind it encountered a surprise: one of their early assumptions was proving to be wrong.

The feature that most distinguishes the current generation from its predecessors is the way the device utilizes motion to automatically face users as they move around a room and interact with Alexa. This allows users to move around in the kitchen while consulting a recipe, or to move freely when engaging in a video call, with the screen staying in view.

Naturally, or so the team thought, users would want the device to remain facing them, matching where they were at all times. “You walk from the sink to the fridge, say, while you're using the device for a recipe, and the device moves with you,” said David Rowell, principal UX designer. Because no hardware existed, the team had to create a method of prototyping, so they turned to virtual reality (VR). That approach enabled Echo Show 10 teams to work together to test assumptions — including their assumption about how the screen should behave. In this case, what they experienced in VR made them change course.


“We had a paradigm that we thought worked really well, but once we tested it, we quickly discovered that we don't want to be one-to-one accurate,” said David Jara, senior UX motion designer. In fact, he said, the feedback led them to a somewhat unexpected conclusion: the device should actually lag behind the user. “Even though, from a pragmatic standpoint, you would think, ‘Well, this thing is too slow. Why can't it keep up?’, once you experienced it, the slowed-down version was so much more pleasant.”
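
That deliberate lag is essentially a damping problem. As a loose illustration (not Amazon's implementation), the sketch below eases the screen a fixed fraction of the way toward the user's bearing on each update; the smoothing factor is an invented value.

```python
def smooth_rotation(current_deg: float, target_deg: float,
                    smoothing: float = 0.15) -> float:
    """Ease the screen a fraction of the way toward the user each update.

    A small smoothing factor makes the display trail the user slightly,
    producing the "lagged" feel the team found more pleasant than
    one-to-one tracking.
    """
    # Wrap the error into [-180, 180) so the device turns the short way around.
    error = (target_deg - current_deg + 180.0) % 360.0 - 180.0
    return current_deg + smoothing * error

# The user steps 90 degrees to one side; the screen closes the gap gradually,
# approaching 90 a little more slowly on each step.
angle = 0.0
for _ in range(5):
    angle = smooth_rotation(angle, 90.0)
```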

This was just one instance of the assumption-changing feedback and research that required a team of designers, engineers, software developers, and scientists to constantly iterate and adapt. Those teams spent many months hypothesizing, experimenting, learning, iterating, and ultimately creating Echo Show 10, which was released Thursday. Amazon Science talked to some of those team members to find out how they collaborated to tackle the challenges of developing a motorized smart display that pairs sound localization technology with computer vision models.

From idea to iteration

“The idea came from the product team about ways we could differentiate Echo Show,” Rowell said. “The idea came up about this rotating device, but we didn't really know what we wanted to use it for, which is when design came in and started creating use cases for how we could take advantage of motion.”

The design team envisioned a device that moved with users in a way that was both smooth and useful.

Adding motion to Echo Show was a really big undertaking. There were a lot of challenges, including how do we make sure that the experience is natural.
Dinesh Nair, applied science manager

That presented some significant challenges for the scientists involved in the project. “Adding motion to Echo Show was a really big undertaking,” said Dinesh Nair, an applied science manager in Emerging Devices. “There were a lot of challenges, including how do we make sure that the experience is natural, and not perceived as creepy by the user.”

Not only did the team have to account for creating a motion experience that felt natural, they had to do it all on a relatively small device. "Building state-of-the-art computer vision algorithms that were processed locally on the device was the greatest challenge we faced," said Varsha Hedau, applied science manager.

The multi-faceted nature of the project also prompted the teams to test the device in a fairly new way. “When the project came along, we decided that VR would be a great way to actually demonstrate Echo Show 10, particularly with motion,” Rowell noted. “How could it move with you? How does it frame you? How do we fine tune all the ways we want machine learning to move with the correct person?”

Behind each of those questions lay challenges for the design, science, and engineering teams. To identify and address those challenges, the far-flung teams collaborated regularly, even in the midst of a pandemic. “It was interesting because we’re spread over many different locations in the US,” Rowell said. “We had a lot of video calls and VR meant teams could very quickly iterate. There was a lot of sharing and VR was great for that.”

Clearing the hurdles

One of the first hurdles the teams had to clear was how to accurately and consistently locate a person.

“The way we initially thought about doing this was to use spatial cues from your voice to estimate where you are,” Nair said. “Using the direction given by Echo’s chosen beam, the idea was to move the device to face you, and then computer vision algorithms would kick in.”

The science behind Echo Show 10

A combination of audio and visual signals guides the device’s movement, so the screen is always in view. Learn more about the science that powers that intelligent motion.

That approach presented dual challenges. Current Echo devices form beams in multiple directions and then choose the best beam for speech recognition. “One of the issues with beam selection is that the accuracy is plus or minus 30 degrees for our traditional Echo devices,” Nair observed. “Another is interference from noise and sound reflections, for example if you place the device in a corner or there is noise near the person.” The acoustic reflections were particularly vexing, since they interfere with the direct sound from the person speaking, especially when the device is playing music. Traditional sound source localization algorithms are also susceptible to these problems.

The Audio Technology team addressed these challenges to determine the direction of sound by developing a new sound localization algorithm. “By breaking down sound waves into their fundamental components and training a model to detect the direct sound, we can accurately determine the direction that sound is coming from,” said Phil Hilmes, director of audio technology. That, along with other algorithm developments, led the team to deliver a sound direction algorithm that was more robust to reflections and interference from noise or music playback, even when it is louder than the person’s voice.
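
Hilmes's description, decomposing sound into its fundamental components and isolating the direct sound, is reminiscent of classic frequency-domain localization methods such as GCC-PHAT, which whitens the cross-spectrum of a microphone pair so that phase (and hence time of arrival) dominates over the energy of reflections. The sketch below is that textbook baseline, not Amazon's trained model; the sample rate and microphone spacing are assumptions.

```python
import numpy as np

def gcc_phat_doa(sig: np.ndarray, ref: np.ndarray, fs: int,
                 mic_distance: float, c: float = 343.0) -> float:
    """Estimate direction of arrival (degrees) for a two-microphone pair.

    GCC-PHAT normalizes out magnitude, keeping only the phase of the
    cross-spectrum, which carries the time difference of arrival and is
    more robust to reverberation than raw cross-correlation.
    """
    n = len(sig) + len(ref)
    # Cross-power spectrum of the two microphone signals.
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    # Phase transform: discard magnitude, keep phase.
    R /= np.abs(R) + 1e-15
    cc = np.fft.irfft(R, n=n)
    # Only lags physically consistent with the mic spacing are valid.
    max_shift = max(1, int(fs * mic_distance / c))
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    tau = (np.argmax(np.abs(cc)) - max_shift) / fs
    # Convert the time difference of arrival to a bearing angle.
    return float(np.degrees(np.arcsin(np.clip(tau * c / mic_distance, -1.0, 1.0))))
```

In practice, a learned model can outperform this baseline by recognizing which time-frequency components belong to the direct path rather than to reflections or music playback, which is the direction the quote points toward.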

Rowell said, “When we originally conceived of the device, we envisioned it being placed in open space, like a kitchen island, so you could use the device effectively from multiple rooms.” Customer feedback during beta testing showed this assumption ran into literal walls. “We found that people actually put the device closer to walls, so the device had to work well in these positions.” In some of these more challenging positions, using only audio to find the direction is still insufficient for accurate localization, and extra cues from other sensors are needed.

Echo Show 10 designers initially thought it would be placed in open space, like a kitchen island. Feedback during beta testing showed customers placed it closer to walls, so the teams adjusted.

The design team worked with the science teams so the device relied not just on sound, but also on computer vision. Computer vision algorithms allow the device to locate humans within its field of view, helping it improve accuracy and distinguish people from sounds reflecting off walls, or coming from other sources. The teams also developed fusion algorithms for combining computer vision and sound direction into a model that optimized the final movement.

That collaboration enabled the design team to work with the device engineers to limit the device’s rotation. “That approach prevented the device from turning and basically looking away from you or looking at the wall or never looking at you straight on,” Rowell said. “It really tuned in the algorithms and got better at working out where you were.”
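
Amazon hasn't published the fusion model, but the general shape of the idea can be sketched as a confidence-weighted blend of the two bearing estimates, clamped to the rotation limits Rowell describes. Everything here, the weights, limits, and function name, is hypothetical.

```python
from typing import Optional

def fuse_bearing(audio_deg: float, audio_conf: float,
                 vision_deg: Optional[float], vision_conf: float,
                 min_deg: float = -175.0, max_deg: float = 175.0) -> float:
    """Blend audio and vision direction estimates into one target bearing.

    A detected person is usually a tighter cue than sound alone, which can
    be off by tens of degrees or fooled by reflections off a nearby wall.
    Clamping keeps the device from turning past its end stops and "looking
    away" from the user.
    """
    if vision_deg is None:  # no person detected yet: fall back to audio
        target = audio_deg
    else:
        total = audio_conf + vision_conf
        target = (audio_conf * audio_deg + vision_conf * vision_deg) / total
    return max(min_deg, min(max_deg, target))
```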

The teams undertook a thorough review of every assumption made in the design phase and adapted based on actual customer interactions. That included the realization that the device’s tracking speed didn’t need to be slow so much as it needed to be intelligent.

“The biggest challenge with Echo Show 10 was to make motion work intelligently,” said Meeta Mishra, principal technical program manager for Echo Devices. “The science behind the device movement is based on fusion of various inputs like sound source, user presence, device placement, and lighting conditions, to name a few. The internal dog-fooding, coupled with the work-from-home situation, brought forward the real user environment for our testing and iterations. This gave us wider exposure to the varied home conditions needed to formulate the right user experience that will work in typical households and also strengthened our science models to make this device a delight.”

Frame rates and bounding boxes

Responding to the user feedback about the preference for intelligent motion meant the science and design teams also had to navigate issues around detection. “Video calls often run at 24 frames a second,” Nair observed. “But a deep learning network that accurately detects where you are doesn't run as fast; it's typically running at 10 frames per second on the device.”

That latency meant several teams had to find a way to bridge the difference between the frame rates. “We had to work with not just the design team, but also the team that worked on the framing software,” Nair said. “We had to figure out how we could give intermediate results between detections by tracking the person.”

By breaking down sound waves into their fundamental components and training a model ... we can accurately determine the direction that sound is coming from.
Phil Hilmes, director of audio technology

Hedau and her team helped deliver the answer in the form of bounding boxes and Kalman filtering, an algorithm that provides estimates of some unknown variables given the measurements observed over time. That approach allows the device to, essentially, make informed guesses about a user’s movement.
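
A minimal version of that idea, under the assumption of a constant-velocity model on the bounding box's horizontal center: predict() runs every rendered frame, while update() runs only when the slower detector delivers a fresh box, so the device still gets a position estimate on the in-between frames. The noise values are placeholders, not Amazon's tuning.

```python
import numpy as np

class BoxTracker:
    """Constant-velocity Kalman filter on a bounding box's horizontal center.

    predict() runs every rendered frame (e.g., 24 fps); update() runs only
    when the slower detector (e.g., 10 fps) reports a fresh bounding box.
    """

    def __init__(self, dt: float = 1 / 24, q: float = 50.0, r: float = 5.0):
        self.x = np.zeros(2)                        # state: [position, velocity]
        self.P = np.eye(2) * 1e3                    # state uncertainty
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
        self.H = np.array([[1.0, 0.0]])             # we only measure position
        self.Q = np.eye(2) * q                      # process noise (assumed)
        self.R = np.array([[r]])                    # measurement noise (assumed)

    def predict(self) -> float:
        """Advance the estimate one frame; call this at the display rate."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return float(self.x[0])                     # estimated center this frame

    def update(self, measured_center: float) -> None:
        """Correct the estimate with a fresh detection from the slower network."""
        y = measured_center - (self.H @ self.x)[0]  # innovation
        S = self.H @ self.P @ self.H.T + self.R     # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K[:, 0] * y
        self.P = (np.eye(2) - K @ self.H) @ self.P
```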

During testing, the teams also discovered that the device would need to account for the manner in which a person interacted with it. “We found that when people are on a call, there are two use cases,” Rowell observed. “They're either very engaged with the call, where they’re close to the device and looking at the device and the other person on the other end, or they're multitasking.”

The solution was born, yet again, from collaboration. “We went through a lot of experiments to model which user experience really works the best,” Hedau said. Those experiments led the teams to use the device’s computer vision to estimate the distance between a person and Echo Show 10.

“We have settings based on the distance that the customer is from the device, which is a way to roughly measure how engaged a customer is,” Rowell said. “When a person is really up close, we don't want the device to move too much because the screen just feels like it's fidgety. But if somebody is on a call and multitasking, they're moving a lot. In this instance, we want smoother transitions.”
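
Put together with the smoothing sketch earlier, that engagement logic might look like the following; the distance thresholds and damping factors are invented for illustration.

```python
def smoothing_for_distance(distance_m: float) -> float:
    """Choose how aggressively to follow the user, based on distance.

    Up close (engaged with the call), heavy damping keeps the screen from
    feeling fidgety; farther away (multitasking), larger movements are
    tracked with smoother, fuller transitions.
    """
    if distance_m < 1.0:     # engaged: barely move
        return 0.05
    if distance_m < 2.5:     # in between
        return 0.15
    return 0.30              # multitasking across the room

# Feed the result into smooth_rotation() from the earlier sketch:
# angle = smooth_rotation(angle, target_deg, smoothing_for_distance(d))
```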

Looking to the future

The teams behind the Echo Show 10 are, unsurprisingly, already pondering what’s next. Rowell suggested that, in the future, the Echo Show might show a bit of personality. "We can make the device more playful," he said. "We could start to express a lot of personality with the hardware." [Editor’s note: Some of this is currently enabled via APIs; certain games can “take on new personality through the ability to make the device shake in concert with sound effects and on-screen animations.”]

Nair said his team will also focus on making the on-device processing even faster. “A significant portion of the overall on-device processing is CV and deep learning,” he noted. “Deep networks are always evolving, and we will keep pushing that frontier.”

“Our teams are working continuously to further push the performance of our deep learning models in corner cases such as multiple people, low lighting, fast motion, and more,” added Hedau.

Whatever route Echo Show goes next, the teams behind it already know one thing for certain: they can collaborate their way through just about anything. “With Echo Show 10, there were a lot of assumptions we had when we started, but we didn’t know which would prove true until we got there,” Jara said. “We were kind of building the plane as we were flying it.”
