The science behind Echo Frames

How the team behind Echo Frames delivered longer battery life and improved sound quality inside the slim form factor of a pair of eyeglasses.

When the team behind Amazon’s Echo Frames set out to improve the next generation of their product, they needed to strike a delicate balance. Customer feedback on earlier versions of the smart audio eyeglasses centered on three elements: longer battery life, more style options, and improved sound quality.

Echo Frames feature custom-built speech processing technology that drastically improves word recognition — key for interacting with Alexa in windy or noisy environments.

Achieving all three of those goals would be a challenge in itself; doing that inside the slim form factor of a pair of Alexa-enabled eyeglasses upped the ante.

“All three of those goals are in tension with one another,” says Adam Slaboski, senior manager of product management and product lead for Echo Frames. The easiest way to improve battery and audio would be to increase the size of the device, but that would conflict with feedback around the importance of design. Amping up bass to improve the audio experience would consume more battery, and so on.


“Finding that sweet spot was a huge effort in engineering and customer understanding,” Slaboski says.

With Echo Frames (3rd Gen) and Carrera Smart Glasses with Alexa (designed in collaboration with Safilo, one of the world’s leading eyewear companies), the Smart Eyewear team met the challenge.

The smart glasses feature enhanced audio playback, with custom-built speech-processing technology that dramatically improves word recognition — key for interacting with Alexa in windy or noisy environments. The new range of frame styles comes in a variety of sizes, and every style comes with a significant boost in battery life.

From the outside, Echo Frames still look like a pair of regular eyeglasses. “But we changed everything on the inside,” says Jean Wang, general manager and director of Smart Eyewear. “And we learned new lessons along the way.”

Here’s how Amazon engineers and product designers tackled all three customer demands.

Turning up the volume with open-ear audio

Like previous generations of Echo Frames, the current model uses open-ear audio. In addition to fitting the form factor of a pair of glasses, this allows users to maintain awareness of their surroundings while interacting with Alexa or enjoying audio entertainment.


The open-ear audio design has been popular with users who are blind or have low vision, notes Jenai Akina, senior product manager for Echo Frames. “It’s really beneficial that it doesn’t obstruct a critical sense like hearing,” she explains. “That form factor is really helpful for daily interactions — especially when we want to be open to engage with our environment and the people around us. Open ear allows customers to maintain awareness, while providing access to a voice assistant.”

Open-ear audio brings a host of unique challenges to the engineering process. Typical headphones and earbuds block off the ear from the outside world, preventing air from escaping. That funnels more of the sound waves from the speakers into the user’s ears. With an open-ear design, sound has to travel farther, and there is less control over its direction. That can lower volume and reduce clarity — and, importantly, audio can leak out to people standing nearby. The key is to direct as much sound pressure as possible toward the user’s ears while minimizing leakage.


In working to improve audio quality, the team continued to hone the directionality of the sound while also working to improve volume and bass. A technique called dipole speaker configuration helps to do both. In addition to a sound porthole located near the ear canal, the frames feature a second porthole that cancels unnecessary sound while amping up bass.
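
To make the dipole idea concrete, here is a minimal numerical sketch (not an Echo Frames model) of why two closely spaced, out-of-phase sound ports leak far less to a bystander than a single port, relative to what reaches the wearer’s ear. The port spacing, distances, and test frequency are illustrative assumptions.

```python
# Toy comparison of a single port ("monopole") versus two out-of-phase ports
# ("dipole"). All geometry and the test frequency are invented for illustration;
# they are not Echo Frames specifications.
import numpy as np

SPEED_OF_SOUND = 343.0                        # m/s
FREQ = 1000.0                                 # Hz, a mid-band test tone
K = 2 * np.pi * FREQ / SPEED_OF_SOUND         # wavenumber

def pressure(point, source, phase=0.0):
    """Complex pressure at `point` from a point source at `source`."""
    r = np.linalg.norm(np.asarray(point) - np.asarray(source))
    return np.exp(-1j * (K * r + phase)) / r

# Two ports ~12 mm apart on the temple, the rear one driven out of phase.
port_a = (0.0, 0.0, 0.0)
port_b = (0.012, 0.0, 0.0)

ear = (0.0, 0.02, 0.0)                        # ear canal ~2 cm from the front port
bystander = (0.0, 1.0, 0.0)                   # listener ~1 m away

def level_db(p):
    return 20 * np.log10(abs(p))

for label, point in (("ear", ear), ("bystander", bystander)):
    monopole = pressure(point, port_a)
    dipole = pressure(point, port_a) + pressure(point, port_b, phase=np.pi)
    print(f"{label:10s} monopole {level_db(monopole):6.1f} dB | "
          f"dipole {level_db(dipole):6.1f} dB (arbitrary reference)")
```

In this toy geometry the dipole gives up some level at the ear (recoverable by driving the speakers harder), but the attenuation toward the bystander is far greater, which is exactly the ear-to-bystander contrast the design is after.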

With input from in-house audio experts, and using instruments that measure properties like harmonic distortion, the team developed a set of candidate tunings that met objective targets for audio quality. They then tested those “flavors” of tuning in the lab with several user groups.
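
Harmonic distortion, one of the objective measurements mentioned above, can be estimated from a spectral analysis of the speaker’s response to a pure tone. The sketch below uses a synthetic signal with made-up harmonic levels; in practice the input would be a measurement-microphone recording.

```python
# Minimal total-harmonic-distortion (THD) estimate for a 1 kHz test tone.
# The "speaker output" here is synthetic; harmonic levels are placeholders.
import numpy as np

SAMPLE_RATE = 48_000
TONE_HZ = 1_000
DURATION_S = 1.0

t = np.arange(int(SAMPLE_RATE * DURATION_S)) / SAMPLE_RATE
signal = (np.sin(2 * np.pi * TONE_HZ * t)
          + 0.02 * np.sin(2 * np.pi * 2 * TONE_HZ * t)    # 2nd harmonic
          + 0.01 * np.sin(2 * np.pi * 3 * TONE_HZ * t))   # 3rd harmonic

spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
freqs = np.fft.rfftfreq(len(signal), d=1 / SAMPLE_RATE)

def bin_amplitude(freq_hz):
    """Peak spectral amplitude near a target frequency."""
    window = (freqs > freq_hz - 20) & (freqs < freq_hz + 20)
    return spectrum[window].max()

fundamental = bin_amplitude(TONE_HZ)
harmonics = [bin_amplitude(n * TONE_HZ) for n in range(2, 6)]
thd = np.sqrt(sum(h ** 2 for h in harmonics)) / fundamental
print(f"THD: {100 * thd:.2f}%")   # a tuning target might cap THD at a few percent
```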

“By bringing people into the lab, we can simulate real environmental-noise conditions like wind, background noise in a crowded restaurant, and the sound of cars on the road,” explains senior manager of audio Scott Choi. That allowed his team to understand environmental variables in a controlled setting.

With the feedback from those focus groups, the team then selected a few of the most popular tunings to push out to beta testing, where users could provide feedback on a weekly basis.

“We see how the feedback trends change with each tuning change, which gradually allows it to mature and converge into a certain tuning,” Choi says. The result is audio calibrated to maximize intelligibility and volume without leaking private conversations (or guilty-pleasure playlists).

The Echo Frames team used a rotating arch of microphones to test leakage. The array moved in circles around a mannequin wearing the Gen 3 prototype, creating a 3D sphere plot of audio leakage. Via this testing, the team was able to minimize leakage to the side and back.

To test leakage, the audio team rigged up a rotating arch of microphones. The array moved in circles around a mannequin wearing the Gen 3 prototype, creating a 3-D sphere plot of audio leakage. Choi explains that they focused on minimizing leakage to the side and back, and ultimately, the speakers were moved much closer to the ear to help minimize leakage and improve loudness.
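
Conceptually, the arch measurements amount to a grid of sound-pressure readings over azimuth and elevation, which can then be scanned for directions where leakage exceeds a limit. The following sketch uses placeholder angles, random “readings,” and a hypothetical threshold rather than real test data.

```python
# Rough sketch of assembling arch measurements into a spherical leakage map.
# Angles, the threshold, and the random SPL values are placeholders.
import numpy as np

rng = np.random.default_rng(0)

# The arch holds microphones at fixed elevations; rotating it sweeps azimuth.
elevations = np.linspace(-60, 60, 13)        # degrees
azimuths = np.arange(0, 360, 10)             # degrees, one arch position per step

# Placeholder SPL readings (dB) for every (elevation, azimuth) pair.
spl = rng.normal(loc=35, scale=5, size=(len(elevations), len(azimuths)))

LEAKAGE_LIMIT_DB = 45   # hypothetical limit at the test distance
for elev_idx, az_idx in np.argwhere(spl > LEAKAGE_LIMIT_DB):
    print(f"leakage above limit at elevation {elevations[elev_idx]:+.0f} deg, "
          f"azimuth {azimuths[az_idx]:.0f} deg: {spl[elev_idx, az_idx]:.1f} dB")

# The worst directions (e.g., to the side and back of the head) would then be
# targeted by repositioning the speakers or adjusting the dipole tuning.
worst = np.unravel_index(spl.argmax(), spl.shape)
print("worst direction (elevation index, azimuth index):", worst)
```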

Leakage isn’t the only privacy consideration. The Echo Frames team also continues to innovate on protecting users from bad actors who may get hold of their smart glasses.


Gen 2 protected users by requiring them to authenticate their sessions using a trusted phone. Without authentication, users couldn’t invoke sensitive commands such as navigating home, unlocking a smart lock, or making a purchase. But customers didn’t like the added friction.

Now, customers who enroll in Alexa Voice ID can authenticate with their vocal fingerprint and receive responses to smart-home utterances.
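
The gating logic can be pictured as follows. This is a purely hypothetical sketch of the idea; the intent names and session fields do not correspond to any actual Alexa or Echo Frames interface.

```python
# Hypothetical sketch: sensitive requests go through only when the speaker
# has been verified, whether by a trusted paired phone (Gen 2 style) or by a
# Voice ID match (Gen 3 style). None of these names are real Alexa APIs.
from dataclasses import dataclass

SENSITIVE_INTENTS = {"navigate_home", "unlock_smart_lock", "make_purchase"}

@dataclass
class Session:
    phone_authenticated: bool = False   # authenticated via a trusted phone
    voice_id_matched: bool = False      # matched an enrolled Voice ID profile

def handle_request(intent: str, session: Session) -> str:
    if intent in SENSITIVE_INTENTS and not (
        session.phone_authenticated or session.voice_id_matched
    ):
        return "Please authenticate before I can do that."
    return f"OK, handling '{intent}'."

print(handle_request("unlock_smart_lock", Session()))                       # blocked
print(handle_request("unlock_smart_lock", Session(voice_id_matched=True)))  # allowed
```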

“We’re the first on-the-go Alexa device to use Voice ID for privacy authentication,” Slaboski says.

Boosting battery life without cramping style

Gen 3 improves continuous music playback time to six hours, versus the four hours offered by the previous generation of Echo Frames. It also bumps battery life to up to 14 hours of moderate usage spread across playback, talk time, notifications, and Alexa interactions.


The team couldn’t simply slap on a bigger battery without making the Echo Frames look less like normal glasses. And with sound quality high on the priority list as well, the devices were going to need as much juice as ever. The team focused on trimming power use in standby mode, ensuring that the overall battery consumption would go down without weakening the speakers when users needed them.

“Delivering the desired loudness, bass, and audio quality while optimizing for battery life was a careful balance,” says senior product manager Ravi Sanapala. “We need the battery to last throughout as much of the day as possible and for Alexa to be available whenever users need it.”

The architectural changes in speaker placement helped keep power needs low while improving audio. The team also tweaked the placement of the battery itself, distributing its capacity differently than in Gen 2. Sanapala adds that algorithmic changes were key in balancing idle-battery conservation and on-demand device usage.
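
A rough battery budget illustrates why trimming standby power is so effective for a mostly idle, all-day device. Every figure below (capacity, per-mode power draw, and the usage mix) is invented for the example and is not an Echo Frames specification.

```python
# Illustrative battery budget for a "moderate usage" day. All numbers are
# hypothetical and chosen only to show the shape of the trade-off.
BATTERY_MWH = 600            # hypothetical usable capacity, milliwatt-hours

POWER_MW = {                 # hypothetical average draw per mode
    "playback": 100,         # music streaming over Bluetooth
    "talk": 120,             # calls and Alexa interactions
    "standby": 15,           # idle but ready for the wake word
}

# A mixed day: mostly standby with bursts of playback and talk.
usage_hours = {"playback": 2.0, "talk": 1.0, "standby": 11.0}

consumed = sum(POWER_MW[mode] * hours for mode, hours in usage_hours.items())
total_hours = sum(usage_hours.values())
print(f"{total_hours:.0f} h mixed day uses {consumed:.0f} of {BATTERY_MWH} mWh")

# Standby dominates the day, so cutting its draw is high-leverage: in this
# made-up mix, halving standby power saves more energy than a 10% improvement
# in playback efficiency.
savings_standby = 0.5 * POWER_MW["standby"] * usage_hours["standby"]
savings_playback = 0.1 * POWER_MW["playback"] * usage_hours["playback"]
print(f"halve standby: {savings_standby:.0f} mWh vs "
      f"10% better playback: {savings_playback:.0f} mWh")
```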

“We had to collaborate with all of our cross-functional teams to optimize everything,” Sanapala says.

Gen 3 also features an all-new charging stand, designed to be compatible with all frame shapes. It keeps the lenses upright, protecting them from scratches while the frames charge wirelessly.

Making smart eyewear look like eyewear

Making glasses that are suitable for everyday wear has always been a priority. “One of our goals has always been to develop technology that appears when you need it and disappears when you don’t,” says Wang.

Previous models of Echo Frames have come in a single, one-size-fits-all style.

The Echo Frames team consulted with both internal and external eyewear designers to review common and popular styles of frames, and to survey potential customers about their preferences.

“That was a very intentional move,” Wang explains. “We wanted to start simply and learn from customer feedback.”

Gen 2’s flexible spring hinge and adjustable temple tips ensured that the single size fit many different faces. In fact, Wang says, while the goal was to fit around half of all potential users, they’ve found that 85 percent of the adult population can comfortably wear the Gen 2 design.

But with Gen 3, Wang says, the team needed to go beyond designing glasses that looked typical. Customers wanted glasses that looked stylish, too.

The team consulted with both internal and external eyewear designers to review common and popular styles of frames, as well as “edgier” designs, and to survey potential customers about their preferences. After testing options with beta customers, they settled on a variety of styles in various colors that cover a range of aesthetics. They also switched to an acetate material to match the feel of high-end eyewear.

Related content
How a team of designers, scientists, developers, and engineers worked together to create a truly unique device in Echo Show 10.

While each style will still come in a single size, the range of designs will accommodate even more faces than Gen 2, as the collection spans narrow, medium, and wide fits. Each style features adjustable temple tips constructed out of silicone around a lightweight titanium core for better fit. And despite the boost in battery life, the temples of Gen 3 frames have actually been slimmed down. Wang notes that competitive products often place large batteries behind a user’s ears. But presenting Echo Frames users with something that bulky and uncomfortable was never on the table.

“We were working with really heavy constraints,” Wang says. “So we have been very deliberate in making design choices in the service of our customer. That’s challenged us to be innovative and really push the limits of what’s possible in the architecture of our designs.”
