How Amazon Chime's noise cancellation works

Combining classic signal processing with deep learning makes the method efficient enough to run on a phone.

PercepNet is one of the core technologies of Amazon Chime's Voice Focus feature. It is designed to suppress noise and reverberation in the speech signal, in real time, without using too many CPU cycles. This makes it usable in cellphones and other power-constrained devices. 

At Interspeech 2020, PercepNet finished second in its category (real-time processing) in the Deep Noise Suppression Challenge, despite using only 4% of a CPU core, while another Amazon Chime algorithm, PoCoNet, finished first in the offline-processing category. In this post, we'll look into the principles that make PercepNet work. For more details, you can also refer to our Interspeech paper.

Despite operating in real time, with low complexity, PercepNet can still provide state-of-the-art speech enhancement. Like most recent speech enhancement algorithms, PercepNet uses deep learning, but it applies it in a different way. Rather than have a deep neural network (DNN) do all the work, PercepNet tries to have it do as little work as possible.

Speech enhancement and STFT

Before getting into any deep learning, let's look at the job we'll be asking our machine learning model to perform. Let's consider a simple synthetic example. We start from the clean speech sample below:

We then add some non-stationary car noise on top of it:

The goal here is to take the noisy audio and make it sound as good as possible — ideally, close to the original clean audio. The standard way to represent the problem — both pre-deep learning and post-deep learning — is to use the short-time Fourier transform (STFT).

That means chopping up the signal into overlapping windows and computing the frequency content for each window. For each window of N samples (N discrete measurements of the signal amplitude), we obtain N/2 spectral magnitudes, along with their associated phases. We will refer to each output point as a frequency bin. Let's see what the magnitude of the STFT looks like for our clean signal (top) and noisy signal (bottom).

percepnet_spectrograms.jpg
The spectrograms above show the frequency content of an audio clip. The horizontal axis is time, the vertical axis is frequency, and the color represents the amount of energy at a particular time, for a particular frequency, using a log scale.
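To make this concrete, here is a minimal NumPy sketch of such an STFT analysis. The 20-millisecond window with 50% overlap matches the framing discussed later in this post, but the Hann window and the 48 kHz sample rate are assumptions for illustration, not necessarily PercepNet's exact setup:

```python
import numpy as np

def stft(signal, sample_rate=48000, window_ms=20):
    """Short-time Fourier transform with 50%-overlapping windows.

    A minimal sketch for illustration; the Hann window and frame size
    are assumptions, not necessarily PercepNet's exact analysis setup.
    """
    frame = int(sample_rate * window_ms / 1000)   # samples per window (960 at 48 kHz)
    hop = frame // 2                              # 50% overlap -> a new window every 10 ms
    window = np.hanning(frame)                    # taper each window to reduce leakage
    spectra = []
    for start in range(0, len(signal) - frame + 1, hop):
        chunk = signal[start:start + frame] * window
        spectra.append(np.fft.rfft(chunk))        # roughly N/2 complex frequency bins
    spectra = np.array(spectra)                   # shape: (num_windows, num_bins)
    return np.abs(spectra), np.angle(spectra)     # magnitudes and phases

# Example: analyze one second of (random) audio at 48 kHz
magnitudes, phases = stft(np.random.randn(48000))
```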

From the noisy STFT, many algorithms try to estimate the clean magnitude of each frequency while retaining the phase — which is much harder to estimate — from the noisy signal. For now, let's assume we have a magic model (an oracle) that's able to do a perfect mapping from noisy spectral magnitudes to clean. This is why we started from a synthetic example, so we can compute the oracle output. Based on oracle magnitudes but using the noisy phase, we can reconstruct the speech signal:

Certainly not bad, but also far from perfect. The noise is still audible as a form of roughness in the speech. This is due to the error in the phase, which we took from the noisy signal. While the ear is essentially insensitive to the absolute phase, what we perceive here is the inconsistency of the phase across frames. In other words, the way in which the phase changes over time still does matter.

Another issue for real-time, power-constrained operation is the number of frequency bins whose amplitudes we need to estimate. Assuming we use 20-millisecond windows, the STFT bins will be spaced 50 Hz apart. If we want to enhance all frequencies up to 20 kHz (the upper limit of human hearing), then our neural network will have to estimate 400 amplitudes, which is very computationally expensive.
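The numbers above follow directly from the window length; a quick back-of-the-envelope check (assuming a 48 kHz sample rate, which is enough to cover the full 20 kHz audible range) looks like this:

```python
sample_rate = 48000                         # Hz (assumed; enough to cover 20 kHz)
window_ms = 20
frame = sample_rate * window_ms // 1000     # 960 samples per window
bin_spacing_hz = 1000 / window_ms           # 50 Hz between STFT bins
bins_up_to_20khz = int(20000 / bin_spacing_hz)
print(bins_up_to_20khz)                     # 400 magnitudes to estimate per window
```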

Where do we go from here? If we want to improve quality, then we could also estimate phase. This is the no-compromise route taken by PoCoNet, which can get around the added complexity because it’s optimized to run on a GPU. For real-time applications on power-constrained devices, however, we can't realistically expect to have a very good phase estimator.

A perceptually relevant representation

If we want good speech quality, and we want our algorithm to run in real time on a CPU without instantly draining the battery, then we need to find a way to simplify the problem. We can do that by making the following assumptions:

  1. the general shape of the speech spectrum (a.k.a. the spectral envelope) is smooth; and 
  2. we perceive it with a nonlinear frequency resolution, corresponding to the human ear’s auditory filters (a.k.a. critical bands).

In other words, (1) the speech spectrum tends not to have sharp discontinuities, and (2) the human auditory system perceives low frequencies with higher resolution than high frequencies.

We can follow both of those assumptions by representing the speech spectrum using bands spaced according to equivalent rectangular bandwidth (ERB). ERB-spaced bands divide the spectrum into bands of increasing width, capturing coarser spectral information as frequency increases, much the way the human auditory system does.

Because multiple STFT bins are assigned to each band, the spectral representation is smoother: any discontinuity in frequency is averaged out.

Nonlinearly spaced bands make our model much simpler. Instead of 400 frequency bins, we need only 34 bands. In practice, we model these bands as overlapping filters, which are most responsive to the frequencies at the centers of the bands (the tips of the triangles below) and decreasingly responsive to frequencies farther from the center (the sides of the triangles; note the 50% overlap between bands):

bands.png
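To make the banding concrete, here is a minimal sketch of how per-band energies could be computed with such overlapping triangular filters. The band-edge indices are assumed inputs (for example, ERB-spaced STFT-bin positions), not PercepNet's exact layout:

```python
import numpy as np

def band_energies(magnitudes, band_edges):
    """Sum per-bin energy into overlapping triangular bands.

    `band_edges` holds the STFT-bin index of each band boundary (e.g.,
    ERB-spaced); band k rises from edge k to edge k+1 and falls back to
    zero at edge k+2, so neighboring bands overlap by 50%. These edges
    are assumed inputs, not PercepNet's exact layout.
    """
    num_bands = len(band_edges) - 2
    energies = np.zeros(num_bands)
    for k in range(num_bands):
        lo, center, hi = band_edges[k], band_edges[k + 1], band_edges[k + 2]
        for b in range(lo, hi):
            if b < center:
                w = (b - lo) / max(center - lo, 1)    # rising slope of the triangle
            else:
                w = (hi - b) / max(hi - center, 1)    # falling slope of the triangle
            energies[k] += w * magnitudes[b] ** 2
    return energies
```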

For each of the bands above, we compute a gain between 0 and 1; then, all we need to do is interpolate those band gains and we're done. Now, let's listen to how this would sound — still using the oracle for band magnitudes:

Our complexity went down, but so did the quality. The roughness we noticed previously is now even more obvious and sounds a bit like heavy distortion. It's not that surprising, since we are still changing only the magnitude spectrum, but with only 34 degrees of freedom rather than 400.
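As an aside, the interpolation step mentioned earlier is straightforward; applying the 34 band gains to an STFT frame might look like the sketch below (linear interpolation between band centers is an assumption, but with triangular bands it amounts to a smooth blend of the two nearest band gains at each bin):

```python
import numpy as np

def apply_band_gains(stft_frame, band_gains, band_centers):
    """Interpolate the 34 band gains to a per-bin gain curve and apply it.

    Linear interpolation between band centers is an assumption here;
    it blends the two nearest band gains at every frequency bin.
    """
    bins = np.arange(len(stft_frame))
    per_bin_gain = np.interp(bins, band_centers, band_gains)
    return stft_frame * per_bin_gain      # scales magnitudes; the noisy phase is kept
```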

So what are we missing here? The missing piece is that the ear doesn't only perceive the spectral envelope of the signal; it also perceives whether the signal is made of tones (voiced sounds), noise (unvoiced sounds), or a mix of the two. Vowels are mostly composed of tones (harmonics) at multiples of a fundamental frequency (the pitch), whereas many consonants (such as the /s/ phoneme) are mostly noise-like. 

Our enhanced speech sounds rough because the tonal vowels contain more noise than they should. To enhance our tones, we can use a time-domain technique called comb filtering. Comb filtering is often an undesired effect in which room reverberation boosts or attenuates frequencies at regular intervals. But by carefully tuning our comb filter to the pitch of the voice we're trying to enhance, we can keep all the tones and remove most of the noise. Below is an example of the frequency response of the comb filter for a pitch of 200 Hz.

pitch.png
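For illustration, a basic time-domain comb filter simply averages the signal with copies of itself shifted by multiples of the pitch period; this is a hedged sketch of the idea, not the exact filter used in PercepNet:

```python
import numpy as np

def comb_filter(x, period, taps=2):
    """Average x with copies of itself shifted by multiples of the pitch period.

    This reinforces the harmonics of 1/period and attenuates the energy
    between them. The unweighted, non-causal average here is a
    simplification; the actual filter uses carefully chosen tap weights.
    """
    y = np.zeros(len(x))
    for k in range(-taps, taps + 1):
        y += np.roll(x, k * period)       # shift by k pitch periods and accumulate
    return y / (2 * taps + 1)

# Example: a 200 Hz pitch at a 48 kHz sample rate repeats every 240 samples
period_samples = 48000 // 200
```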

The pitch is the period at which a periodic signal (nearly) repeats itself. Pitch estimation is a hard problem, especially in the noisy conditions we have here. To estimate the pitch, we try to match a signal with past versions of itself, finding the period T that maximizes the correlation between x(n) and x(n-T). We then use dynamic programming (the Viterbi algorithm) to find a pitch trajectory that is consistent (e.g. no large jumps) over time.
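A minimal sketch of that correlation search (leaving out the Viterbi smoothing) might look like this; the 60–400 Hz search range is an assumed one covering typical speech pitches:

```python
import numpy as np

def estimate_pitch_period(x, sample_rate=48000, fmin=60, fmax=400):
    """Find the lag T that maximizes the normalized correlation of x with x delayed by T.

    The 60-400 Hz search range is an assumption; PercepNet additionally
    smooths the pitch trajectory over time with the Viterbi algorithm,
    which is omitted here.
    """
    best_T, best_score = None, -1.0
    for T in range(sample_rate // fmax, sample_rate // fmin + 1):
        a, b = x[T:], x[:-T]
        score = np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b) + 1e-9)
        if score > best_score:
            best_T, best_score = T, score
    return best_T                         # period in samples; pitch = sample_rate / best_T
```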

Since we often want to retain at least some of the noise, we can simply do a mix between the noisy audio and the comb-filtered audio to get exactly the tone/noise ratio we want. By doing the mixing in the frequency domain, we can control that mix on a band-by-band basis, even though the comb filter is computed in the time domain. The exact ratios (or filtering strengths) to use for the mixing can be adjusted in such a way that the ratio of tones to noise in the output is about the same as it was in the clean speech. This is what our oracle (using the optimal strengths) now sounds like with comb filtering:

There’s still a little roughness, but our quality is already better than that of our spectral-magnitude oracle, despite using far fewer parameters. It now seems that we're as close to the original properties of the speech as we could get with our model. So what else can we do to further improve quality? The answer is simple: we cheat! 

To be more specific, we can cheat the human auditory system a bit by further attenuating the frequency bands that are still too noisy. Our speech will deviate slightly from the correct spectral envelope, but the ear will not notice that too much. It will just notice the noise less. This kind of post-filtering has been used in speech codecs since the 1980s but (as far as we know) not in speech enhancement systems. Adding the post-filter to our oracle gives us the following:

We're now quite close to the perfect clean speech. At this point, our limiting factor will most certainly be the DNN model and not the representation we use. The good thing is that our DNN has to estimate only 34 band gains (between 0 and 1) and 34 comb-filtering strengths (also between 0 and 1). This is much easier than estimating 400 magnitudes/gains — and possibly also 400 phases.
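One simple way to realize such a post-filter is to warp the band gains so that already-low (noisy) gains are pushed a little further toward zero while gains near 1 are left almost untouched. The power-law form below is an illustrative assumption, not the exact rule used by PercepNet:

```python
import numpy as np

def postfilter(band_gains, alpha=1.5):
    """Warp the band gains so that noisy (low-gain) bands are attenuated a bit more.

    The power-law form and the exponent are illustrative assumptions,
    not the exact rule described in the PercepNet paper.
    """
    return np.asarray(band_gains) ** alpha
```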

Adding a DNN

So far, we’ve assumed a perfect model for predicting band gains (the oracle). In practice, we need to use a DNN. But all the work we did in the previous section was meant to make the DNN design as boring as possible.

Since we replaced our initial 400 frequency bins with just 34 bands, there's no reason to use convolutional layers across frequency. Instead, we just go with convolutional layers across time and — most importantly — recurrent layers that provide longer-term memory to the system. We found that simple gated recurrent units (GRUs) work well, but long short-term memory (LSTM) networks would probably have worked as well.
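For concreteness, here is a hedged PyTorch sketch of this kind of architecture. The layer sizes, the feature count, and the exact layout are assumptions for illustration, not the published PercepNet model:

```python
import torch
import torch.nn as nn

class PercepNetLikeModel(nn.Module):
    """A hedged sketch of a conv-plus-GRU enhancement network.

    Layer sizes and counts are assumptions; the real PercepNet
    architecture is described in the Interspeech 2020 paper.
    """

    def __init__(self, num_features=70, num_bands=34, hidden=256):
        super().__init__()
        # Convolution across time only (no convolution across frequency,
        # since the 34 bands already summarize the spectrum).
        self.conv = nn.Sequential(
            nn.Conv1d(num_features, hidden, kernel_size=3, padding=1),
            nn.Tanh(),
        )
        # Recurrent layers provide the longer-term memory.
        self.gru = nn.GRU(hidden, hidden, num_layers=2, batch_first=True)
        # Two output heads, both constrained to [0, 1] by a sigmoid.
        self.gains = nn.Linear(hidden, num_bands)
        self.strengths = nn.Linear(hidden, num_bands)

    def forward(self, features):            # features: (batch, time, num_features)
        x = self.conv(features.transpose(1, 2)).transpose(1, 2)
        x, _ = self.gru(x)
        return torch.sigmoid(self.gains(x)), torch.sigmoid(self.strengths(x))
```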

dnn_model.png
DNN model

In our DNN model, f is an input feature vector that contains all the band-based spectral information we need. The outputs are the band gains gb and the comb-filtering strengths rb. Now all we need to do is train our network using hours of clean speech to which we add various levels of noise and reverberation. Since we have the clean speech, we can compute the optimal (oracle) gains and filtering strengths and use them as training targets. Our complete system using the trained DNN sounds like this:

Obviously, it does not sound as good as the last oracle — no enhancement DNN is perfect — but it's still a big improvement over the noisy input speech. Our Interspeech 2020 Deep Noise Suppression Challenge samples page provides some examples of how PercepNet performs in real conditions.

Using it in real time

The DNN model above contains about eight million weights. For each new window, we use each weight exactly once, which means eight million multiply-add operations per window. With 20-millisecond windows and 50% overlap, we have 100 windows per second of speech, so 800 million multiply-add operations per second. 
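That estimate follows directly from the frame rate; a quick check of the arithmetic:

```python
weights = 8_000_000                    # each weight is used once per window
window_ms, overlap = 20, 0.5
hop_ms = window_ms * (1 - overlap)     # a new window every 10 ms
windows_per_second = 1000 / hop_ms     # 100 windows per second of speech
macs_per_second = weights * windows_per_second
print(f"{macs_per_second:,.0f}")       # 800,000,000 multiply-adds per second
```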

Thankfully, DNNs tend to be quite robust to small perturbations, so we can quantize all our weights to just eight bits with a negligible effect on perceived audio quality. Thanks to SIMD instructions on modern CPUs, this makes it possible to run our network really efficiently. On a modern laptop CPU, it takes less than 5% of one core to run PercepNet in real time.
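As an illustration of the kind of 8-bit weight quantization involved, here is a minimal symmetric-quantization sketch; this is a generic version of the idea, and the exact scheme PercepNet uses may differ (per-layer or per-row scales are common refinements):

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric 8-bit quantization: int8 values plus one float scale per tensor.

    A generic sketch of the idea, not PercepNet's exact scheme.
    """
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale
```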

To be useful in real-time communications applications, PercepNet should not add too much delay. The seemingly arbitrary choice of 20-millisecond windows with 50% overlap means that it consumes audio 10 milliseconds at a time. This is good because most audio codecs (including Opus, which is used in WebRTC) encode audio in 20-millisecond packets. So we can run the algorithm exactly twice per packet without the PercepNet block size causing an increase in delay. 

There are, of course, some delays we cannot avoid. The overlap between windows means that the STFT itself requires 10 milliseconds for reconstruction. On top of that, we typically allow the DNN to look two windows (20 milliseconds) into the future, so it can make better decisions. This gives us a total of 30 milliseconds extra delay from the algorithm, which is acceptable in most scenarios.

If you would like to know more about the details of PercepNet, you can read our Interspeech 2020 paper. The idea behind PercepNet is quite versatile and could be applied to other problems, including acoustic echo control and beamforming post-filtering. In future posts, we will see how we can make PercepNet very efficient on CPUs and even how to run it as WebAssembly (WASM) code inside web browsers for WebRTC-based applications.
