Neural encoding enables more-efficient recovery of lost audio packets

By leveraging neural vocoding, Amazon Chime SDK’s new deep-redundancy (DRED) technology can reconstruct long sequences of lost packets with little bandwidth overhead.

Packet loss is a big problem for real-time voice communication over the Internet. Everyone has been in the situation where the network is becoming unreliable and enough packets are getting lost that it's hard — or impossible — to make out what the other person is saying.

One way to fight packet loss is through redundancy, in which each new packet includes information about prior packets. But existing redundancy schemes either have limited scope — carrying information only about the immediately preceding packet, for instance — or scale inefficiently.

The Deep REDundancy (DRED) technology from the Amazon Chime SDK team significantly improves quality and intelligibility under packet loss by efficiently transmitting large amounts of redundant information. Our approach leverages the ability of neural vocoders to reconstruct informationally rich speech signals from informationally sparse frequency spectrum snapshots, and we use a neural encoder to compress those snapshots still further. With this approach, we are able to load a single packet with information about as many as 50 prior packets (one second of speech) with minimal increase in bandwidth.

We describe our approach in a paper that we will present at this year’s ICASSP.

Redundant audio

All modern codecs (coder/decoders) have so-called packet-loss-concealment (PLC) algorithms that attempt to guess the content of lost packets. Those algorithms work fine for infrequent, short losses, as they can extrapolate phonemes to fill in gaps of a few tens of milliseconds. However, they cannot (and certainly should not try to) predict the next phoneme or word from the conversation. To deal with significantly degraded networks, we need more than just PLC.

One option is the 25-year-old spec for REDundant audio data (often referred to as just RED). Despite its age, RED is still in use today and is one of the few ways of transmitting redundant data for WebRTC, a popular open-source framework for real-time communication over the Web. RED has the advantage of being flexible and simple to use, but it is not very efficient. Transmitting two copies of the audio requires ... twice the bitrate.
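
To make "twice the bitrate" concrete, here is a rough back-of-the-envelope sketch (not the actual RED payload format): carrying N extra full-rate copies of each frame multiplies the audio payload by roughly N + 1.

```python
# Rough illustration (not the actual RED payload format): carrying N extra
# full-rate copies of each frame scales the audio payload bit rate by N + 1.

def red_payload_bitrate(base_kbps: float, num_copies: int) -> float:
    """Approximate payload bit rate when every packet also carries
    `num_copies` full copies of earlier frames at the same rate."""
    return base_kbps * (1 + num_copies)

print(red_payload_bitrate(24.0, 1))  # 48.0 kb/s: two copies, twice the rate
print(red_payload_bitrate(24.0, 4))  # 120.0 kb/s: deep redundancy quickly becomes impractical
```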

The Opus audio codec — which is the default codec for WebRTC — introduced a more efficient scheme for redundancy called low-bit-rate redundancy (LBRR). With LBRR, each new audio packet can include a copy of the previous packet, encoded at a lower bit rate. That has the advantage of lowering the bit rate overhead. Also, because the scheme is deeply integrated into Opus, it can be simpler to use than RED.

That being said, Opus's LBRR is limited to just one frame of redundancy, so it cannot do much in the case of a long burst of lost packets. RED does not have that limitation, but transmitting a large number of copies would be impractical due to the overhead. There is always the risk that the extra redundancy will end up causing congestion and more losses.
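
The difference between the two limitations shows up in a small simulation sketch (an illustrative helper, not Opus internals): with a single LBRR frame, a lost packet is recoverable only if the packet immediately after it arrives, so only the last packet of a burst is saved.

```python
# Minimal sketch (not Opus internals): with one frame of LBRR, a lost packet
# n is recoverable only if packet n+1 arrives; the rest of a burst falls
# back to PLC.

def recoverable_with_lbrr(received):
    """received[n] is True if packet n arrived. Returns, per packet,
    'ok', 'lbrr' (recovered from the next packet), or 'plc' (gap)."""
    out = []
    for n, got in enumerate(received):
        if got:
            out.append("ok")
        elif n + 1 < len(received) and received[n + 1]:
            out.append("lbrr")
        else:
            out.append("plc")
    return out

# Three consecutive losses: only the last lost packet is covered by LBRR.
print(recoverable_with_lbrr([True, False, False, False, True]))
# ['ok', 'plc', 'plc', 'lbrr', 'ok']
```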

With every new voice packet (blue), Opus’s low-bit-rate-redundancy (LBRR) mechanism includes a compressed copy of the previous packet (green). When three consecutive packets are lost (red x’s), two of them are unrecoverable, and a packet-loss-concealment (PLC) algorithm must fill in the gaps.

Deep REDundancy (DRED)

In the past few years, we have seen neural speech codecs that can produce good quality speech at only a fraction of the bit rate required by traditional speech codecs — typically less than three kilobits per second (3 kb/s). That was unthinkable just a few years ago. But for most real-time-communication applications, neural codecs aren't that useful, because just the packet headers required by the IP/UDP/RTP protocols take up 16 kb/s.

However, for the purpose of transmitting a large amount of redundancy, a neural speech codec can be very useful, and we propose a Deep REDundancy codec that has been specifically designed for that purpose. It has a different set of constraints than a regular speech codec:

  • The redundancy in different packets needs to be independent (that's why we call it redundancy in the first place). However, within each packet, we can use as much prediction and other redundancy elimination as we like since IP packets are all-or-nothing (no corrupted packets).
  • We want to encode meaningful acoustic features rather than abstract (latent) ones to avoid having to standardize more than needed and to leave room for future technology improvements.
  • There is a large degree of overlap between consecutive redundancy packets. The encoder should leverage this overlap and should not need to encode each redundancy packet from scratch. The encoding complexity should remain constant even as we increase the amount of redundancy.
  • Since short bursts are more common than long ones, the redundancy decoder should be able to decode the most recent audio quickly but may take longer to decode older signals.
  • The Opus decoder has to be able to switch between decoding DRED, PLC, LBRR, and regular packets at any time.

Neural vocoders

Let's take a brief detour and discuss neural vocoders. A vocoder is an algorithm that takes in acoustic features that describe the spectrum of a speech signal over a short span of time and generates the corresponding (continuous) speech signal. Vocoders can be used in text-to-speech, where acoustic features are generated from text, and for speech compression, where the encoder transmits acoustic features, and a vocoder generates speech from the features.

Vocoders have been around since the ’70s, but none had ever achieved acceptable speech quality — until neural vocoders like WaveNet came about and changed everything. WaveNet itself was all but impossible to implement in real time (even on a GPU), but it led to lower-complexity neural vocoders, like the LPCNet vocoder we're using here.

Like many (but not all) neural vocoders, LPCNet is autoregressive, in that it produces the audio samples that best fit the previous samples — whether the previous samples are real speech or speech synthesized by LPCNet itself. As we will see below, that property can be very useful.
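
As a schematic illustration of what "autoregressive" means here (this is not LPCNet itself, and the model call is hypothetical), each output sample is conditioned on the acoustic features for the current frame and on the samples generated so far:

```python
# Schematic autoregressive vocoder loop (not LPCNet itself; `model` is a
# hypothetical callable that predicts the next sample).

def synthesize(features_per_frame, frame_size, model, history):
    """`history` seeds the loop with the most recent samples, which may be
    real decoded speech or the vocoder's own earlier output."""
    out = []
    for features in features_per_frame:
        for _ in range(frame_size):
            sample = model(features, history)
            out.append(sample)
            history = history[1:] + [sample]  # feed the output back in
    return out
```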

DRED architecture

The vocoder’s inputs — the acoustic features — don't describe the full speech waveform, but they do describe how the speech sounds to the human ear. That makes them lightweight and predictable and thus ideal for transmitting large amounts of redundancy.

The idea behind DRED is to compress the features as much as possible while ensuring that the recovered speech is still intelligible. When multiple packets go missing, we wait for the first packet to arrive and decode the features it contains. We then send those features to a vocoder — in our case, LPCNet — which re-synthesizes the missing speech for us from the point where the loss occurred. Once the "hole" is filled, we resume with Opus decoding as usual.
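
In pseudocode form, the decoder-side flow looks roughly like the sketch below (the packet fields and helper functions are illustrative, not the actual Opus/DRED API):

```python
# Conceptual sketch of the recovery flow (illustrative names, not the
# actual Opus/DRED API): when a packet arrives after a burst of losses,
# its redundancy covers the gap, the vocoder fills it in, and normal
# decoding resumes.

def on_packet(packet, state, opus_decode, dred_decode_features, vocoder):
    lost = packet.seq - state.last_seq - 1            # frames we missed
    if lost > 0 and packet.has_dred:
        features = dred_decode_features(packet, num_frames=lost)
        state.output += vocoder(features)             # re-synthesize the hole
    state.output += opus_decode(packet)               # back to normal decoding
    state.last_seq = packet.seq
```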

Combining the constraints listed earlier leads to the encoder architecture depicted below, which enables efficient encoding of highly redundant acoustic features — so that extended holes can be filled at the decoder.

Every 20 milliseconds, the DRED encoder encodes the last 40 milliseconds of speech. The decoder works backward, as the most recently transmitted audio is usually the most important.

The DRED encoder works as follows. Every 20 milliseconds (ms), it produces a new vector that contains information about the last 40 ms of speech. Given this overlap, we need only half of the vectors to reconstruct the complete speech. To avoid our redundancy’s being itself redundant, in a given 20 ms packet, we include only every other redundancy coding vector, so the redundancy encoded in a given packet covers nonoverlapping segments of the past speech. In terms of the figure above, the signal can be recovered from just the odd/purple blocks or just the even/blue blocks.
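
A small sketch of that packing rule (with a hypothetical list of feature vectors, newest last): since each vector covers 40 ms but a new one is produced every 20 ms, taking every other vector tiles the past speech without overlap.

```python
# Sketch of the every-other-vector packing rule described above
# (hypothetical data layout: one feature vector per 20 ms, newest last).

def select_redundancy_vectors(vectors, num_chunks):
    """Take the most recent vector, then step back two at a time so the
    selected 40 ms chunks cover the past without overlapping."""
    selected = vectors[-1::-2][:num_chunks]   # newest, newest-2, newest-4, ...
    return list(reversed(selected))           # oldest-first for encoding
```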

The degree of redundancy is determined by the number of past chunks included in each packet; each chunk included in the redundancy coding corresponds to 40 ms of speech that can be recovered. Furthermore, rather than representing each chunk independently, the encoder takes advantage of the correlation between successive chunks and extracts a sort of interchunk difference to encode.
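
To illustrate the idea (the real encoder learns this prediction, so the plain subtraction below is only a stand-in), coding the difference between successive chunks is much cheaper than coding each chunk from scratch:

```python
# Illustration of interchunk difference coding (the actual encoder uses a
# learned prediction; plain subtraction is just a stand-in).

import numpy as np

def delta_encode(chunks):
    """chunks: list of per-chunk feature vectors (NumPy arrays), oldest-first."""
    deltas = [chunks[0]]                 # first chunk coded directly
    for prev, cur in zip(chunks, chunks[1:]):
        deltas.append(cur - prev)        # successive chunks are similar,
    return deltas                        # so the differences are cheap to code

def delta_decode(deltas):
    chunks = [deltas[0]]
    for d in deltas[1:]:
        chunks.append(chunks[-1] + d)
    return chunks
```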

For decoding, to be able to synthesize the whole sequence, all we need is a starting point. But rather than decoding forward in time, as would be intuitive, we choose an initial state that corresponds to the most recent chunk; from there, we decode going backward in time. That means we can get quickly to the most recent audio, which is more likely to be useful. It also means that we can transmit as much — or as little — redundancy as we want just by choosing how many chunks to include in a packet.
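
Sketched out (with a hypothetical `step_back` interface), backward decoding starts from the transmitted state for the most recent chunk and takes as many steps back in time as the packet's redundancy allows or the application needs:

```python
# Sketch of backward-in-time decoding (hypothetical `step_back` interface):
# the newest audio comes out first, and the amount of redundancy actually
# used is just the number of steps we choose to take.

def decode_backward(initial_state, packet, step_back, num_chunks):
    state, chunks = initial_state, []
    for i in range(num_chunks):
        features, state = step_back(state, packet, i)  # one chunk older per step
        chunks.append(features)
    return list(reversed(chunks))       # reorder oldest-first for synthesis
```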

Rate-distortion-optimized variational autoencoder

Now let's get into the details of how we minimize the bit rate to code our redundancy. Here we turn to a widely used method in the video coding world, rate-distortion optimization (RDO), which means trying to simultaneously reduce the bit rate and the distortion we cause to the speech. In a regular autoencoder, we train an encoder to find a simple — typically, low-dimensional — vector representation of an input that can then be decoded back to something close to the original.

In our rate-distortion-optimized variational autoencoder (RDO-VAE), instead of imposing a limit on the dimensionality of the representation, we directly limit the number of bits required to code that representation. We can estimate the actual rate (in bits) required to code the latent representation, assuming entropy coding of a quantized Laplace distribution. As a result, not only do we automatically optimize the feature representation, but the training process automatically discards any useless dimensions by setting them to zero. We don't need to manually choose the number of dimensions.
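
A minimal sketch of that training objective, assuming a PyTorch setup and simplified relative to the actual RDO-VAE (real training would also need a differentiable stand-in for the hard rounding): distortion is the feature reconstruction error, and the rate term estimates the bits needed to entropy-code the quantized latents under a Laplace prior.

```python
# Simplified rate-distortion objective (a sketch, not the paper's exact
# formulation; a real setup would replace hard round() with a
# straight-through or noise-based surrogate during training).

import torch

def laplace_rate_bits(latent, scale):
    """Estimated bits to code round(latent) under a Laplace(0, scale) prior:
    -log2 of the probability mass on each quantization interval."""
    q = torch.round(latent)
    cdf = lambda x: 0.5 + 0.5 * torch.sign(x) * (1 - torch.exp(-x.abs() / scale))
    p = (cdf(q + 0.5) - cdf(q - 0.5)).clamp_min(1e-9)
    return -torch.log2(p).sum()

def rdo_loss(features, reconstructed, latent, scale, lam):
    distortion = torch.mean((features - reconstructed) ** 2)
    rate = laplace_rate_bits(latent, scale)
    return distortion + lam * rate   # useless latent dims collapse to zero
```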

Moreover, by varying the rate-distortion trade-off, we can train a rate-controllable quantizer. That allows us to use better quality for the most recent speech (which is more likely to be used) and a lower quality for older speech that would be used only for a long burst of loss. In the end, we use an average bit rate of around 500 bits/second (0.5 kb/s) and still have enough information to reconstruct intelligible speech.
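
The exact schedule is a tuning choice, but the idea can be sketched as a per-chunk trade-off parameter that grows with the age of the chunk, so older chunks are quantized more coarsely (the numbers below are made up for illustration):

```python
# Illustrative only: a rate-distortion trade-off that grows with chunk age,
# so the most recent speech gets more bits (the actual schedule in DRED is
# a design choice not reproduced here).

def rate_tradeoff(chunk_age, base_lambda=1.0, growth=1.3):
    """chunk_age = 0 for the most recent 40 ms chunk."""
    return base_lambda * growth ** chunk_age   # larger lambda -> fewer bits
```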

Once we include DRED, this is what the packet loss scenario described above would look like:

With LBRR, each new packet (blue) includes a compressed copy of the previous packet (green); with DRED, it includes highly compressed versions of up to 50 prior packets (red). In this case, DRED's redundancy is set at 140 ms (seven packets).

Although it is illustrated for just 140 milliseconds of redundancy, we scale this up to one full second of redundancy contained in each 20-millisecond packet. That's 50 copies of the information being sent, on the assumption that at least one will make it to its destination and enable reconstruction of the original speech.

Revisiting packet loss concealment

So what happens when we lose a packet and don't have any DRED data for it? We still need to play out something — and ideally not zeros. In that case, we can just guess. Over a short period of time, we can still predict acoustic features reasonably well and then ask LPCNet to fill in the missing audio based on those features. That is essentially what PLC does, and doing it with a neural vocoder like LPCNet works better than using traditional PLC algorithms like the one that's currently integrated into Opus. In fact, our neural PLC algorithm recently placed second in the Interspeech 2022 Audio Deep Packet Loss Concealment Challenge.
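
Conceptually, the feature-domain PLC looks like the sketch below (the predictor and vocoder interfaces are hypothetical): predict plausible acoustic features from the recent past and let the vocoder turn them into audio.

```python
# Conceptual sketch of feature-domain PLC (hypothetical interfaces).

def conceal_lost_frame(recent_features, predict_features, vocoder):
    """`predict_features` extrapolates the next feature vector from recent
    ones (in our case a neural model); `vocoder` turns features into samples."""
    guessed = predict_features(recent_features)    # reliable for short gaps only
    samples = vocoder([guessed])
    return samples, recent_features + [guessed]    # keep history for the next gap
```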

Results

How much does DRED improve speech quality and intelligibility under lossy network conditions? Let's start with a clip compressed with Opus wideband at 24 kb/s, plus 16 kb/s of LBRR redundancy (40 kb/s total). This is what we get without loss:

Clean audio

To show what happens in lossy conditions, let's use a particularly difficult — but real — loss sequence taken from the PLC Challenge. If we use the standard Opus redundancy (LBRR) and PLC, the resulting audio is missing large chunks that just cannot be filled:

Lossy audio with LBRR and PLC

If we add our DRED coding with one full second of redundancy included in each packet, at a cost of about 32 kb/s, the missing speech can be entirely recovered:

Lossy audio with DRED
Overall results of DRED's evaluation on the full dataset for the original PLC Challenge, using mean opinion score (MOS).

The example above is based on just one speech sequence, but we evaluated DRED on the full dataset for the original PLC Challenge, using mean opinion score (MOS) to aggregate the judgments of human reviewers. The results show that DRED alone (no LBRR) can reduce the impact of packet loss by about half even compared to our previous neural PLC. Also interesting is the fact that LBRR still provides a benefit even when DRED is used. With both LBRR and DRED, the impact of packet loss becomes very small, with just a 0.1 MOS degradation compared to the original, uncompressed speech.

This work is only one example of how Amazon is contributing to improving Opus. Our open-source neural PLC and DRED implementations are available on this development branch, and we welcome feedback and outside collaboration. We are also engaging with the IETF with the goal of updating the Opus standard in a fully compatible way. Our two Internet drafts (draft 1 | draft 2) offer more details on what we are proposing.
