Neural encoding enables more-efficient recovery of lost audio packets

By leveraging neural vocoding, Amazon Chime SDK’s new deep-redundancy (DRED) technology can reconstruct long sequences of lost packets with little bandwidth overhead.

Packet loss is a big problem for real-time voice communication over the Internet. Everyone has been in the situation where the network is becoming unreliable and enough packets are getting lost that it's hard — or impossible — to make out what the other person is saying.

One way to fight packet loss is through redundancy, in which each new packet includes information about prior packets. But existing redundancy schemes either have limited scope — carrying information only about the immediately preceding packet, for instance — or scale inefficiently.

The Deep REDundancy (DRED) technology from the Amazon Chime SDK team significantly improves quality and intelligibility under packet loss by efficiently transmitting large amounts of redundant information. Our approach leverages the ability of neural vocoders to reconstruct informationally rich speech signals from informationally sparse frequency spectrum snapshots, and we use a neural encoder to compress those snapshots still further. With this approach, we are able to load a single packet with information about as many as 50 prior packets (one second of speech) with minimal increase in bandwidth.

We describe our approach in a paper that we will present at this year’s ICASSP.

Redundant audio

All modern codecs (coder/decoders) have so-called packet-loss-concealment (PLC) algorithms that attempt to guess the content of lost packets. Those algorithms work fine for infrequent, short losses, as they can extrapolate phonemes to fill in gaps of a few tens of milliseconds. However, they cannot (and certainly should not try to) predict the next phoneme or word from the conversation. To deal with significantly degraded networks, we need more than just PLC.

One option is the 25-year-old spec for REDundant audio data (often referred to as just RED). Despite its age, RED is still in use today and is one of the few ways of transmitting redundant data for WebRTC, a popular open-source framework for real-time communication over the Web. RED has the advantage of being flexible and simple to use, but it is not very efficient. Transmitting two copies of the audio requires ... twice the bitrate.

The Opus audio codec — which is the default codec for WebRTC — introduced a more efficient scheme for redundancy called low-bit-rate redundancy (LBRR). With LBRR, each new audio packet can include a copy of the previous packet, encoded at a lower bit rate. That has the advantage of lowering the bit rate overhead. Also, because the scheme is deeply integrated into Opus, it can be simpler to use than RED.

That being said, Opus's LBRR is limited to just one frame of redundancy, so it cannot do much in the case of a long burst of lost packets. RED does not have that limitation, but transmitting a large number of copies would be impractical due to the overhead. There is always the risk that the extra redundancy will end up causing congestion and more losses.
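
The difference between one frame of redundancy and deeper redundancy can be sketched with a toy recovery model (the packet layout here is an illustrative simplification, not the actual Opus bitstream):

```python
def recoverable_frames(received, depth):
    """A frame i is recoverable if any of packets i..i+depth arrived,
    since in this model packet j carries copies of frames j-depth..j."""
    n = len(received)
    return [any(received[j] for j in range(i, min(i + depth + 1, n)))
            for i in range(n)]

# Ten frames; frames 3-5 are lost in a burst of three.
received = [True] * 3 + [False] * 3 + [True] * 4

# With one frame of redundancy (LBRR-style, depth=1), only the last
# lost frame is recovered from the copy in the next packet; frames
# 3 and 4 stay lost and must be concealed.
print(recoverable_frames(received, depth=1))

# With depth >= 3, the whole burst is covered.
print(recoverable_frames(received, depth=3))
```

This is exactly why deeper redundancy pays off: a single surviving packet after the burst can repair the entire gap.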

With every new voice packet (blue), Opus’s low-bit-rate-redundancy (LBRR) mechanism includes a compressed copy of the previous packet (green). When three consecutive packets are lost (red x’s), two of them are unrecoverable, and a packet-loss-concealment (PLC) algorithm must fill in the gaps.

Deep REDundancy (DRED)

In the past few years, we have seen neural speech codecs that can produce good quality speech at only a fraction of the bit rate required by traditional speech codecs — typically less than three kilobits per second (3 kb/s). That was unthinkable just a few years ago. But for most real-time-communication applications, neural codecs aren't that useful, because just the packet headers required by the IP/UDP/RTP protocols take up 16 kb/s.

However, for the purpose of transmitting a large amount of redundancy, a neural speech codec can be very useful, and we propose a Deep REDundancy codec that has been specifically designed for that purpose. It has a different set of constraints than a regular speech codec:

  • The redundancy in different packets needs to be independent (that's why we call it redundancy in the first place). However, within each packet, we can use as much prediction and other redundancy elimination as we like since IP packets are all-or-nothing (no corrupted packets).
  • We want to encode meaningful acoustic features rather than abstract (latent) ones to avoid having to standardize more than needed and to leave room for future technology improvements.
  • There is a large degree of overlap between consecutive redundancy packets. The encoder should leverage this overlap and should not need to encode each redundancy packet from scratch. The encoding complexity should remain constant even as we increase the amount of redundancy.
  • Since short bursts are more common than long ones, the redundancy decoder should be able to decode the most recent audio quickly but may take longer to decode older signals.
  • The Opus decoder has to be able to switch between decoding DRED, PLC, LBRR, and regular packets at any time.

Neural vocoders

Let's take a brief detour and discuss neural vocoders. A vocoder is an algorithm that takes in acoustic features that describe the spectrum of a speech signal over a short span of time and generates the corresponding (continuous) speech signal. Vocoders can be used in text-to-speech, where acoustic features are generated from text, and for speech compression, where the encoder transmits acoustic features, and a vocoder generates speech from the features.

Vocoders have been around since the ’70s, but none had ever achieved acceptable speech quality — until neural vocoders like WaveNet came about and changed everything. WaveNet itself was all but impossible to implement in real time (even on a GPU), but it led to lower-complexity neural vocoders, like the LPCNet vocoder we're using here.

Like many (but not all) neural vocoders, LPCNet is autoregressive, in that it produces the audio samples that best fit the previous samples — whether the previous samples are real speech or speech synthesized by LPCNet itself. As we will see below, that property can be very useful.

DRED architecture

The vocoder’s inputs — the acoustic features — don't describe the full speech waveform, but they do describe how the speech sounds to the human ear. That makes them lightweight and predictable and thus ideal for transmitting large amounts of redundancy.

The idea behind DRED is to compress the features as much as possible while ensuring that the recovered speech is still intelligible. When multiple packets go missing, we wait for the first packet to arrive and decode the features it contains. We then send those features to a vocoder — in our case, LPCNet — which re-synthesizes the missing speech for us from the point where the loss occurred. Once the "hole" is filled, we resume with Opus decoding as usual.
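
The recovery flow described above can be sketched as follows; the packet layout and the `lpcnet_synthesize` callable are illustrative stand-ins, not the actual Opus API:

```python
def fill_gap(lost_frames, arrived_packet, lpcnet_synthesize):
    """When a burst of packets is lost, wait for the next packet to
    arrive, pull the DRED features that cover the gap, and hand them to
    the vocoder (LPCNet in DRED) to re-synthesize the missing speech.
    Normal Opus decoding resumes once the hole is filled."""
    features = [arrived_packet["dred"][f] for f in lost_frames
                if f in arrived_packet["dred"]]
    return lpcnet_synthesize(features)
```

Here `arrived_packet["dred"]` is assumed to map frame indices to the compressed acoustic features that packet carries; the key point is that the first packet to arrive after the burst is enough to drive the vocoder over the whole gap.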

Combining the constraints listed earlier leads to the encoder architecture depicted below, which enables efficient encoding of highly redundant acoustic features — so that extended holes can be filled at the decoder.

Every 20 milliseconds, the DRED encoder encodes the last 40 milliseconds of speech. The decoder works backward, as the most recently transmitted audio is usually the most important.

The DRED encoder works as follows. Every 20 milliseconds (ms), it produces a new vector that contains information about the last 40 ms of speech. Given this overlap, we need only half of the vectors to reconstruct the complete speech. To avoid our redundancy’s being itself redundant, in a given 20 ms packet, we include only every other redundancy coding vector, so the redundancy encoded in a given packet covers nonoverlapping segments of the past speech. In terms of the figure above, the signal can be recovered from just the odd/purple blocks or just the even/blue blocks.
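
The every-other-vector selection can be sketched as follows (the frame indexing is illustrative):

```python
def dred_chunks(current_frame, num_chunks):
    """Indices of the redundancy vectors bundled into the packet for
    `current_frame`. Each vector covers 40 ms, but a new one is produced
    every 20 ms, so taking every other vector tiles the past speech
    with nonoverlapping 40 ms chunks."""
    return [current_frame - 2 * k for k in range(num_chunks)
            if current_frame - 2 * k >= 0]

# 25 chunks of 40 ms cover the full second of redundancy
# (50 frames of 20 ms) mentioned in the article.
print(dred_chunks(50, 25))   # [50, 48, 46, ..., 2]
```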

The degree of redundancy is determined by the number of past chunks included in each packet; each chunk included in the redundancy coding corresponds to 40 ms of speech that can be recovered. Furthermore, rather than representing each chunk independently, the encoder takes advantage of the correlation between successive chunks and extracts a sort of interchunk difference to encode.

For decoding, to be able to synthesize the whole sequence, all we need is a starting point. But rather than decoding forward in time, as would be intuitive, we choose an initial state that corresponds to the most recent chunk; from there, we decode going backward in time. That means we can get quickly to the most recent audio, which is more likely to be useful. It also means that we can transmit as much — or as little — redundancy as we want just by choosing how many chunks to include in a packet.
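
The backward decoding order can be sketched with hypothetical scalar states and inter-chunk differences (the real coder works on quantized latent vectors, not scalars):

```python
def decode_backward(initial_state, diffs, chunks_needed):
    """Start from the most recent chunk's state (the packet's initial
    state) and apply the inter-chunk differences backward in time,
    stopping as soon as enough chunks are recovered to fill the burst.
    Short bursts therefore decode fast; only long ones walk further back."""
    states = [initial_state]
    for d in diffs[:chunks_needed - 1]:
        states.append(states[-1] - d)   # step one chunk back in time
    return states  # newest chunk first

# State 5.0 now; differences of +1.0 per chunk going forward in time.
print(decode_backward(5.0, [1.0, 1.0, 1.0, 1.0], chunks_needed=3))
# → [5.0, 4.0, 3.0]
```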

Rate-distortion-optimized variational autoencoder

Now let's get into the details of how we minimize the bit rate to code our redundancy. Here we turn to a widely used method in the video coding world, rate-distortion optimization (RDO), which means trying to simultaneously reduce the bit rate and the distortion we cause to the speech. In a regular autoencoder, we train an encoder to find a simple — typically, low-dimensional — vector representation of an input that can then be decoded back to something close to the original.

In our rate-distortion-optimized variational autoencoder (RDO-VAE), instead of imposing a limit on the dimensionality of the representation, we directly limit the number of bits required to code that representation. We can estimate the actual rate (in bits) required to code the latent representation, assuming entropy coding of a quantized Laplace distribution. As a result, not only do we automatically optimize the feature representation, but the training process automatically discards any useless dimensions by setting them to zero. We don't need to manually choose the number of dimensions.
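
The rate estimate for one quantized symbol under a Laplace prior can be written out directly; this is a minimal sketch of the kind of proxy such training uses, not DRED's actual entropy model:

```python
import math

def laplace_rate_bits(q, b):
    """Estimated bits to entropy-code integer symbol q under a Laplace
    prior with scale b: -log2 of the probability mass that integer
    quantization assigns to q (the integral of the Laplace density
    over [q - 0.5, q + 0.5])."""
    if q == 0:
        p = 1.0 - math.exp(-0.5 / b)
    else:
        p = math.exp(-abs(q) / b) * math.sinh(0.5 / b)
    return -math.log2(p)

# Symbols near zero are cheap, and large deviations cost more bits,
# which is what pushes useless latent dimensions toward zero in training.
for q in (0, 1, 4):
    print(q, round(laplace_rate_bits(q, b=1.0), 2))
```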

Moreover, by varying the rate-distortion trade-off, we can train a rate-controllable quantizer. That allows us to use better quality for the most recent speech (which is more likely to be used) and a lower quality for older speech that would be used only for a long burst of loss. In the end, we use an average bit rate of around 500 bits/second (0.5 kb/s) and still have enough information to reconstruct intelligible speech.

Once we include DRED, this is what the packet loss scenario described above would look like:

With LBRR, each new packet (blue) includes a compressed copy of the previous packet (green); with DRED, it includes highly compressed versions of up to 50 prior packets (red). In this case, DRED's redundancy is set at 140 ms (seven packets).

Although it is illustrated for just 70 milliseconds of redundancy, we scale this up to one full second of redundancy contained in each 20-millisecond packet. That's 50 copies of the information being sent, on the assumption that at least one will make it to its destination and enable reconstruction of the original speech.
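
A quick back-of-the-envelope comparison, using the bit rates quoted elsewhere in the article, shows why RED-style full copies are a non-starter at this depth:

```python
frame_ms = 20
opus_kbps = 24                  # base Opus wideband rate from the results section
copies = 1000 // frame_ms       # one second of redundancy = 50 prior packets

# RED-style redundancy would retransmit 50 full copies per packet.
naive_red_kbps = copies * opus_kbps

# DRED's measured average for the same one-second coverage.
dred_kbps = 0.5

print(copies, naive_red_kbps, dred_kbps)  # 50 copies: 1200 kb/s vs. 0.5 kb/s
```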

Revisiting packet loss concealment

So what happens when we lose a packet and don't have any DRED data for it? We still need to play out something — and ideally not zeros. In that case, we can just guess. Over a short period of time, we can still predict acoustic features reasonably well and then ask LPCNet to fill in the missing audio based on those features. That is essentially what PLC does, and doing it with a neural vocoder like LPCNet works better than using traditional PLC algorithms like the one that's currently integrated into Opus. In fact, our neural PLC algorithm recently placed second in the Interspeech 2022 Audio Deep Packet Loss Concealment Challenge.

Results

How much does DRED improve speech quality and intelligibility under lossy network conditions? Let's start with a clip compressed with Opus wideband at 24 kb/s, plus 16 kb/s of LBRR redundancy (40 kb/s total). This is what we get without loss:

Clean audio

To show what happens in lossy conditions, let's use a particularly difficult — but real — loss sequence taken from the PLC Challenge. If we use the standard Opus redundancy (LBRR) and PLC, the resulting audio is missing large chunks that just cannot be filled:

Lossy audio with LBRR and PLC

If we add our DRED coding with one full second of redundancy included in each packet, at a cost of about 32 kb/s, the missing speech can be entirely recovered:

Lossy audio with DRED
Overall results of DRED's evaluation on the full dataset for the original PLC Challenge, using mean opinion score (MOS).

The example above is based on just one speech sequence, but we evaluated DRED on the full dataset for the original PLC Challenge, using mean opinion score (MOS) to aggregate the judgments of human reviewers. The results show that DRED alone (no LBRR) can reduce the impact of packet loss by about half even compared to our previous neural PLC. Also interesting is the fact that LBRR still provides a benefit even when DRED is used. With both LBRR and DRED, the impact of packet loss becomes very small, with just a 0.1 MOS degradation compared to the original, uncompressed speech.

This work is only one example of how Amazon is contributing to improving Opus. Our open-source neural PLC and DRED implementations are available on this development branch, and we welcome feedback and outside collaboration. We are also engaging with the IETF with the goal of updating the Opus standard in a fully compatible way. Our two Internet drafts (draft 1 | draft 2) offer more details on what we are proposing.

Key job responsibilities - Use AI, NLP and advances in LLMs/SLMs and agentic systems to create scalable solutions for business problems. - Efficiently Crawl web, Automate extraction of relevant information from large amounts of Visually Rich Documents and optimize key processes. - Design, develop, evaluate and deploy, innovative and highly scalable ML models, esp. leveraging latest advances in RL-based fine tuning methods like DPO, GRPO etc. - Work closely with software engineering teams to drive real-time model implementations. - Establish scalable, efficient, automated processes for large scale model development, model validation and model maintenance. - Lead projects and mentor other scientists, engineers in the use of ML techniques. - Publish innovation in research forums.
US, WA, Seattle
Prime Video is a first-stop entertainment destination offering customers a vast collection of premium programming in one app available across thousands of devices. Prime members can customize their viewing experience and find their favorite movies, series, documentaries, and live sports – including Amazon MGM Studios-produced series and movies; licensed fan favorites; and programming from Prime Video add-on subscriptions such as Apple TV+, Max, Crunchyroll and MGM+. All customers, regardless of whether they have a Prime membership or not, can rent or buy titles via the Prime Video Store, and can enjoy even more content for free with ads. Are you interested in shaping the future of entertainment? Prime Video's technology teams are creating best-in-class digital video experience. As a Prime Video technologist, you’ll have end-to-end ownership of the product, user experience, design, and technology required to deliver state-of-the-art experiences for our customers. You’ll get to work on projects that are fast-paced, challenging, and varied. You’ll also be able to experiment with new possibilities, take risks, and collaborate with remarkable people. We’ll look for you to bring your diverse perspectives, ideas, and skill-sets to make Prime Video even better for our customers. With global opportunities for talented technologists, you can decide where a career Prime Video Tech takes you! As an Applied Scientist in the Prime Video Playback Intelligence Organization, you will have deep subject matter expertise in applied machine learning and data science, with specializations in video streaming optimization, information retrieval, anomaly detection and root-causing systems, large language models and generative AI across various modalities. 
Key job responsibilities - Work with multiple teams of scientists, engineers, and product managers to translate business and functional requirements into concrete deliverables leading strategic efforts to enhance customer quality of experiences. - Work on problems spaces such as: improving the customer playback quality of experience across Video on Demand, Live Events and Linear Content. - Reduce the time/cost/effort to optimize the customer experience as well as detect, root-cause, and mitigate defects in the customer experience. You’ll seek to understand the depth and nuance of streaming video at scale and identify opportunities to grow our business and improve customer quality of experience via principled ML/AI solutions. - Lead integration of new algorithms and processes into existing modeling stacks, simplify and streamline the existing modeling stacks, and develop testing and evaluation strategies. Ultimately, you'll work backwards from the desired outcomes and lead the way on determining the ideal solution (statistical techniques, traditional ML, GenAI, etc). A day in the life We love solving challenging and hard problems in our quest to innovate on behalf of our customers and provide the best video streaming experience. We push the boundaries to leverage and invent technologies which help create unrivaled experiences for our customers to help us move fast in a growing and changing environment. We use data to guide our decisions, work closely with our engineering and product counterparts, and partner with other Science teams as well as academic institutions to learn and guide in an environment of innovation.
BR, SP, Sao Paulo
Do you like working on projects that are highly visible and are tied closely to Amazon’s growth? Are you seeking an environment where you can drive innovation leveraging the scalability and innovation with Amazon's AWS cloud services? The Amazon International Technology Team is hiring Applied Scientists to work in our Machine Learning team in Mexico City. The Intech team builds International extensions and new features of the Amazon.com web site for individual countries and creates systems to support Amazon operations. We have already worked in Germany, France, UK, India, China, Italy, Brazil and more. Key job responsibilities About you You want to make changes that help millions of customers. You don’t want to make something 10% better as a part of an enormous team. Rather, you want to innovate with a small community of passionate peers. You have experience in analytics, machine learning, LLMs and Agentic AI, and a desire to learn more about these subjects. You want a trusted role in strategy and product design. You put the customer first in your thinking. You have great problem solving skills. You research the latest data technologies and use them to help you innovate and keep costs low. You have great judgment and communication skills, and a history of delivering results. Your Responsibilities - Define and own complex machine learning solutions in the consumer space, including targeting, measurement, creative optimization, and multivariate testing. - Design, implement, and evolve Agentic AI systems that can autonomously perceive their environment, reason about context, and take actions across business workflows—while ensuring human-in-the-loop oversight for high-stakes decisions. - Influence the broader team's approach to integrating machine learning into business workflows. - Advise leadership, both tech and non-tech. - Support technical trade-offs between short-term needs and long-term goals.