Neural encoding enables more-efficient recovery of lost audio packets

By leveraging neural vocoding, Amazon Chime SDK’s new deep-redundancy (DRED) technology can reconstruct long sequences of lost packets with little bandwidth overhead.

Packet loss is a big problem for real-time voice communication over the Internet. Everyone has been in the situation where the network is becoming unreliable and enough packets are getting lost that it's hard — or impossible — to make out what the other person is saying.

One way to fight packet loss is through redundancy, in which each new packet includes information about prior packets. But existing redundancy schemes either have limited scope — carrying information only about the immediately preceding packet, for instance — or scale inefficiently.

The Deep REDundancy (DRED) technology from the Amazon Chime SDK team significantly improves quality and intelligibility under packet loss by efficiently transmitting large amounts of redundant information. Our approach leverages the ability of neural vocoders to reconstruct informationally rich speech signals from informationally sparse frequency spectrum snapshots, and we use a neural encoder to compress those snapshots still further. With this approach, we are able to load a single packet with information about as many as 50 prior packets (one second of speech) with minimal increase in bandwidth.

We describe our approach in a paper that we will present at this year’s ICASSP.

Redundant audio

All modern codecs (coder/decoders) have so-called packet-loss-concealment (PLC) algorithms that attempt to guess the content of lost packets. Those algorithms work fine for infrequent, short losses, as they can extrapolate phonemes to fill in gaps of a few tens of milliseconds. However, they cannot (and certainly should not try to) predict the next phoneme or word from the conversation. To deal with significantly degraded networks, we need more than just PLC.


One option is the 25-year-old spec for REDundant audio data (often referred to as just RED). Despite its age, RED is still in use today and is one of the few ways of transmitting redundant data for WebRTC, a popular open-source framework for real-time communication over the Web. RED has the advantage of being flexible and simple to use, but it is not very efficient. Transmitting two copies of the audio requires ... twice the bitrate.

The Opus audio codec — which is the default codec for WebRTC — introduced a more efficient scheme for redundancy called low-bit-rate redundancy (LBRR). With LBRR, each new audio packet can include a copy of the previous packet, encoded at a lower bit rate. That has the advantage of lowering the bit rate overhead. Also, because the scheme is deeply integrated into Opus, it can be simpler to use than RED.

That said, Opus's LBRR is limited to just one frame of redundancy, so it cannot do much in the case of a long burst of lost packets. RED does not have that limitation, but transmitting a large number of copies would be impractical due to the overhead. There is always the risk that the extra redundancy will end up causing congestion and more losses.

With every new voice packet (blue), Opus’s low-bit-rate-redundancy (LBRR) mechanism includes a compressed copy of the previous packet (green). When three consecutive packets are lost (red x’s), two of them are unrecoverable, and a packet-loss-concealment (PLC) algorithm must fill in the gaps.
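
To make the figure's bookkeeping concrete, here is a small, purely illustrative Python sketch (not the Opus implementation) of which frames survive a three-packet burst when every packet carries an LBRR copy of its predecessor:

```python
num_frames = 10
lost = {4, 5, 6}                      # three consecutive packets lost

recovered = set()
for i in range(num_frames):
    if i not in lost:
        recovered.add(i)              # primary payload arrived
        if i - 1 in lost:
            recovered.add(i - 1)      # LBRR copy repairs the previous frame

print(sorted(set(range(num_frames)) - recovered))  # [4, 5] remain for PLC
```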

Deep REDundancy (DRED)

In the past few years, we have seen neural speech codecs that can produce good quality speech at only a fraction of the bit rate required by traditional speech codecs — typically less than three kilobits per second (3 kb/s). That was unthinkable just a few years ago. But for most real-time-communication applications, neural codecs aren't that useful, because just the packet headers required by the IP/UDP/RTP protocols take up 16 kb/s.
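
For context, that figure is just header arithmetic: a 20 ms frame size means 50 packets per second, each carrying roughly 40 bytes of IPv4/UDP/RTP headers before the first byte of audio:

```python
HEADER_BYTES = 20 + 8 + 12                    # IPv4 + UDP + RTP headers
PACKETS_PER_SECOND = 1000 // 20               # one packet per 20 ms frame
print(HEADER_BYTES * 8 * PACKETS_PER_SECOND)  # 16000 bits/s = 16 kb/s
```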

However, for the purpose of transmitting a large amount of redundancy, a neural speech codec can be very useful, and we propose a Deep REDundancy codec that has been specifically designed for that purpose. It has a different set of constraints than a regular speech codec:

  • The redundancy in different packets needs to be independent (that's why we call it redundancy in the first place). However, within each packet, we can use as much prediction and other redundancy elimination as we like since IP packets are all-or-nothing (no corrupted packets).
  • We want to encode meaningful acoustic features rather than abstract (latent) ones to avoid having to standardize more than needed and to leave room for future technology improvements.
  • There is a large degree of overlap between consecutive redundancy packets. The encoder should leverage this overlap and should not need to encode each redundancy packet from scratch. The encoding complexity should remain constant even as we increase the amount of redundancy.
  • Since short bursts are more common than long ones, the redundancy decoder should be able to decode the most recent audio quickly but may take longer to decode older signals.
  • The Opus decoder has to be able to switch between decoding DRED, PLC, LBRR, and regular packets at any time.

Neural vocoders

Let's take a brief detour and discuss neural vocoders. A vocoder is an algorithm that takes in acoustic features that describe the spectrum of a speech signal over a short span of time and generates the corresponding (continuous) speech signal. Vocoders can be used in text-to-speech, where acoustic features are generated from text, and for speech compression, where the encoder transmits acoustic features, and a vocoder generates speech from the features.


Vocoders have been around since the ’70s, but none had ever achieved acceptable speech quality — until neural vocoders like WaveNet came about and changed everything. WaveNet itself was all but impossible to implement in real time (even on a GPU), but it led to lower-complexity neural vocoders, like the LPCNet vocoder we're using here.

Like many (but not all) neural vocoders, LPCNet is autoregressive, in that it produces the audio samples that best fit the previous samples — whether the previous samples are real speech or speech synthesized by LPCNet itself. As we will see below, that property can be very useful.
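
The loop below is a toy illustration of that autoregressive property, not LPCNet itself: `next_sample` stands in for the trained network, and the point is only that generation can be seeded with real samples and then continue from the model's own output.

```python
import numpy as np

def next_sample(history, features):
    # stand-in for a trained network, which would condition on both inputs
    return 0.9 * history[-1] + 0.01 * float(features.sum())

features = np.zeros(20)                  # acoustic features for the frame
history = list(np.random.randn(160))     # seed with 10 ms of real 16 kHz speech
for _ in range(160):                     # then continue from synthesized samples
    history.append(next_sample(history, features))
```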

DRED architecture

The vocoder’s inputs — the acoustic features — don't describe the full speech waveform, but they do describe how the speech sounds to the human ear. That makes them lightweight and predictable and thus ideal for transmitting large amounts of redundancy.

The idea behind DRED is to compress the features as much as possible while ensuring that the recovered speech is still intelligible. When multiple packets go missing, we wait for the first packet to arrive and decode the features it contains. We then send those features to a vocoder — in our case, LPCNet — which re-synthesizes the missing speech for us from the point where the loss occurred. Once the "hole" is filled, we resume with Opus decoding as usual.
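
In pseudocode-like Python, the receiver-side flow looks roughly like the sketch below; `fill_gap`, `vocode`, and `decode` are illustrative stand-ins, not the actual Opus API.

```python
def fill_gap(arrived_seq, last_played_seq, dred_features, vocode, decode):
    gap = arrived_seq - last_played_seq - 1         # frames lost in the burst
    audio = []
    if gap > 0 and dred_features:
        audio.extend(vocode(dred_features[-gap:]))  # re-synthesize the hole
    audio.extend(decode(arrived_seq))               # resume regular decoding
    return audio

# toy stand-ins so the sketch runs end to end
vocode = lambda feats: [f"synth({f})" for f in feats]
decode = lambda seq: [f"opus({seq})"]
print(fill_gap(7, 3, ["f4", "f5", "f6"], vocode, decode))
# ['synth(f4)', 'synth(f5)', 'synth(f6)', 'opus(7)']
```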

Combining the constraints listed earlier leads to the encoder architecture depicted below, which enables efficient encoding of highly redundant acoustic features — so that extended holes can be filled at the decoder.

Every 20 milliseconds, the DRED encoder encodes the last 40 milliseconds of speech. The decoder works backward, as the most recently transmitted audio is usually the most important.

The DRED encoder works as follows. Every 20 milliseconds (ms), it produces a new vector that contains information about the last 40 ms of speech. Given this overlap, we need only half of the vectors to reconstruct the complete speech. To avoid our redundancy’s being itself redundant, in a given 20 ms packet, we include only every other redundancy coding vector, so the redundancy encoded in a given packet covers nonoverlapping segments of the past speech. In terms of the figure above, the signal can be recovered from just the odd/purple blocks or just the even/blue blocks.
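
The odd/even selection is easy to see in code; the indices below are illustrative, with one 40 ms vector produced every 20 ms:

```python
vectors = list(range(10))    # one 40 ms vector produced every 20 ms

packet_a = vectors[-1::-2]   # the newest vector, then every other older one
packet_b = vectors[-2::-2]   # the complementary interleave

print(packet_a)              # [9, 7, 5, 3, 1] covers frames 8-9, 6-7, ..., 0-1
print(packet_b)              # [8, 6, 4, 2, 0]
```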


The degree of redundancy is determined by the number of past chunks included in each packet; each chunk included in the redundancy coding corresponds to 40 ms of speech that can be recovered. Furthermore, rather than representing each chunk independently, the encoder takes advantage of the correlation between successive chunks and extracts a sort of interchunk difference to encode.

For decoding, to be able to synthesize the whole sequence, all we need is a starting point. But rather than decoding forward in time, as would be intuitive, we choose an initial state that corresponds to the most recent chunk; from there, we decode going backward in time. That means we can get quickly to the most recent audio, which is more likely to be useful. It also means that we can transmit as much — or as little — redundancy as we want just by choosing how many chunks to include in a packet.
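
The sketch below caricatures that backward pass under a loudly simplified assumption: the interchunk difference is modeled as a plain additive delta, which is not how the real encoder represents it. The point is the ordering — the newest chunk decodes first, and decoding can stop as soon as the hole is covered.

```python
def decode_backward(initial_state, deltas, chunks_needed):
    state, chunks = initial_state, [initial_state]
    for d in deltas:                 # deltas ordered newest -> oldest
        if len(chunks) == chunks_needed:
            break                    # stop once the hole is covered
        state = state - d            # undo the (toy, additive) difference
        chunks.append(state)
    return chunks[::-1]              # oldest -> newest, ready for playback

print(decode_backward(100, [3, 2, 5, 1], chunks_needed=3))  # [95, 97, 100]
```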

Rate-distortion-optimized variational autoencoder

Now let's get into the details of how we minimize the bit rate to code our redundancy. Here we turn to a method widely used in the video coding world, rate-distortion optimization (RDO), which means trying to simultaneously reduce the bit rate and the distortion we cause to the speech. In a regular autoencoder, we train an encoder to find a simple — typically, low-dimensional — vector representation of an input that can then be decoded back to something close to the original.

In our rate-distortion-optimized variational autoencoder (RDO-VAE), instead of imposing a limit on the dimensionality of the representation, we directly limit the number of bits required to code that representation. We can estimate the actual rate (in bits) required to code the latent representation, assuming entropy coding of a quantized Laplace distribution. As a result, not only do we automatically optimize the feature representation, but the training process automatically discards any useless dimensions by setting them to zero. We don't need to manually choose the number of dimensions.
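
A minimal sketch of such an objective, assuming a mean-squared-error distortion and an idealized entropy coder; `laplace_rate_bits`, `rdo_loss`, and `lam` are illustrative names, not the trained DRED model:

```python
import numpy as np

def laplace_rate_bits(q, b=1.0):
    # ideal code length -log2 p(q) under a Laplace(0, b) prior, up to a
    # constant: latents that training drives to zero become nearly free to code
    return float(np.sum(np.abs(q) / (b * np.log(2))))

def rdo_loss(x, x_hat, q, lam):
    distortion = float(np.mean((x - x_hat) ** 2))
    return distortion + lam * laplace_rate_bits(q)  # lam sets the trade-off

q = np.round(np.random.randn(32) * 2)               # quantized latent vector
x, x_hat = np.random.randn(100), np.random.randn(100)
print(rdo_loss(x, x_hat, q, lam=0.01))
```

Sweeping the trade-off parameter `lam` during training is also what yields the rate-controllable quantizer described next.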

Moreover, by varying the rate-distortion trade-off, we can train a rate-controllable quantizer. That allows us to use better quality for the most recent speech (which is more likely to be used) and a lower quality for older speech that would be used only for a long burst of loss. In the end, we use an average bit rate of around 500 bits/second (0.5 kb/s) and still have enough information to reconstruct intelligible speech.

Once we include DRED, this is what the packet loss scenario described above would look like:

With LBRR, each new packet (blue) includes a compressed copy of the previous packet (green); with DRED, it includes highly compressed versions of up to 50 prior packets (red). In this case, DRED's redundancy is set at 140 ms (seven packets).

Although it is illustrated above for just 140 milliseconds of redundancy (seven packets), we scale this up to one full second of redundancy contained in each 20-millisecond packet. That's 50 copies of the information being sent, on the assumption that at least one will make it to its destination and enable reconstruction of the original speech.

Revisiting packet loss concealment

So what happens when we lose a packet and don't have any DRED data for it? We still need to play out something — and ideally not zeros. In that case, we can just guess. Over a short period of time, we can still predict acoustic features reasonably well and then ask LPCNet to fill in the missing audio based on those features. That is essentially what PLC does, and doing it with a neural vocoder like LPCNet works better than using traditional PLC algorithms like the one that's currently integrated into Opus. In fact, our neural PLC algorithm recently placed second in the Interspeech 2022 Audio Deep Packet Loss Concealment Challenge.
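
Conceptually, feature-domain PLC looks like the sketch below; `predict_features` is a hypothetical stand-in for our trained predictor, which does far better than this fading hold:

```python
import numpy as np

def predict_features(past, decay=0.9):
    # stand-in for a learned predictor: hold the last frame, fading it out
    return decay * past[-1]

past_feats = [np.random.randn(20) for _ in range(5)]  # last received frames
for _ in range(3):                                    # three lost frames
    past_feats.append(predict_features(past_feats))
# a vocoder such as LPCNet then synthesizes audio from past_feats[-3:]
```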

Results

How much does DRED improve speech quality and intelligibility under lossy network conditions? Let's start with a clip compressed with Opus wideband at 24 kb/s, plus 16 kb/s of LBRR redundancy (40 kb/s total). This is what we get without loss:

Clean audio

To show what happens in lossy conditions, let's use a particularly difficult — but real — loss sequence taken from the PLC Challenge. If we use the standard Opus redundancy (LBRR) and PLC, the resulting audio is missing large chunks that just cannot be filled:

Lossy audio with LBRR and PLC

If we add our DRED coding with one full second of redundancy included in each packet, at a cost of about 32 kb/s, the missing speech can be entirely recovered:

Lossy audio with DRED
Overall results of DRED's evaluation on the full dataset for the original PLC Challenge, using mean opinion score (MOS).

The example above is based on just one speech sequence, but we evaluated DRED on the full dataset for the original PLC Challenge, using mean opinion score (MOS) to aggregate the judgments of human reviewers. The results show that DRED alone (no LBRR) can reduce the impact of packet loss by about half even compared to our previous neural PLC. Also interesting is the fact that LBRR still provides a benefit even when DRED is used. With both LBRR and DRED, the impact of packet loss becomes very small, with just a 0.1 MOS degradation compared to the original, uncompressed speech.

This work is only one example of how Amazon is contributing to improving Opus. Our open-source neural PLC and DRED implementations are available on this development branch, and we welcome feedback and outside collaboration. We are also engaging with the IETF with the goal of updating the Opus standard in a fully compatible way. Our two Internet drafts (draft 1 | draft 2) offer more details on what we are proposing.
