Alexa speech science developments at Interspeech 2022

Research from Alexa Speech covers a range of topics related to end-to-end neural speech recognition and fairness.

Interspeech, the world’s largest and most comprehensive conference on the science and technology of spoken-language processing, took place this week in Incheon, Korea, with Amazon as a platinum sponsor. Amazon Science asked three of Alexa AI’s leading scientists — in the fields of speech, spoken-language understanding, and text-to-speech — to highlight some of Amazon’s contributions to the conference.

In this installment, senior principal scientist Andreas Stolcke selects papers from Alexa AI’s speech science organization, focusing on two overarching themes in recent research on speech-enabled AI: end-to-end neural speech recognition and fairness.

End-to-end neural speech recognition

Traditionally, speech recognition systems have included components specialized for different aspects of linguistic knowledge: acoustic models to capture the correspondence between speech sounds and acoustic waveforms (phonetics), pronunciation models to map those sounds to words, and language models (LMs) to capture higher-order properties such as syntax, semantics, and dialogue context.

All these models are trained on separate data and combined using graph and search algorithms, to infer the most probable sequence of words corresponding to acoustic input. The latest versions of these systems employ neural networks for individual components, typically in the acoustic and language models, while still relying on non-neural methods for model integration; they are therefore known as “hybrid” automatic-speech-recognition (ASR) systems.

While the hybrid ASR approach is structured and modular, it also makes it hard to model the ways in which acoustic, phonetic, and word-level representations interact and to optimize the recognition system end to end. For these reasons, much recent research in ASR has focused on so-called end-to-end or all-neural recognition systems, which infer a sequence of words directly from acoustic inputs.

End-to-end ASR systems use deep multilayered neural architectures that can be optimized end to end for recognition accuracy. While they do require large amounts of data and computation for training, once trained, they offer a simplified computational architecture for inference, as well as superior performance.

Alexa’s ASR uses end-to-end models as its core algorithm, both in the cloud and on-device. Across the industry and in academic research, end-to-end architectures are still being improved to achieve better accuracy, to reduce computation and latency, and to mitigate the lack of modularity that makes it challenging to inject external (e.g., domain-specific) knowledge at run time.

Alexa AI papers at Interspeech address several open problems in end-to-end ASR, and we summarize a few of those papers here.

In “ConvRNN-T: Convolutional augmented recurrent neural network transducers for streaming speech recognition”, Martin Radfar and coauthors propose a new variant of the popular recurrent-neural-network-transducer (RNN-T) end-to-end neural architecture. One of their goals is to preserve the property of causal processing, meaning that the model output depends only on past and current (but not future) inputs, which enables streaming ASR. At the same time, they want to improve the model’s ability to capture long-term contextual information.

A high-level block diagram of ConvRNN-T.

To achieve both goals, they augment the vanilla RNN-T with two distinct convolutional (CNN) front ends: a standard one for encoding correlations localized in time and a novel “global CNN” encoder that is designed to capture long-term correlations by summarizing activations over the entire utterance up to the current time step (while processing utterances incrementally through time).
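The causality property can be made concrete with a minimal sketch, in which each frame's "global" context vector is a running mean of all activations up to the current time step. This is a deliberate simplification (a running mean rather than the paper's learned global CNN), intended only to illustrate incremental, causal summarization:

```python
import numpy as np

def causal_global_summary(x):
    """For each frame t, summarize all activations up to and including t.

    x: (T, D) array of frame-level activations.
    Returns (T, D) running means: a causal 'global' context vector per frame.
    """
    cumsum = np.cumsum(x, axis=0)
    counts = np.arange(1, x.shape[0] + 1)[:, None]
    return cumsum / counts

# Each output frame depends only on past and current inputs (causality),
# yet carries information about the whole utterance so far.
feats = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
ctx = causal_global_summary(feats)
```

Because the summary is updated frame by frame, it can run in a streaming decoder without waiting for the end of the utterance.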

The authors show that the resulting ConvRNN-T gives superior accuracy compared to other proposed neural streaming ASR architectures, such as the basic RNN-T, Conformer, and ContextNet.

Another concern with end-to-end ASR models is computational efficiency, especially since the unified neural architecture makes these models very attractive for on-device deployment, where compute cycles and (for mobile devices) power are at a premium.

In their paper “Compute cost amortized Transformer for streaming ASR”, Yi Xie and colleagues exploit the intuitive observation that the amount of computation a model performs should vary as a function of the difficulty of the task; for instance, input in which noise or an accent causes ambiguity may require more computation than a clean input with a mainstream accent. (We may think of this as the ASR model “thinking harder” in places where the words are more difficult to discern.)

The researchers achieve this with a very elegant method that leverages the integrated neural structure of the model. Their starting point is a Transformer-based ASR system, consisting of multiple stacked layers of multiheaded self-attention (MHA) and feed-forward neural blocks. In addition, they train “arbitrator” networks that look at the acoustic input (and, optionally, also at intermediate block outputs) to toggle individual components on or off.

Because these component blocks have “skip connections” that combine their outputs with the outputs of earlier layers, they are effectively optional for the overall computation to proceed. A block that is toggled off for a given input frame saves all the computation normally carried out by that block, producing a zero vector output. The following diagram shows the structure of both the elementary Transformer building block and the arbitrator that controls it:

Illustration of the arbitrator and Transformer backbone of each block. The lightweight arbitrator toggles whether to evaluate subcomponents during the forward pass.

The arbitrator networks themselves are small enough that they do not contribute significant additional computation. What makes this scheme workable and effective, however, is that both the Transformer assemblies and the arbitrators that control them can be trained jointly, with dual goals: to perform accurate ASR and to minimize the overall amount of computation. The latter is achieved by adding a term to the training objective function that rewards reducing computation. Dialing a hyperparameter up or down selects the desired balance between accuracy and computation.
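A minimal sketch of the gating-plus-penalty idea, with a toy block body and a parameter count standing in for real compute cost (both are assumptions for illustration, not the paper's actual model):

```python
import numpy as np

def gated_block(x, weights, gate):
    """Skip-connected block that an arbitrator can toggle off.

    With gate == 0 the block body is skipped entirely: its contribution is a
    zero vector, the skip connection passes x through, and no compute is spent.
    """
    if gate == 0:
        return x, 0
    body = np.tanh(x @ weights)   # toy stand-in for the MHA/feed-forward body
    cost = weights.size           # toy proxy for the block's compute cost
    return x + body, cost

def objective(task_loss, total_cost, lam=0.01):
    """Training objective with a compute-penalty term; the hyperparameter
    lam selects the balance between accuracy and computation."""
    return task_loss + lam * total_cost

x = np.ones(4)
W = np.eye(4)
y_on, c_on = gated_block(x, W, gate=1)    # full computation
y_off, c_off = gated_block(x, W, gate=0)  # block skipped, zero cost
```

In the real system both the arbitrators and the Transformer blocks are differentiable, so this joint objective can be optimized end to end; the toy penalty above only illustrates how the accuracy/compute trade-off enters the loss.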

The authors show that their method can achieve a 60% reduction in computation with only a minor (3%) increase in ASR error. Their cost-amortized Transformer proves much more effective than a benchmark method that constrains the model to attend only to sliding windows over the input, which yields only 13% savings while increasing error nearly three times as much.

Finally, in this short review of end-to-end neural ASR advances, we look at ways to recognize speech from more than one speaker, while keeping track of who said what (also known as speaker-attributed ASR).

This has traditionally been done with modular systems that perform ASR and, separately, perform speaker diarization, i.e., labeling stretches of audio according to who is speaking. However, here, too, neural models have recently brought advances and simplification, by integrating these two tasks in a single end-to-end neural model.

In their paper “Separator-transducer-segmenter: Streaming recognition and segmentation of multi-party speech”, Ilya Sklyar and colleagues not only integrate ASR and segmentation-by-speaker but do so while processing inputs incrementally. Streaming multispeaker ASR with low latency is a key technology to enable voice assistants to interact with customers in collaborative settings. Sklyar’s system does this with a generalization of the RNN-T architecture that keeps track of turn-taking between multiple speakers, up to two of whom can be active simultaneously. The researchers’ separator-transducer-segmenter model is depicted below:

Separator-transducer-segmenter. The tokens <sot> and <eot> represent the start of turn and end of turn. Model blocks with the same color have tied parameters, and transcripts in the color-matched boxes belong to the same speaker.

A key element that yields improvements over an earlier approach is the use of dedicated tokens to mark both the starts and the ends of speaker turns, which the authors call “start-pointing” and “end-pointing”. (End-pointing, the prediction of when a talker has finished speaking, is a standard feature of many interactive ASR systems.) Beyond representing the turn-taking structure in this symbolic way, the model is also penalized during training for taking too long to output these markers, improving the latency and temporal accuracy of its outputs.
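As a rough illustration of such a latency penalty (the paper's actual training objective is more involved, and per_frame_cost here is a hypothetical parameter), one can charge the model for every frame a turn token is emitted after its reference position:

```python
def emission_delay_penalty(ref_frames, hyp_frames, per_frame_cost=0.1):
    """Toy penalty: cost grows with how many frames late each turn token
    (<sot> or <eot>) is emitted relative to its reference frame; emitting
    early or on time incurs no penalty in this sketch."""
    return sum(per_frame_cost * max(0, hyp - ref)
               for ref, hyp in zip(ref_frames, hyp_frames))

# <sot> emitted on time, <eot> emitted 5 frames late
penalty = emission_delay_penalty([10, 40], [10, 45])
```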

Fairness in the performance of speech-enabled AI

The second theme we’d like to highlight, and one that is receiving increasing attention in speech and other areas of AI, is performance fairness: the desire to avert large differences in accuracy across different cohorts of users or on content associated with protected groups. As an example, concerns about this type of fairness gained prominence with demonstrations that certain computer vision algorithms performed poorly for certain skin tones, in part due to underrepresentation in the training data.

There’s a similar concern about speech-based AI, with speech properties varying widely as a function of speaker background and environment. A balanced representation in training sets is hard to achieve, since the speakers using commercial products are largely self-selected, and speaker attributes are often unavailable for many reasons, privacy among them. This topic is also the subject of a special session at Interspeech, Inclusive and Fair Speech Technologies, which several Alexa AI scientists are involved in as co-organizers and presenters.

One of the special-session papers, “Reducing geographic disparities in automatic speech recognition via elastic weight consolidation”, by Viet Anh Trinh and colleagues, looks at how geographic location within the U.S. affects ASR accuracy and how models can be adapted to narrow the gap for the worst-performing regions. Here and elsewhere, a two-step approach is used: first, subsets of speakers with higher-than-average error rates are identified; then a mitigation step attempts to improve performance for those cohorts. Trinh et al.’s method identifies the cohorts by partitioning the speakers according to their geographic longitude and latitude, using a decision-tree-like algorithm that maximizes the word-error-rate (WER) differences between resulting regions:

A map of 126 regions identified by the clustering tree. The color does not indicate a specific word error rate (WER), but regions with the same color do have the same WER.
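A sketch of the cohort-identification step, under two simplifying assumptions: a single greedy split rather than the paper's full tree of splits, and invented coordinates and WERs:

```python
import numpy as np

def best_split(coords, wers):
    """Greedy, decision-tree-style split: try thresholds on each coordinate
    (longitude, latitude) and keep the one that maximizes the gap between
    the mean WERs of the two resulting regions."""
    best = (None, None, -1.0)  # (axis, threshold, wer_gap)
    for axis in range(coords.shape[1]):
        for thr in np.unique(coords[:, axis])[:-1]:
            left = wers[coords[:, axis] <= thr]
            right = wers[coords[:, axis] > thr]
            gap = abs(left.mean() - right.mean())
            if gap > best[2]:
                best = (axis, thr, gap)
    return best

# Invented data: two West Coast speakers, two Southeast speakers.
coords = np.array([[-122.0, 47.0], [-122.5, 47.5], [-84.0, 33.0], [-84.5, 33.5]])
wers = np.array([0.08, 0.09, 0.15, 0.16])
axis, thr, gap = best_split(coords, wers)
```

Applying such splits recursively yields a tree whose leaves are the geographic regions ranked for mitigation.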

Next, the regions are ranked by their average WERs; data from the highest-error regions is identified for performance improvement. To achieve that, the researchers use fine-tuning to optimize the model parameters for the targeted regions, while also employing a technique called elastic weight consolidation (EWC) to minimize performance degradation on the remaining regions.

This is important to prevent a phenomenon known as “catastrophic forgetting”, in which neural models degrade substantially on prior training data during fine-tuning. The idea is to quantify the influence that different dimensions of the parameter space have on the overall performance and then avoid large variations along those dimensions when adapting to a data subset. This approach decreases the WER mean, maximum, and variance across regions and even the overall WER (including the regions not fine-tuned on), beating out several baseline methods for model adaptation.
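The EWC penalty itself is compact enough to sketch: a quadratic term, scaled by a Fisher-information estimate, makes movement along parameter dimensions important to the original task expensive (the values below are invented for illustration):

```python
import numpy as np

def ewc_loss(task_loss, theta, theta_old, fisher, lam=1.0):
    """Elastic weight consolidation:
    L = L_task + (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2,
    where F_i estimates how much parameter i mattered on the original data.
    """
    penalty = 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)
    return task_loss + penalty

theta_old = np.array([1.0, 2.0])
fisher = np.array([10.0, 0.1])  # first parameter matters far more
# Moving the "important" parameter is penalized much more heavily.
loss_move_important = ewc_loss(0.0, np.array([1.5, 2.0]), theta_old, fisher)
loss_move_unimportant = ewc_loss(0.0, np.array([1.0, 2.5]), theta_old, fisher)
```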

Pranav Dheram et al., in their paper “Toward fairness in speech recognition: Discovery and mitigation of performance disparities”, look at alternative methods for identifying underperforming speaker cohorts. One approach is to use human-defined geographic regions as given by postal (a.k.a. zip) codes, in combination with demographic information from U.S. census data, to partition U.S. geography.

Zip codes are sorted into binary partitions by majority demographic attributes, so as to maximize WER discrepancies. The partition with higher WER is then targeted for mitigations, an approach similar to that adopted in the Trinh et al. paper. However, this approach is imprecise (since it lumps together speakers by zip code) and limited to available demographic data, so it generalizes poorly to other geographies.

Alternatively, Dheram et al. use speech characteristics learned by a neural speaker identification model to group speakers. These “speaker embedding vectors” are clustered, reflecting the intuition that speakers who sound similar will tend to have similar ASR difficulty.
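This cohort-discovery step can be sketched with a minimal k-means over toy speaker embeddings; the embeddings and per-speaker WERs below are invented, and a production system would use a library implementation rather than this bare-bones loop:

```python
import numpy as np

def kmeans(x, k, iters=20, seed=0):
    """Minimal k-means; clusters speaker embeddings into 'virtual regions'."""
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([x[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Two well-separated groups of "similar-sounding" speakers.
emb = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
wer = np.array([0.07, 0.08, 0.20, 0.22])  # hypothetical per-speaker WERs
labels = kmeans(emb, k=2)

# Rank clusters by mean WER; the hardest cluster is targeted for mitigation.
cluster_wer = {j: wer[labels == j].mean() for j in set(labels)}
hardest = max(cluster_wer, key=cluster_wer.get)
```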

Subsequently, these virtual speaker regions (not individual identities) can be ranked by difficulty and targeted for mitigation, without relying on human labeling, grouping, or self-identification of speakers or attributes. As shown in the table below, the automatic approach identifies a larger gap in ASR accuracy than the “geo-demographic” approach, while at the same time targeting a larger share of speakers for performance mitigation:

Table: WER gap (%) and bottom-cohort share (%) by cohort-discovery method.
The final fairness-themed paper we highlight explores yet another approach to avoiding performance disparities, known as adversarial reweighting (ARW). Instead of relying on explicit partitioning of the input space, this approach assigns continuous weights to the training instances (as a function of input features), with the idea that harder examples get higher weights and thereby exert more influence on the performance optimization.

Second, ARW more tightly interleaves, and iterates, the (now weighted) cohort identification and mitigation steps. Mathematically, this is formalized as a min-max optimization algorithm that alternates between maximizing the error by changing the sample weights (hence “adversarial”) and minimizing the weighted verification error by adjusting the target model parameters.
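A toy rendering of the min-max alternation, assuming a scalar linear model and softmax adversary weights; this is a sketch of the optimization pattern, not the paper's model or loss:

```python
import numpy as np

def adversary_step(sample_losses, temp=2.0):
    """Max step: raise weights on harder (higher-loss) samples via a softmax,
    so they dominate the next model update."""
    z = sample_losses / temp
    w = np.exp(z - z.max())
    return w / w.sum()

def model_step(theta, xs, ys, weights, lr=0.1):
    """Min step: one gradient step on the weighted squared error of a toy
    scalar model y = theta * x."""
    grad = np.sum(weights * 2 * (theta * xs - ys) * xs)
    return theta - lr * grad

# Two "cohorts" pulling the model in different directions; the adversary
# keeps the model from neglecting whichever cohort currently has higher loss.
xs = np.array([1.0, 1.0])
ys = np.array([1.0, 3.0])
theta = 1.0
for _ in range(100):
    losses = (theta * xs - ys) ** 2
    theta = model_step(theta, xs, ys, adversary_step(losses))
```

The alternation settles where the cohorts' losses balance, mirroring the fairness goal of not letting one cohort's error dominate.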

ARW was designed for group fairness in classification and regression tasks that take individual data points as inputs. “Adversarial reweighting for speaker verification fairness”, by Minho Jin et al., looks at how the concept can be applied to a classification task that depends on pairs of input samples, i.e., checking whether two speech samples come from the same speaker. Solving this problem could help make a voice-based assistant more reliable at personalization and other functions that require knowing who is speaking.

The authors look at several ways to adapt ARW to learning similarity among speaker embeddings. The method that ultimately worked best assigns each pair of input samples an adversarial weight that is the sum of individual sample weights (thereby reducing the dimensionality of the weight prediction). The individual sample weights are also informed by which region of the speaker embedding space a sample falls into (as determined by unsupervised k-means clustering, the same technique used in Dheram et al.’s automatic cohort-identification method).

Computing adversarial-reweighting (ARW) weights.

We omit the details, but once the pairwise adversarial weights are formalized in this way, we can insert them into the loss function for metric learning, which is the basis for training a speaker verification model. Min-max optimization can then alternate between training the adversary network that predicts the weights and optimizing the speaker embedding extractor that learns speaker similarity.

On a public speaker verification corpus, the resulting system reduced the overall equal-error rate by 7.6%, while also reducing the gap between genders by 17%. It also reduced the error variability across different countries of origin by nearly 10%. Note that, as in the Trinh et al. ASR fairness paper, the fairness mitigation both reduces performance disparities and improves overall accuracy.

This concludes our thematic highlights of Alexa Speech Interspeech papers. Note that Interspeech covers much more than speech and speaker recognition. Please check out companion pieces that feature additional work, drawn from technical areas that are no less essential for a functioning speech-enabled AI assistant: natural-language understanding and speech synthesis.

Key job responsibilities As a Senior Applied Scientist on this team, you will: - Be the technical leader in Machine Learning; lead efforts within this team and across other teams. - Perform hands-on analysis and modeling of enormous data sets to develop insights that increase traffic monetization and merchandise sales, without compromising the shopper experience. - Drive end-to-end Machine Learning projects that have a high degree of ambiguity, scale, complexity. - Build machine learning models, perform proof-of-concept, experiment, optimize, and deploy your models into production; work closely with software engineers to assist in productionizing your ML models. - Run A/B experiments, gather data, and perform statistical analysis. - Establish scalable, efficient, automated processes for large-scale data analysis, machine-learning model development, model validation and serving. - Research new and innovative machine learning approaches. - Recruit Applied Scientists to the team and provide mentorship. About the team Amazon is investing heavily in building a world-class advertising business. This team defines and delivers a collection of advertising products that drive discovery and sales. Our solutions generate billions in revenue and drive long-term growth for Amazon’s Retail and Marketplace businesses. We deliver billions of ad impressions, millions of clicks daily, and break fresh ground to create world-class products. We are a highly motivated, collaborative, and fun-loving team with an entrepreneurial spirit - with a broad mandate to experiment and innovate. You will invent new experiences and influence customer-facing shopping experiences to help suppliers grow their retail business and the auction dynamics that leverage native advertising; this is your opportunity to work within the fastest-growing businesses across all of Amazon! 
Define a long-term science vision for our advertising business, driven from our customers' needs, translating that direction into specific plans for research and applied scientists, as well as engineering and product teams. This role combines science leadership, organizational ability, technical strength, product focus, and business understanding. We are open to hiring candidates to work out of one of the following locations: Arlington, VA, USA
US, WA, Seattle
Amazon Advertising Impact Team is looking for a Senior Economist to help translate cutting-edge causal inference and machine learning research into production solutions. The individual will have the opportunity to shape the technical and strategic vision of a highly ambiguous problem space, and deliver measurable business impacts via cross-team and cross-functional collaboration. Amazon is investing heavily in building a world class advertising business. Our advertising products are strategically important to Amazon’s Retail and Marketplace businesses for driving long-term growth. The mission of the Advertising Impact Team is to make our advertising products the most customer-centric in the world. We specialize in measuring and modeling the short- and long-term customer behavior in relation to advertising, using state of the art econometrics and machine learning techniques. With a broad mandate to experiment and innovate, we are constantly advancing our experimentation methodology and infrastructure to accelerate learning and scale impacts. We are highly motivated, collaborative and fun-loving with an entrepreneurial spirit and bias for action. Key job responsibilities • Function as a technical leader to shape the strategic vision and the science roadmap of a highly ambiguous problem space • Develop economic theory and deliver econometrics and machine learning models to optimize advertising strategies on behalf of our customers • Design, execute, and analyze experiments to verify the efficacy of different scientific solutions in production • Partner with cross-team technical contributors (scientists, software engineers, product managers) to implement the solution in production • Write effective business narratives and scientific papers to communicate to both business and technical audience, including the most senior leaders of the company We are open to hiring candidates to work out of one of the following locations: New York, NY, USA | Seattle, WA, USA