A screen grab from an NFL video shows Packers quarterback Aaron Rodgers preparing to pass the ball.
In January, the National Football League announced its new QB passing score, which addressed the inconsistency across plays, games, weeks, and seasons found in previous scores. A method based on spliced binned-Pareto distributions, developed by Amazon researchers, led to the improved passing metric.

The science behind NFL Next Gen Stats’ new passing metric

Spliced binned-Pareto distributions are flexible enough to handle symmetric, asymmetric, and multimodal distributions, offering a more consistent metric.

When football fans evaluate a player’s performance, they measure the player’s execution of specific plays against an innate sense of the player’s potential. Trying to encode such judgments into machine learning models, however, has proved non-trivial.

Fans and commentators have criticized existing quarterback (QB) passing stats, such as Madden QB, the NFL passer rating, ESPN’s total quarterback rating (QBR), and the Pro Football Focus (PFF) grade, for being calibrated to obsolete data, being unrelated to winning, or scoring players anomalously — as when Kyler Murray received the low Madden QB21 rating of 77 despite being the 2019 Offensive Rookie of the Year.


On January 13, 2022, just before Super Bowl LVI, the NFL announced its new QB passing score, which seeks to improve on its predecessors’ limitations and to isolate a QB’s contributions from those of the team in a completely data-driven way.

The play level

A root problem with existing ratings is their inconsistency across plays, games, weeks, and seasons. We sought a metric that could account for play-specific dynamics and scale to different granularities with consistency.

We wanted to measure the QB’s decision making and pass execution given the game clock and the pressure he was under. For those conditions, we have directly measurable quantities, such as the defense’s movements. But how do we measure how “well” the QB performed? This is a point we address in the next section (“The model architecture”), but for now, we take yards gained as a measurable outcome. (This assumption will prove useful downstream.)

An (x, y)-coordinate representation of the football field.

Since we want to take a data-driven approach, let's look at exactly what the data is.

On each play, we receive updates every 100 milliseconds from radio-frequency ID chips in the players' shoulder pads, giving us all 22 players' positions in the (x, y)-coordinates of the field, along with their speeds, accelerations, running directions, and body orientations, as shown in the image above.

This time series is of variable length, starting with the snap and ending when the QB releases the ball. For example, a QB throwing four seconds after the snap yields a time series of 40 timesteps, whereas a pass that takes just over two seconds yields a time series of 25 timesteps.


The figure below shows how the time series is represented. Each row corresponds to a single timestep and contains eight features (x-position, y-position, x-speed, y-speed, x-acceleration, y-acceleration, direction, and orientation) for each of 22 players, for a matrix of 176 columns and 40 rows. Features such as the number of defenders within a two-yard radius of the target receiver receive additional columns, but we eschew them here to focus on modeling technique.

Matrix representation of the time series of a single play.
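To make this representation concrete, here is a minimal Python sketch of assembling one play's tracking data into such a matrix. The feature names, their ordering, and the `play_matrix` helper are illustrative assumptions, not the NFL's actual schema.

```python
import numpy as np

N_PLAYERS = 22
FEATURES = ["x", "y", "vx", "vy", "ax", "ay", "dir", "orient"]  # 8 per player

def play_matrix(tracking, n_steps):
    """Stack per-player feature series into an (n_steps, 176) matrix.

    tracking[p][f] is a length-n_steps array for player p, feature f.
    """
    mat = np.zeros((n_steps, N_PLAYERS * len(FEATURES)))
    for p in range(N_PLAYERS):
        for f, name in enumerate(FEATURES):
            mat[:, p * len(FEATURES) + f] = tracking[p][name]
    return mat

# A four-second play sampled every 100 ms yields 40 rows:
rng = np.random.default_rng(0)
tracking = [{n: rng.normal(size=40) for n in FEATURES} for _ in range(N_PLAYERS)]
print(play_matrix(tracking, 40).shape)  # (40, 176)
```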

The collection of passing plays from the 2018-2020 seasons provided us with around 34,000 completions, 15,000 incompletions, and 1,200 interceptions, for more than 50,000 plays in total. Feature preprocessing is a memory-intensive job, requiring two hours of runtime on an ml.m5.24xlarge instance. Modeling so large a collection of time series, however, is a high-compute job.

For the model described in the next section, training on a single GPU of a p3.8xlarge instance took eight hours. While the NFL can afford two-hour preprocessing and eight-hour model fitting before the season commences, during live televised games, the inference that returns a QB's score for a play must run in real time; the model described below takes about 0.001 seconds per play.

The model architecture

To learn the temporal complexities within plays’ time series, we opted for a temporal convolutional network (TCN), a convolutional network adapted to handle inputs of different lengths and factor in long-range relationships between sequential inputs.

Since a play also has static attributes — such as down, score, and games remaining in the season — that influence players’ decisions and performance, we concatenate these with the TCN state and pass both to a multilayer perceptron to produce the final output, a probabilistic prediction of yards gained. To that, we compare the play’s actual yards gained.

In our model, players’ time series are encoded by a temporal convolutional network (TCN), concatenated with a play’s static features, and fed to a multilayer perceptron.
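Below is a minimal PyTorch sketch of this architecture, assuming dilated causal convolutions for the TCN; the layer widths, dilation schedule, and number of output parameters are illustrative choices, not the production model's.

```python
import torch
import torch.nn as nn

class PlayModel(nn.Module):
    def __init__(self, n_feats=176, n_static=8, hidden=64, n_params=100):
        super().__init__()
        layers, in_ch = [], n_feats
        for d in (1, 2, 4, 8):  # dilated causal convolutions
            layers += [nn.Conv1d(in_ch, hidden, kernel_size=2,
                                 dilation=d, padding=d), nn.ReLU()]
            in_ch = hidden
        self.tcn = nn.ModuleList(layers)
        self.head = nn.Sequential(  # multilayer perceptron
            nn.Linear(hidden + n_static, hidden), nn.ReLU(),
            nn.Linear(hidden, n_params))  # distribution parameters

    def forward(self, series, static):
        # series: (batch, timesteps, 176); static: (batch, n_static)
        h = series.transpose(1, 2)  # Conv1d wants (batch, channels, time)
        for layer in self.tcn:
            h = layer(h)
            if isinstance(layer, nn.Conv1d):
                h = h[..., : series.size(1)]  # trim right pad: causality
        state = h[..., -1]  # TCN state at the moment of ball release
        return self.head(torch.cat([state, static], dim=-1))

model = PlayModel()
print(model(torch.randn(4, 40, 176), torch.randn(4, 8)).shape)  # (4, 100)
```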

Now, the network output is worth careful consideration. Naively, one might want to output a point prediction of the yards gained and train the network with an error loss function. But this fails to achieve the desired goal of measuring the outcome of a play relative to its potential.

An extra two yards gained under easy circumstances is not equivalent to two yards gained under difficult ones, yet both would register the same two-yard mean absolute error (MAE). Instead, we opted for a distributional prediction, in which the network's outputs are parameters that specify a probability distribution.

We thought about which probability density function (PDF) would be most suitable. For certain plays, the PDF of yards gained would need to be asymmetric: on a completed pass, for example, if the QB throws to a receiver already running toward the end zone, positive yards gained are more likely than negative yards. For other plays, the PDF would need to capture symmetry: on an interception, for example, the "negative" yards gained by the defender would balance against the possible positive yards gained by a completion.

There are even plays for which the PDF would be bimodal: if the QB passes to a receiver with only one defender closing in, the likely yards gained lie either in the one-to-two-yard range (if the receiver is tackled) or in the high-yardage range (if the receiver eludes the tackle), but not in between. Other multimodal plays include those in which the QB has to scramble for yards, like the second play in this video.

Yards gained on intercepted versus completed passes.

So we needed a distribution whose parameterization is flexible enough to accommodate multimodality, different symmetries, and light or heavy tails, and whose location and scale can vary with the game clock, current score, and other factors. We can't meet these requirements with distributions like the Gaussian or gamma, but we can meet them with the spliced binned-Pareto distribution.

The spliced binned-Pareto distribution

The spliced binned-Pareto (SBP) distribution arises from a classic result in extreme-value theory (EVT), which states that the distribution of extreme values (i.e., the tail) is almost independent of the base distribution of the data and, as shown below, can be estimated from the datapoints above the assumed upper bound (t) of the base distribution.

The second theorem of EVT states that any such distribution tail can be well approximated by a generalized Pareto distribution (GPD), which has only two parameters, shape (ξ) and scale (β), and closed-form quantiles. The figure below shows the PDF of a GPD for ξ < 0, yielding a finite tail; ξ = 0, yielding an exponential tail; and ξ > 0, yielding a heavier-than-exponential tail.

At left is a visualization of the observation that extreme values of a distribution (i.e., the tail) are almost independent of the base distribution and can be estimated from the datapoints above the assumed upper bound (t) of the base distribution. At right are probability distribution functions for generalized Pareto distributions with three different shapes.
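The three regimes are easy to reproduce with an off-the-shelf GPD implementation; the following snippet, using SciPy's `genpareto` (whose shape argument `c` corresponds to ξ here), is purely illustrative.

```python
import numpy as np
from scipy.stats import genpareto

x = np.linspace(0.0, 5.0, 6)
for xi in (-0.5, 0.0, 0.5):
    pdf = genpareto.pdf(x, c=xi, scale=1.0)
    print(f"xi = {xi:+.1f}:", np.round(pdf, 3))
# xi < 0: finite tail (density reaches 0 at x = -scale/xi = 2 here);
# xi = 0: exponential tail; xi > 0: heavier-than-exponential tail.
```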

Since we need multimodality and asymmetry for the base distribution, we modeled the base of the predictive distribution with a discrete binned distribution; as shown below, we discretize the real axis between two points into bins and predict the probability of the observation falling in each of these bins.

This yields a distribution that is robust to extreme values at training time, because fitting it is now a classification problem: the log-likelihood is not affected by the distance between the predicted mean and the observed point, as it would be with a Gaussian, Student's t, or other parametric distribution. Moreover, the bins' probability heights are independent of one another, so they can capture asymmetries or multiple modes in the distribution.
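Concretely, training the base distribution looks like ordinary classification. In the sketch below, the bin edges and counts are assumptions made for illustration; the loss is just the cross-entropy between the predicted bin probabilities and the bin containing the observed yardage.

```python
import torch
import torch.nn.functional as F

edges = torch.linspace(-10.0, 60.0, 71)   # 70 one-yard bins (illustrative)
logits = torch.randn(4, 70)               # network outputs for 4 plays
yards = torch.tensor([3.2, -1.5, 27.0, 59.9])  # observed yards gained

bins = torch.bucketize(yards, edges) - 1  # index of each containing bin
loss = F.cross_entropy(logits, bins)      # negative log-likelihood
print(bins.tolist(), loss.item())
```

An outlier simply raises the probability of its own bin at the next gradient step; unlike a squared or absolute error, the loss does not grow with the outlier's distance from the rest of the data.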

From the binned distribution, we delimit the lower tail at the 5th percentile and replace it with a weighted GPD. Analogously, we delimit the upper tail at the 95th percentile and replace it with another weighted GPD, yielding the SBP shown below.

At left is a binned distribution; at right is a spliced binned distribution, whose topmost and bottommost quantiles have been replaced with weighted generalized Pareto distributions.

The figure on the left above shows that the base distribution is indeed robust: the event represented by the extreme red dot will not bias the learned mean of the distribution but simply inflate the probability associated with the far-right bin.

However, this still leaves two problems: (i) although the red-dot event was observed to occur, the binned distribution would give it zero probability; and, more generally, (ii) the distribution would predict with certainty that extreme (i.e., great) plays do not occur. Extreme yardage, from deep-pass touchdowns, breakaway interceptions, and the like, is rare, but it is the adrenaline of the sport and exactly what we are most interested in describing probabilistically. The SBP figure above on the right illustrates how the GPD tails quantify how much less likely, i.e., how much harder, each incremental yard is.
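The following sketch shows the mechanics of the splice: below an estimated 5th-percentile threshold and above a 95th-percentile one, density is supplied by weighted GPDs. The bin probabilities and tail parameters here are made up for illustration, and the renormalization of the base bins, which the actual model handles carefully, is omitted.

```python
import torch
from scipy.stats import genpareto

torch.manual_seed(0)
probs = torch.softmax(torch.randn(70), dim=0)  # binned base distribution
edges = torch.linspace(-10.0, 60.0, 71)
cdf = torch.cumsum(probs, dim=0)

# Thresholds: bin edges nearest the 5th and 95th percentiles.
t_lo = float(edges[int(torch.searchsorted(cdf, torch.tensor(0.05)))])
t_hi = float(edges[int(torch.searchsorted(cdf, torch.tensor(0.95))) + 1])

def sbp_pdf(y, xi=0.2, beta=5.0):
    """Spliced density: GPD beyond the thresholds, bins in between."""
    if y > t_hi:  # upper tail carries 5% of the probability mass
        return 0.05 * float(genpareto.pdf(y - t_hi, c=xi, scale=beta))
    if y < t_lo:  # lower tail, mirrored
        return 0.05 * float(genpareto.pdf(t_lo - y, c=xi, scale=beta))
    k = int(torch.bucketize(torch.tensor(y), edges)) - 1
    return float(probs[k])  # one-yard bins, so height equals mass

print(sbp_pdf(65.0), sbp_pdf(10.0))  # a tail event vs. a base-range gain
```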

The binned distribution and the GPDs are parameterized by the neural network described above, which takes play matrices as input and outputs parameters: the probability of each bin, as well as ξ and β for each of the GPDs, which together determine the predicted probability of any yards-gained value.

Establishing gradient-based learning of heavy-tailed distributions has been a challenge in the ML community. Carreau and Bengio's hybrid Pareto model stitches GPD tails onto parametric distributions, but since the likelihood isn't differentiable with respect to the threshold t, their model must be supplemented with simulation and numerical approximations, forgoing time-varying applications. Other methods, such as SPOT, DSPOT, and NN-SPOT, forgo modeling the base distribution and capture only the tails outside a fixed distance from the mean, which precludes higher-order nonstationarity and asymmetric tails.

While prior methods use a fixed threshold t to delimit tails, by modeling the base distribution, we obtain a time-varying threshold. Furthermore, training a single neural network to maximize the log-probability of the observed time step under the binned and GPD distributions yields a prediction that accounts for temporal variation in all moments of the distribution: the mean and variance as well as tail heaviness and scale, including asymmetric tails. The capabilities of the different approaches are summarized in the table below.

Capabilities of different approaches.

We need a distributional prediction to grade a QB's performance, but to compare our model's accuracy to other models', we use point predictions of yards gained. The table below compares the MAE of our method's predictive median against that of a neural network with Gaussian output and against the point prediction of XGBoost, a decision-tree-based model.

Mean absolute error (MAE) on yards gained for roughly 5,000 plays.
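Schematically, the SBP model's point prediction is its predictive median, read off the binned CDF; the snippet below shows that computation on synthetic stand-in numbers (they are not the results in the table).

```python
import torch

torch.manual_seed(0)
probs = torch.softmax(torch.randn(5000, 70), dim=-1)  # 5,000 plays
edges = torch.linspace(-10.0, 60.0, 71)
cdf = torch.cumsum(probs, dim=-1)

# Predictive median: first bin where the CDF crosses 0.5.
med_bin = torch.searchsorted(cdf, torch.full((5000, 1), 0.5)).squeeze(-1)
median_pred = edges[med_bin]                  # left edge of the median bin
actual = torch.empty(5000).uniform_(-10, 60)  # synthetic yards gained
print(f"MAE: {(actual - median_pred).abs().mean():.2f} yards")
```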

We have released PyTorch code for the spliced binned-Pareto model, along with a demo notebook.

The NGS passing score

Our model’s predictive PDF quantifies how likely each yardage gain is, for a league-average QB, given a specific play’s circumstances. Therefore, evaluating the actual yards gained in the cumulative distribution function (CDF) of that play’s SBP distribution yields a ranking between 0 and 1 of that QB’s performance relative to peer QBs.

This CDF ranking, under some further standardizations, becomes the QB passing score at the play level.
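As a sketch under these definitions, the play-level computation is just a CDF evaluation; `sbp_cdf` below stands in for the predicted spliced binned-Pareto CDF, and the rescaling to a 0-100 range is our simplification of the NFL's standardization.

```python
def play_score(actual_yards, sbp_cdf):
    """Rank the play against a league-average QB's predicted outcomes."""
    rank = sbp_cdf(actual_yards)  # P(league-average QB gains <= actual)
    return 100.0 * rank           # rescale the [0, 1] ranking

# E.g., a gain at the predicted 87th percentile scores 87.
print(play_score(12.0, lambda y: 0.87))
```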

Aggregating scores over multiple plays yields game-, season-, or other split-level QB passing scores. For example, based on all targeted pass attempts in the '21 season, Kyler Murray has a score of 87, ranking him ninth among playoff QBs.

Under pressure, Murray's score jumps to 89; zooming in on passes thrown between 2.5 and 4 seconds after the snap (in 2020 and 2021), Murray scores a 99, in a five-way tie for the highest possible score. Other splits, such as deep passes, can also be contextualized with the NGS passing score.

Finally, the tables below show that the NGS passing score correlates better with win percentages and playoff percentages than preceding passing metrics.

At left is the correlation of passing score with winning percentages and playoff percentages. At right is the comparison of passing score and other metrics.

Acknowledgments: Brad Gross
