Some highlights from the 2020 NFL season, quantified.

How AWS scientists help create the NFL’s Next Gen Stats

In its collaboration with the NFL, AWS contributes cloud computing technology, machine learning services, business intelligence services — and, sometimes, the expertise of its scientists.

At Super Bowl LV, Tom Brady won his seventh title, in his first year as quarterback for the Tampa Bay Buccaneers, whose defense held the high-octane offense of the defending champion Kansas City Chiefs to only nine points.

At key points, the broadcast was augmented by real-time evaluations using the NFL’s Next Gen Stats (NGS) powered by AWS. Several of those stats, such as pass completion probability or expected yards after catch, use machine learning models to analyze the data streaming in from radio frequency ID tags on players’ shoulder pads and on the ball.

Since 2017, Amazon Web Services (AWS) has been the NFL’s official technology provider in every phase of the development and deployment of Next Gen Stats. AWS stores the huge amount of data generated by tracking every player on every play in every NFL game — nearly 300 million data points per season; NFL software engineers use Amazon SageMaker to quickly build, train, and deploy the machine learning (ML) models behind their most sophisticated stats; and the NFL uses the business intelligence tool Amazon QuickSight to analyze and visualize the resulting statistical data.

“We wouldn’t have been able to make the strides we have as quickly as we have without AWS,” says Michael Schaefer, the director of product and analytics for the NFL’s Next Gen Stats. “SageMaker makes the development of ML models easy and intuitive — particularly for those who may not have deep familiarity with ML.”

“And where we’ve needed additional ML expertise,” Schaefer adds, “AWS’s data scientists have been an invaluable resource.”

Secondary variance

Take, for instance, the problem of defender ghosting, or predicting the trajectories of defensive backs after the ball leaves the quarterback’s hand. 

Defender ghosting is not itself a Next Gen Stat, but it’s an essential component of stats under development. For instance, defender ghosting can help estimate how a play would have evolved if the quarterback had targeted a different receiver: would the defensive backs have reached the receiver in time to stop a big gain? Defender ghosting can thus help evaluate a quarterback’s decision making.

Defender ghosting can help evaluate a quarterback’s decision making — by, for instance, predicting how a play would have developed if the quarterback had targeted a different receiver.
Credit: Gregory Trott/AP

Using SageMaker, the NFL’s Next Gen Stats team has constructed some sophisticated machine learning models: the completion probability model, for instance, factors in 10 on-field measurements — including the distance of the pass, distance between the quarterback and the nearest pass rushers, and distance between the receiver and the nearest defenders — and outputs the (league-average) likelihood of completing a pass under those conditions.
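As a sketch of the kind of geometric features such a model consumes, here is how three of the distances mentioned above could be computed from raw (x, y) positions. The function and key names are hypothetical illustrations, not the NFL's actual schema:

```python
import math

def completion_features(qb, receiver, rushers, defenders):
    """Compute three of the distance features described in the article.

    qb, receiver: (x, y) positions in yards.
    rushers, defenders: lists of (x, y) positions.
    Names are illustrative; the real model uses 10 on-field measurements.
    """
    return {
        "pass_distance": math.dist(qb, receiver),
        "nearest_rusher": min(math.dist(qb, r) for r in rushers),
        "nearest_defender": min(math.dist(receiver, d) for d in defenders),
    }
```

In the real model, features like these feed a classifier that outputs the league-average probability of a completion under those conditions.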

But predicting the trajectories of defensive backs — the cornerbacks and safeties who defend against downfield plays — is a particularly tough challenge. Defensive backs tend to cover more territory than other defensive players, and they also tend to make more radical adjustments in coverage as a play develops.

Predicting on-field trajectories is particularly difficult in the case of defensive backs — like number 32, DeShon Elliott, in this image — who tend to cover more territory and make more radical trajectory adjustments than other defensive players.
Credit: Kenneth David Richmond

So to build a defender ghosting model, the NFL engineers joined forces with AWS senior scientist Lin Lee Cheong and her team at the Amazon Machine Learning Solutions Lab.

The first thing the AWS-NFL team did was to filter anomalies out of the training data. In 99.9% of cases, the NFL player-tracking system is accurate to within six inches, but like all radio-based technology, it’s susceptible to noise that can compromise accuracy.

“We're scientists. We’re not football experts,” Cheong says. “So we worked closely with the folks from NFL to understand the gameplay. Basic anomaly detection, as well as cleaning of the data, helped tremendously.”

The research team excised player-tracking data that violated a few cardinal rules. For instance, players’ trajectories should never take them off the field, and their speed should never exceed 12.5 yards per second (NFL players’ measured speeds top out at around 11 yards per second).
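Those cardinal rules translate directly into a simple validity filter. The sketch below assumes tracking samples arrive as (x, y, speed) tuples in yards and yards per second; the actual NGS data format is not public:

```python
# Physical bounds for a tracking sample (values from the article;
# field dimensions include both end zones).
FIELD_LENGTH = 120.0   # yards
FIELD_WIDTH = 53.3     # yards
MAX_SPEED = 12.5       # yards/second; measured NFL speeds top out near 11

def is_valid_sample(x, y, speed):
    """A sample is kept only if the player is on the field and moving
    at a humanly possible speed."""
    on_field = 0.0 <= x <= FIELD_LENGTH and 0.0 <= y <= FIELD_WIDTH
    return on_field and speed <= MAX_SPEED

def filter_tracking(samples):
    """Drop anomalous (x, y, speed) samples before training."""
    return [s for s in samples if is_valid_sample(*s)]
```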

Where we’ve needed additional ML expertise, AWS’s data scientists have been an invaluable resource.
Michael Schaefer, director of product and analytics for the NFL’s Next Gen Stats

Next, the team winnowed down the “feature set” for the model. Features are the different types of input data on which a machine learning model bases its predictions. For every player on the field, the NFL tracking system provides location, direction of movement, and speed, which are all essential for predicting defensive backs’ trajectories. But any number of other features — down and distance, distance to the goal line, elapsed game time, length of the current drive, temperature — could, in principle, affect player performance.

The more input features a machine learning model has, however, the more difficult it is to tease out each feature’s correlation with the phenomenon the model is trying to predict. Absent a huge amount of training data, it’s usually preferable to keep the feature set small.

To predict trajectories, the AWS researchers planned to use a deep-learning model. But first they trained a simpler model, called a gradient boosting model, on all the available features. 

Gradient boosting models tend to be less accurate than neural networks, but they make it easy to see which input features make the largest contributions to the model output. The AWS-NFL team chose the features most important to the gradient boosting model, and just those features, as inputs to the deep-learning model.
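The idea behind that selection step can be illustrated with a toy version: rank each candidate feature by how much a single threshold split on it reduces squared error, which is the quantity a gradient boosting model's importance scores aggregate over many splits. This is a didactic sketch, not the team's SageMaker pipeline:

```python
def split_gain(xs, ys):
    """Best reduction in squared error from one threshold split on a feature."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    base = sse(ys)
    best = 0.0
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        best = max(best, base - (sse(left) + sse(right)))
    return best

def rank_features(rows, ys, names):
    """Order feature names by single-split gain, most informative first."""
    gains = [split_gain([r[i] for r in rows], ys) for i in range(len(names))]
    return [n for _, n in sorted(zip(gains, names), reverse=True)]
```

A feature whose best split explains none of the variation in the target (a constant, say) would rank last and be dropped from the deep-learning model's inputs.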

That model proved quite accurate at predicting defensive backs’ trajectories. But the researchers’ job wasn’t done yet.

Quantifying the hypothetical

It was straightforward to calculate the model’s accuracy on plays that had actually taken place on NFL football fields: the researchers simply fed the model a sequence of three player position measurements and determined how well it predicted the next ten.
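That check boils down to comparing a predicted position sequence against the observed one. One standard metric for this kind of comparison (the article does not specify which the team used) is average displacement error, the mean Euclidean distance between predicted and true positions:

```python
import math

def average_displacement_error(pred, truth):
    """Mean Euclidean distance between predicted and actual (x, y) positions,
    in yards. Lower is better."""
    assert len(pred) == len(truth), "sequences must be the same length"
    return sum(math.dist(p, t) for p, t in zip(pred, truth)) / len(pred)
```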

But one of the purposes of defender ghosting is to predict the outcomes of plays that didn’t happen, in order to assess players’ decision making. Absent ground truth about those plays’ outcomes, how do you gauge the model’s performance?


The researchers’ first recourse was to ask Schaefer to evaluate the predicted trajectories for hypothetical plays.


“He spent a week reviewing every trajectory our model predicted and pointed out all the ones that he thought were questionable, versus the ones that he thought were good,” Cheong says. “He also explained the thought process behind his evaluations, which was nuanced and complex. I thought, ‘Asking a director to spend a whole week reviewing our work after each model iteration is not scalable.’ I wanted to quantify his knowledge. So we created this composite metric that incorporates the know-how that a subject matter expert would use to evaluate trajectories.”

“By combining the NFL’s expertise in football with AWS’s ML experts, we’ve been able to develop and refine statistics for things never before quantified,” Schaefer says.

The core of Cheong and her colleagues’ composite metric is a measure of how quickly a defensive back’s trajectory diminishes his distance from the targeted receiver. Other factors include the distance the defender covers relative to the maximum distance he could have covered at top NFL speeds and whether the defender moves at superhuman speeds, which incurs a penalty in the scoring.
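A toy rendering of that composite metric might look like the following. The sampling interval and penalty weight are illustrative assumptions, and the real metric also factors in distance covered relative to the maximum possible; only the core closing-rate term and the superhuman-speed penalty are sketched here:

```python
import math

MAX_SPEED = 12.5  # yards/second, from the article's anomaly threshold
DT = 0.1          # assumed sampling interval in seconds (hypothetical)

def composite_score(defender_traj, receiver_pos, speed_penalty=1.0):
    """Toy composite metric: more negative is better.

    The core term is the rate at which the defender's distance to the
    targeted receiver shrinks; any superhuman step speed adds a penalty.
    """
    d_start = math.dist(defender_traj[0], receiver_pos)
    d_end = math.dist(defender_traj[-1], receiver_pos)
    elapsed = DT * (len(defender_traj) - 1)
    closing_rate = (d_end - d_start) / elapsed  # negative = closing distance

    penalty = 0.0
    for a, b in zip(defender_traj, defender_traj[1:]):
        if math.dist(a, b) / DT > MAX_SPEED:
            penalty += speed_penalty
    return closing_rate + penalty
```

A defender who steadily closes on the receiver at plausible speeds scores negative, matching the article's convention that good coverage yields negative values.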

At left is the deep-learning model's projected trajectory for player 3, a defensive back, when player 6 is the targeted receiver; at right is the projected trajectory when player 7 is targeted.

When the AWS researchers apply their metric to actual NFL trajectories, they get an average score of -0.1036; a negative score indicates that the defender is closing the distance between himself and the receiver. When they apply their metric to the trajectories their model predicts, they get an average score of -0.0825 — not quite as good, but in the same ballpark.

When, however, they distort the input data so that the starting orientation and velocity of 25% of defenders are random — that is, 25% of players are totally out of the play to begin with — the score goes up to a positive 0.0425. That’s a further indication that their metric captures information about the quality of the defensive backs’ play.

NFL offenses are incredibly complex, with many moving parts, and getting a statistical handle on them is much more difficult than, say, characterizing the one-on-one confrontations between a pitcher and hitter in baseball. All over the Internet, for instance, debate is raging about whether Tom Brady’s success in Tampa Bay proves that his former coach, Bill Belichick, gets too much credit for the New England Patriots’ nine Super Bowl trips in 17 years.

These types of arguments will probably go on forever; they’re part of the fun of sports fandom. But at the very least, Next Gen Stats powered by AWS should help make them more coherent.

Editor's note: The opening paragraphs of this article were revised to reflect the outcome of Super Bowl LV.
