Improving forecasting by learning quantile functions

Learning the complete quantile function, which maps probabilities to variable values, rather than building separate models for each quantile level, enables better optimization of resource trade-offs.

The quantile function is a mathematical function that takes a probability (a quantile level between 0 and 1) as input and outputs the value of a variable: the value that the variable falls at or below with that probability. It can answer questions like, “If I want to guarantee that 95% of my customers receive their orders within 24 hours, how much inventory do I need to keep on hand?” As such, the quantile function is commonly used to answer forecasting questions.
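For a distribution with a known closed form, the quantile function is just the inverse CDF, and the inventory question above becomes a one-liner. A minimal sketch, assuming (purely for illustration) that daily demand is normally distributed with mean 1,000 and standard deviation 100:

```python
from statistics import NormalDist

# Hypothetical demand distribution: mean 1,000 units, std. dev. 100 units.
demand = NormalDist(mu=1000, sigma=100)

# The quantile function (inverse CDF) answers: how much inventory
# covers demand 95% of the time?
stock_for_95 = demand.inv_cdf(0.95)
print(round(stock_for_95))  # about 1,164 units
```

Querying a different service level, say `demand.inv_cdf(0.94)`, is just another evaluation of the same function, which is exactly the kind of trade-off exploration discussed later in this post.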

In practical cases, however, we rarely have a tidy formula for computing the quantile function. Instead, statisticians usually use quantile regression to approximate it at a single quantile level at a time. That means that if you decide you want to compute it at a different quantile level, you have to build a new regression model — which, today, often means retraining a neural network.

In a pair of papers we’re presenting at this year’s International Conference on Artificial Intelligence and Statistics (AISTATS), we describe an approach to learning an approximation of the entire quantile function at once, rather than simply approximating it for each quantile level.


This means that users can query the function at different points, to optimize the trade-offs between performance criteria. For instance, it could be that lowering the guarantee of 24-hour delivery from 95% to 94% enables a much larger reduction in inventory, which might be a trade-off worth making. Or, conversely, it could be that raising the guarantee threshold — and thus increasing customer satisfaction — requires very little additional inventory.

Our approach is agnostic as to the shape of the distribution underlying the quantile function. The distribution could be Gaussian (the bell curve, or normal distribution); it could be uniform; or it could be anything else. Not locking ourselves into any assumptions about distribution shape allows our approach to follow the data wherever it leads, which increases the accuracy of our approximations.

In the first of our AISTATS papers, we present an approach to learning the quantile function in the univariate case, where there’s a one-to-one correspondence between probabilities and variable values. In the second paper, we consider the multivariate case.

The quantile function

Any probability distribution — say, the distribution of heights in a population — can be represented as a function, called the probability density function (PDF). The input to the function is a value of a variable (a particular height), and the output is a nonnegative number representing the relative likelihood of that value (roughly, the fraction of people in the population whose height falls in a narrow band around it).

The graph of a probability density function (blue line) and its associated cumulative distribution function (orange line).

A useful related function is the cumulative distribution function (CDF), which is the probability that the variable will take a value at or below a particular value — for instance, the fraction of the population that is 5’6” or shorter. The CDF’s values are between 0 (no one is shorter than 0’0”) and 1 (100% of the population is shorter than 500’0”).

Technically, the CDF is the integral of the PDF: it computes the area under the probability curve up to the target point. Because the CDF accumulates that area as the input grows, it is monotonically non-decreasing: the higher the input value, the higher (or at least no lower) the output value.
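The integral relationship is easy to see numerically: accumulating the area under a PDF on a fine grid (a Riemann sum) produces the CDF. A small sketch, using a standard Gaussian PDF purely as an example:

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    """Standard Gaussian density, used here only as an illustrative PDF."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# Accumulate area under the PDF on a fine grid: a Riemann-sum CDF.
dx = 0.001
grid = [-8 + i * dx for i in range(int(16 / dx) + 1)]
cdf = []
area = 0.0
for x in grid:
    area += gaussian_pdf(x) * dx
    cdf.append(area)

# The accumulated area is monotonically non-decreasing ...
assert all(a <= b for a, b in zip(cdf, cdf[1:]))
# ... and for a symmetric distribution, half the area lies below the mean.
print(round(cdf[8000], 2))  # grid[8000] is x ~ 0; prints 0.5
```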

Where the CDF is invertible, the quantile function is simply its inverse. The quantile function’s graph can be produced by flipping the CDF graph over — that is, rotating it 180 degrees around a diagonal axis that extends from the lower left to the upper right of the graph.
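This inverse relationship is easy to see with a finite sample: sorting the data gives the empirical CDF, and reading the sorted list at a given level inverts it. A small illustration with made-up height data:

```python
import bisect
import math

def empirical_cdf(samples, x):
    """Fraction of samples at or below x."""
    xs = sorted(samples)
    return bisect.bisect_right(xs, x) / len(xs)

def empirical_quantile(samples, q):
    """Inverse of the empirical CDF: the smallest sample value v
    such that at least a fraction q of the samples are <= v."""
    xs = sorted(samples)
    idx = min(len(xs) - 1, max(0, math.ceil(q * len(xs)) - 1))
    return xs[idx]

heights = [62, 64, 65, 66, 67, 68, 69, 70, 72, 75]  # inches, made up
print(empirical_quantile(heights, 0.5))   # 67 (the median)
# Round trip: quantile then CDF recovers at least the level we asked for.
print(empirical_cdf(heights, empirical_quantile(heights, 0.9)))  # 0.9
```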

The quantile function is simply the inverse of the cumulative distribution function (where that inverse exists). Its graph can be produced by flipping the cumulative distribution function's graph over.

Like the CDF, the quantile function is monotonically non-decreasing. That’s the fundamental observation on which our method rests.

The univariate case

The architecture of our quantile function estimator (the incremental quantile function, or IQF), which enforces the monotonicity of the quantile function by representing the value of each quantile as an incremental increase in the value of the previous quantile.

One of the drawbacks of the conventional approach to approximating the quantile function — estimating it only at specific points — is that it can lead to quantile crossing. That is, because each prediction is based on a different model, trained on different local data, the predicted variable value for a given probability could be lower than the value predicted for a lower probability. This violates the requirement that the quantile function be monotonically non-decreasing.
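For context, a single-quantile regression model is typically trained with the quantile (or "pinball") loss, which penalizes under- and over-prediction asymmetrically. Nothing in this per-quantile objective ties the separate models together, which is why their predictions can cross. A minimal sketch:

```python
def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss for quantile level q in (0, 1).
    Under-prediction is weighted by q, over-prediction by (1 - q)."""
    error = y_true - y_pred
    return max(q * error, (q - 1) * error)

# At the 0.9 quantile, under-predicting by 2 is penalized heavily ...
print(pinball_loss(10.0, 8.0, 0.9))   # 0.9 * 2 = 1.8
# ... while over-predicting by the same amount is cheap.
print(pinball_loss(10.0, 12.0, 0.9))  # 0.1 * 2 = 0.2
```

Minimizing this loss over a dataset pushes the prediction toward the q-th quantile of the conditional distribution.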

An approximation of the quantile function with five knots (anchor points), using (mostly) linear interpolation.
An approximation of the quantile function with 20 knots (anchor points).

To avoid quantile crossing, our method learns a predictive model for several different input values — quantiles — at once, spaced at regular intervals between 0 and 1. The model is a neural network designed so that the prediction for each successive quantile is an incremental increase of the prediction for the preceding quantile.

Once our model has learned estimates for several anchor points that respect the monotonicity of the quantile function, we can estimate the full function through simple linear interpolation between the anchor points (called “knots” in the literature), with nonlinear extrapolation to handle the tails of the function.

Where training data is plentiful enough to enable a denser concentration of anchor points (knots), linear interpolation provides a more accurate approximation.
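The incremental construction can be sketched in a few lines. Here, unconstrained network outputs are passed through softplus (one common choice for forcing positivity; this is a hypothetical simplification, not the paper's exact parameterization) and cumulatively summed, so each knot value is the previous knot plus a non-negative increment. Querying any quantile level then reduces to linear interpolation between knots; the nonlinear tail extrapolation is omitted here:

```python
import math

def softplus(x):
    """Smooth positive function, one common way to constrain increments > 0."""
    return math.log1p(math.exp(x))

def knots_from_raw_outputs(raw):
    """Map unconstrained network outputs to monotonically non-decreasing
    knot values: the first knot plus cumulative positive increments."""
    values = [raw[0]]
    for r in raw[1:]:
        values.append(values[-1] + softplus(r))
    return values

def query_quantile(levels, values, q):
    """Piecewise-linear interpolation between knots (tails simply clamped;
    the real model uses nonlinear extrapolation beyond the outer knots)."""
    if q <= levels[0]:
        return values[0]
    if q >= levels[-1]:
        return values[-1]
    for i in range(len(levels) - 1):
        if levels[i] <= q <= levels[i + 1]:
            w = (q - levels[i]) / (levels[i + 1] - levels[i])
            return values[i] + w * (values[i + 1] - values[i])

levels = [0.1, 0.3, 0.5, 0.7, 0.9]   # regularly spaced quantile levels
raw = [2.0, -1.0, 0.5, 0.3, -0.2]    # made-up network outputs
values = knots_from_raw_outputs(raw)
assert all(a <= b for a, b in zip(values, values[1:]))  # no quantile crossing
```

Because the increments are positive by construction, no choice of network outputs can produce crossing quantiles.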

To test our method, we applied it to a toy distribution with three arbitrary peaks, to demonstrate that we don’t need to make any assumptions about distribution shape.

The true distribution (red, left), with three arbitrary peaks; our method's approximation using five knots (center); and our method's approximation using 20 knots (right).

The multivariate case

So far, we’ve been considering the case in which our distribution applies to a single variable. But in many practical forecasting use cases, we want to consider multivariate distributions.

For instance, if a particular product requires a rare battery that isn't included, the forecast of demand for that battery will probably be correlated with the forecast of demand for the product.


Similarly, if we want to predict demand over several different time horizons, we would expect there to be some correlation between consecutive predictions: demand shouldn’t undulate too wildly. A multivariate probability distribution over time horizons should capture that correlation better than a separate univariate prediction for each horizon.

The problem is that the notion of a multivariate quantile function is not well defined. If the CDF maps a combination of variable values to a single probability, when you perform that mapping in reverse, which combination of values do you map to?

This is the problem we address in our second AISTATS paper. Again, the core observation is that the quantile function must be monotonically non-decreasing. So we define the multivariate quantile function as the derivative (in higher dimensions, the gradient) of a convex function.

A convex function is one that tends everywhere toward a single global minimum: in two dimensions, it looks like a U-shaped curve. The derivative of a function computes the slope of its graph: again in the two-dimensional case, the slope of a convex function is negative but flattening as it approaches the global minimum, zero at the minimum, and increasingly positive on the other side. Hence, the derivative is monotonically increasing.

A convex function (blue) and its monotonically increasing derivative (green).

This two-dimensional picture generalizes readily to higher dimensions. In our paper, we describe a method for training a neural network to learn a quantile function that is the derivative of a convex function. The architecture of the network enforces convexity, and, essentially, the model learns the convex function using its derivative as a training signal.
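In higher dimensions, "monotonically increasing" becomes monotonicity of the gradient map: for a convex function f and any two points a and b, (∇f(a) − ∇f(b)) · (a − b) ≥ 0. A quick numerical check of this property on an illustrative convex quadratic (not the network itself), using finite-difference gradients:

```python
import random

def f(p):
    """An illustrative convex function of two variables
    (its Hessian [[2, 0.3], [0.3, 1]] is positive definite)."""
    x, y = p
    return x**2 + 0.5 * y**2 + 0.3 * x * y

def grad(p, h=1e-6):
    """Central finite-difference gradient of f at p."""
    x, y = p
    return ((f((x + h, y)) - f((x - h, y))) / (2 * h),
            (f((x, y + h)) - f((x, y - h))) / (2 * h))

random.seed(0)
for _ in range(1000):
    a = (random.uniform(-5, 5), random.uniform(-5, 5))
    b = (random.uniform(-5, 5), random.uniform(-5, 5))
    ga, gb = grad(a), grad(b)
    dot = (ga[0] - gb[0]) * (a[0] - b[0]) + (ga[1] - gb[1]) * (a[1] - b[1])
    assert dot >= -1e-6  # the gradient of a convex function is monotone
print("monotonicity holds on all sampled pairs")
```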

In addition to real-world datasets, we tested our approach on the problem of simultaneous prediction across multiple time horizons, using a dataset that follows a multivariate Gaussian distribution. Our experiments showed that, indeed, our approach captures the correlations between successive time horizons better than a univariate approach does.

Three self-correlation graphs that map a time series against itself. At left is the ground truth. In the center is the forecast produced by a standard univariate quantile function, in which each time step correlates only with itself. At right is the forecast produced using our method, which better captures correlations between successive time steps.

This work continues a line of research at Amazon combining quantile regression and deep learning to solve forecasting problems at a massive scale. In particular, it builds upon work on the MQ-CNN model proposed by a group of Amazon scientists in 2017, extensions of which are currently powering Amazon’s demand forecasting system. The current work is also closely related to spline quantile function RNNs, which — like the multivariate quantile forecaster — started as an internship project.

Code for all these methods is available in the open source GluonTS probabilistic time series modeling library.

Acknowledgements

This work would not have been possible without the help of our awesome co-authors, whom we would like to thank for their contributions to these two papers: Kelvin Kan, Danielle Maddix, Tim Januschowski, Konstantinos Benidis, Lars Ruthotto, Yuyang Wang, and Jan Gasthaus.
