Determining causality in correlated time series

New method goes beyond Granger causality to identify only the true causes of a target time series, given some graph constraints.

Given a set of observed time series and a target time series of interest, can we identify the causes of the target, even when hidden time series may be influencing the system? This question arises in many fields, such as finance, biology, and supply chain management, where sequences of data constitute partial observations of a system.

Imagine, for instance, that we have time series for the prices of dairy products. From the data alone, can we identify the causes of fluctuations in the price of butter?

[Figure: The prices of dairy products in Germany are correlated, but do any of those correlations imply causation?]

The standard way to represent causal relationships between variables that are associated with each other is with a graph whose nodes represent variables and whose edges represent causal relationships.

In a paper that we presented at the 2021 International Conference on Machine Learning (ICML), coauthored with Bernhard Schölkopf, we described a new technique for detecting all the direct causal features of a target time series, and only its direct or indirect causal features, given some graph constraints. In experiments, the proposed method yielded false-positive rates of detected causes close to zero.

The graph constraints we require concern the target and the "memory" of certain hidden time series (in some cases, the lack of dependence on their own pasts). We wanted to limit our assumptions to those that follow naturally from the setting and could not otherwise be avoided. In particular, we wanted to avoid the strong assumptions made by other methods, such as excluding hidden common causes (unobserved time series that cause multiple observed ones).

We also wanted to avoid other drawbacks of prior methods, such as requiring interventions on the system (to test for particular causal sequences) or requiring large conditioning sets (sets of variables that must be controlled for to detect dependencies) and exhaustive conditional-independence tests, both of which weaken the statistical power of the results.

Our method, by contrast, accounts for hidden common causes, uses only observational data, and constructs conditioning sets that are small and efficient in terms of signal-to-noise ratio, given some graph constraints that seemed hard to avoid.

[Figure: The researchers' new method constructs a conditioning set, a set of variables that must be controlled for, that enables tests for conditional dependence and independence in a causal graph.]

Conditional independence

As is well known, statistical dependence (i.e., correlation in linear cases) does not imply causation. The graphs we use to represent causal relationships between associated variables are so-called directed acyclic graphs (DAGs), meaning the edges have direction and there are no loops. The direction of the edges (represented by arrows in the graphs below) indicates the direction of causal influence. In the time series case, we use “full time DAGs”, where each node represents a different time step from a time series. 

To determine whether a third variable, S, explains a statistical dependency (i.e., correlation) between two other variables, one checks whether the dependency disappears after restricting the statistics to data points with fixed values of S. In larger graphs, S can be a whole set of variables, which we call a conditioning set. Controlling for all the variables in a conditioning set is known as conditional-independence testing, and it is the main tool we use in our method.
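
To make this concrete, here is a minimal sketch of a conditional-independence test based on partial correlation, which is valid in the linear-Gaussian case. This is a generic illustration of the tool, not the specific test used in the paper; the function name and arguments are our own.

```python
import numpy as np
from scipy import stats

def partial_corr_independent(x, y, cond, alpha=0.05):
    """Return True if x and y look conditionally independent given cond.

    x, y: 1-D arrays of samples (assumed centered).
    cond: 2-D array of shape (n_samples, n_conditioners), or None.
    """
    n = len(x)
    if cond is None or cond.shape[1] == 0:
        r, _ = stats.pearsonr(x, y)
        k = 0
    else:
        # Regress the conditioning variables out of x and y, then
        # correlate the residuals (a linear stand-in for conditioning).
        bx, *_ = np.linalg.lstsq(cond, x, rcond=None)
        by, *_ = np.linalg.lstsq(cond, y, rcond=None)
        r, _ = stats.pearsonr(x - cond @ bx, y - cond @ by)
        k = cond.shape[1]
    # Fisher z-transform gives an approximate test of zero partial correlation.
    z = np.arctanh(r) * np.sqrt(n - k - 3)
    p_value = 2 * stats.norm.sf(abs(z))
    return p_value > alpha
```

In practice, nonlinear dependencies call for more general tests (e.g., kernel-based ones), but the interface is the same: two variables and a conditioning set in, a verdict on conditional independence out.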

Another important notion is that of confounding. If two variables, X and Y, are dependent, not because one causes the other, but because they’re both caused by a third variable, U, we say that they are confounded by U.

Before we get into the complex graphs of time series, let's present the intuition behind our method with simple graphs. 

In the graphs below, we can distinguish between causal influence and confounding by searching for different patterns of conditional independence. In both graphs, X and Y are dependent (i.e., they vary together). But in the left-hand graph, Z and Y are independent when we condition on the cause X; i.e., when we control for X, variations in Y become independent of variations in Z.

When, however, there is a hidden confounder between X and Y, as in the graph at right, Z and Y become dependent when conditioning on X.

This can seem counterintuitive. When we condition on a variable, we treat it as if we know its outcome. In the graph below, because we know how Z contributes to X, the difference between this contribution and the actual value of X comes from U (with some variation from noise). Since Y varies with U, it reflects that variation as well, and Z and Y become dependent.

[Figure: An example of how the presence of a confounder can create statistical dependence.]
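
A small simulation illustrates both patterns. This is a sketch under assumed linear-Gaussian models with arbitrary coefficients; linearly regressing out a variable stands in for conditioning on it.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 100_000

# Left-hand graph (chain): Z -> X -> Y
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 0.8 * x + rng.normal(size=n)

# Right-hand graph (hidden confounder): Z -> X <- U -> Y, with U unobserved
u = rng.normal(size=n)
x2 = 0.8 * z + 0.8 * u + rng.normal(size=n)
y2 = 0.8 * u + rng.normal(size=n)

def corr_given(a, b, c):
    """Correlation of a and b after linearly regressing out c
    (a stand-in for conditioning on c)."""
    res_a = a - np.polyval(np.polyfit(c, a, 1), c)
    res_b = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(res_a, res_b)[0, 1]

print(corr_given(z, y, x))    # ~0: Z and Y become independent given the cause X
print(corr_given(z, y2, x2))  # clearly nonzero: conditioning on X induces dependence
```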

Causality in time series

This idea of searching for characteristic patterns of conditional independence to distinguish causes from confounders is central to our method. In the time series case, the graph is much more complicated than in the examples above. Here we show such a time series graph:

[Figure: A full time graph with hidden time series (U).]

Here, we have a univariate (one-dimensional) target time series, Y, whose causes we want to find. Then we have several observed candidate time series, Xi, which might be causing the target or have different dependencies with it. Finally, we allow for the existence of several hidden time series, U.

We know the directions of some edges from the time order, which is helpful. On the other hand, time series' dependence on their own pasts complicates the picture, because it creates common-cause structures between nodes.

For each candidate time series, we want to isolate the current and previous node and the corresponding target node. We thus extract triplets like the one indicated by green and yellow in the graph below.

[Figure: Tests for conditional dependence and independence in the full time graph.]

If we manage to do that, then it is enough to check whether the green nodes become independent when we simultaneously condition on the yellow node and all the purple ones. 

If there is a hidden confounder between the yellow node and the target's green node, then conditioning on the yellow node will force a dependence between the two green nodes, as in the first example above. But to perform that test, we need to isolate our triplet from the causal influences of the other time series.

To do that, we construct a conditioning set, S, that includes at most one node from each time series that is dependent on the target: the node with an edge into the previous time step of the target (Yt in the graph above). We also include the previous time step of the target itself (Yt) in S, to remove the target's dependence on its own past, along with the yellow node.

Here we see that the relationship between Xj and Y is indeed confounded (Xj does not cause Y, although the two appear related). The second condition of our method is violated, and Xj is therefore correctly rejected as a cause of Y.

Given some restrictions on the graph, which we do not consider extreme given the hardness of hidden confounding, we propose and prove two theorems for the identification of direct and indirect causes in single-lag graphs, that is, graphs in which a node in a candidate time series shares only one edge with nodes in the target time series. These theorems yield an algorithm that requires only two conditional-independence tests per candidate time series, with well-defined conditioning sets, so it scales linearly with the number of candidate time series.
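
To make the structure of the procedure concrete, here is a minimal sketch of how such a two-test screen might be wired together in the single-lag case, reusing a generic conditional-independence test like the one sketched above. The find_causes name, the exact node alignments, and the fixed lag of 1 are our simplifications for illustration; the precise node choices and conditioning sets are specified by the theorems in the paper.

```python
import numpy as np

def find_causes(candidates, target, ci_test):
    """Screen candidate time series for causes of the target series.

    candidates: dict mapping a series name to a 1-D array of length T.
    target: 1-D array of length T (the target series Y).
    ci_test(a, b, cond): returns True if a and b are conditionally
    independent given the columns of cond.
    """
    names = list(candidates)
    causes = []
    for name in names:
        x = candidates[name]
        # Conditioning set S: the target's own previous node Y_{t-1}, plus
        # at most one (past) node from every other candidate series.
        s = np.column_stack(
            [target[:-1]] + [candidates[k][:-1] for k in names if k != name]
        )
        # Test 1 (dependence): X_{t-1} must stay dependent on Y_t given S.
        cond1 = not ci_test(x[:-1], target[1:], s)
        # Test 2 (independence): X_t must become independent of Y_t once
        # X_{t-1} joins S; a hidden confounder of X and Y would violate this.
        cond2 = ci_test(x[1:], target[1:], np.column_stack([s, x[:-1]]))
        if cond1 and cond2:
            causes.append(name)
    return causes
```

Because each candidate incurs a fixed number of tests over a small conditioning set, the cost grows linearly with the number of candidate series, and the small sets preserve the statistical power of each test.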

[Figure: Graphs of the causal relationships between dairy-product prices in Germany, Ireland, and the UK, with the true-positive rates (TPR) and true-negative rates (TNR) achieved by the researchers' new method.]

We now return to our motivating example, identifying the causes of fluctuations in the price of butter. The real-world data we used to test our approach included the price of raw milk, the price of butter, and, depending on the country, the prices of other dairy products, such as cheese and whey powder. Our method correctly deduced that the price of butter was caused by the price of raw milk but not by the prices of other dairy products, although they were strongly dependent on it. In one dataset, where the data did not include the price of raw milk, our method correctly deduced that the dependencies between the price of butter and the prices of other dairy products did not imply causation.
