The "Amazon Redshift re-invented" research paper will be presented at a leading database conference next month. Two of the paper's authors, Rahul Pathak (top right), vice president of analytics at AWS, and Ippokratis Pandis (bottom right), an AWS senior principal engineer, discuss the origins of Redshift, how the system has evolved in the past decade, and where they see the service evolving in the years ahead.

Amazon Redshift: Ten years of continuous reinvention

Two authors of the Amazon Redshift research paper, which will be presented at a leading international forum for database researchers, reflect on how far the first petabyte-scale cloud data warehouse has advanced since it was announced ten years ago.

Nearly ten years ago, in November 2012 at the first-ever Amazon Web Services (AWS) re:Invent, Andy Jassy, then AWS senior vice president, announced the preview of Amazon Redshift, the first fully managed, petabyte-scale cloud data warehouse. The service represented a significant leap forward from traditional on-premises data warehousing solutions, which were expensive, inflexible, and required significant human and capital resources to operate.

In a blog post on November 28, 2012, Werner Vogels, Amazon chief technology officer, highlighted the news: “Today, we are excited to announce the limited preview of Amazon Redshift, a fast and powerful, fully managed, petabyte-scale data warehouse service in the cloud.”

Further in the post, Vogels added, “The result of our focus on performance has been dramatic. Amazon.com’s data warehouse team has been piloting Amazon Redshift and comparing it to their on-premise data warehouse for a range of representative queries against a two billion row data set. They saw speedups ranging from 10x – 150x!”

That’s why, on the day of the announcement, Rahul Pathak, then a senior product manager, and the entire Amazon Redshift team were confident the product would be popular.

“But we didn’t really understand how popular,” he recalls.

“At preview we asked customers to sign up and give us some indication of their data volume and workloads,” Pathak, now vice president of Relational Engines at AWS, said. “Within about three days we realized that we had ten times more demand for Redshift than we had planned for the entire first year of the service. So we scrambled right after re:Invent to accelerate our hardware orders to ensure we had enough capacity on the ground for when the product became generally available in early 2013. If we hadn’t done that preview, we would have been caught short.”

The Redshift team has been sprinting to keep pace with customer demand ever since. Today, the service is used by tens of thousands of customers to process exabytes of data daily. In June a subset of the team will present the paper “Amazon Redshift re-invented” at a leading international forum for database researchers, practitioners, and developers, the ACM SIGMOD/PODS Conference in Philadelphia.


The paper highlights four key areas where Amazon Redshift has evolved in the past decade, provides an overview of the system architecture, describes its high-performance transactional storage and compute layers, details how smart autonomics are provided, and discusses how AWS and Redshift make it easy for customers to use the best set of services to meet their needs.

Amazon Science recently connected with two of the paper’s authors, Pathak, and Ippokratis Pandis, an AWS senior principal engineer, to discuss the origins of Redshift, how the system has evolved over the past decade, and where they see the service evolving in the years ahead.

  1. Q. 

    Can you provide some background on the origin story for Redshift? What were customers seeking, and how did the initial version address those needs?

    A. 

    Rahul: We had been meeting with customers who in the years leading up to the launch of Amazon Redshift had moved just about every workload they had to the cloud except for their data warehouse. In many cases, it was the last thing they were running on premises, and they were still dealing with all of the challenges of on-premises data warehouses. They were expensive, had punitive licensing, were hard to scale, and customers couldn’t analyze all of their data. Customers told us they wanted to run data warehousing at scale in the cloud, that they didn’t want to compromise on performance or functionality, and that it had to be cost-effective enough for them to analyze all of their data.

    So, this is what we started to build, operating under the code name Cookie Monster. This was at a time when customers’ data volumes were exploding, and not just from relational databases, but from a wide variety of sources. One of our early private beta customers tried it and the results came back so fast they thought the system was broken. It was about 10 to 20 times faster than what they had been using before. Another early customer was pretty unhappy with gaps in our early functionality. When I heard about their challenges, I got in touch, understood their feedback, and incorporated it into the service before we made it generally available in February 2013. This customer soon turned into one of our biggest advocates.

    When we launched the service and announced our pricing at $1000 a terabyte per year, people just couldn’t believe we could offer a product with that much capability at such a low price point. The fact that you could provision a data warehouse in minutes instead of months also caught everyone’s attention. It was a real game-changer for this industry segment.

    Ippokratis: I was at IBM Research at the time working on database technologies there, and we recognized that providing data warehousing as a cloud service was a game changer. It was disruptive. We were working with customers’ on-premises systems where it would take us several days or weeks to resolve an issue, whereas with a cloud data warehouse like Redshift, it would take minutes. It was also apparent that the rate of innovation would accelerate in the cloud.

    In the on-premises world, it was taking months if not years to get new functionality into a software release, whereas in the cloud new capabilities could be introduced in weeks, without customers having to change a single line of code in their consuming applications. The Redshift announcement was an inflection point; I got really interested in the cloud, and cloud data warehouses, and eventually joined Amazon [Ippokratis joined the Redshift team as a principal engineer in Oct. 2015].

  2. Q. 

    How has Amazon Redshift evolved in the nearly ten years since its launch?

    A. 

    Ippokratis: As we highlight in the paper, the service has evolved at a rapid pace in response to customers’ needs. We focused on four main areas: 1) customers’ demand for high-performance execution of increasingly complex analytical queries; 2) our customers’ need to process more data and significantly increase the number of users who need to derive insights from that data; 3) customers’ need for us to make the system easier to use; and 4) our customers’ desire to integrate Redshift with other AWS services, and the AWS ecosystem. That’s a lot, so we’ll provide some examples across each dimension.


    Offering the leading price performance has been our primary focus since Rahul first began working on what would become Redshift. From the beginning, the team has focused on making core query execution latency as low as possible so customers can run more workloads, issue more jobs into the system, and run their daily analysis. To do this, Redshift generates highly optimized C++ code for each query, distributes it across the nodes of the parallel database, and executes it. This makes Redshift unique in the way it executes queries, and it has always been the core of the service.
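    To make the code-specialization idea concrete, here is a toy sketch in Python. It is only a conceptual analogy with illustrative names; Redshift itself generates and compiles C++ for each query rather than Python, but the pattern is the same: emit source code specialized to one query's predicate, compile it, and run that instead of interpreting a generic plan.

    ```python
    # Toy sketch of query-specific code generation (conceptual analogy only;
    # Redshift emits optimized C++ per query, not Python).

    def generate_filter(column: str, op: str, literal: int) -> str:
        # Emit source code specialized to a single predicate, e.g. "price > 100".
        return (
            "def run(rows):\n"
            f"    return [r for r in rows if r['{column}'] {op} {literal}]\n"
        )

    # "Compile" the generated source, then execute it against in-memory rows.
    source = generate_filter("price", ">", 100)
    namespace = {}
    exec(compile(source, "<generated>", "exec"), namespace)

    rows = [{"price": 50}, {"price": 150}, {"price": 300}]
    print(namespace["run"](rows))  # [{'price': 150}, {'price': 300}]
    ```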

    We have never stopped innovating here to deliver our customers the best possible performance. Another thing that’s been interesting to me is that in the traditional business intelligence (BI) world, you optimize your system for very long-running jobs. But as we observe the behavior of our customers in aggregate, what’s surprising is that 90 percent of our queries among the billions we run daily in our service execute in less than one second. That’s not what people had traditionally expected from a data warehouse, and that has changed the areas of the code that we optimize.

    Rahul: As Ippokratis mentioned, the second area we focused on in the paper was customers’ need to process more data and to use that data to drive value throughout the organization. Analytics has always been super important, but eight or ten years ago it wasn’t necessarily mission critical for customers in the same way transactional databases were. That has definitely shifted. Today, core business processes rely on Redshift being highly available and performant. The biggest architectural change in the past decade in support of this goal was the introduction of Redshift Managed Storage, which allowed us to separate compute and storage, and focus a lot of innovation in each area.

    Diagram of the Redshift Managed Storage
    The Redshift managed storage layer (RMS) is designed for a durability of 99.999999999% and 99.99% availability over a given year, across multiple availability zones. RMS manages both user data as well as transaction metadata.

    Another big trend has been the desire of customers to query across and integrate disparate datasets. Redshift was the first cloud data warehouse to query data in Amazon S3, starting with Redshift Spectrum in 2017. Then we demonstrated the ability to run a query that scanned an exabyte of data in S3 as well as data in the cluster. That was a game changer.

    Customers like NASDAQ have used this extensively to query data that’s on local disk for the highest performance, but also take advantage of Redshift’s ability to integrate with the data lake and query their entire history of data with high performance. In addition to querying the data lake, integrated querying of transactional data stores like Aurora and RDS has been another big innovation, so customers can really have a high-performance analytics system that’s capable of transparently querying all of the data that matters to them without having to manage these complex integration processes that other systems require.
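    As a concrete sketch of the Spectrum pattern from Python, the example below uses AWS's open-source redshift_connector driver; the cluster endpoint, credentials, Glue database, IAM role, and table names are all placeholders. It registers an S3-backed data-lake database as an external schema and then joins cold history in S3 with hot data on local disk in a single query.

    ```python
    import redshift_connector  # AWS's open-source Python driver for Redshift

    # Placeholder connection details for an existing Redshift cluster.
    conn = redshift_connector.connect(
        host="my-cluster.example.us-east-1.redshift.amazonaws.com",
        database="dev",
        user="awsuser",
        password="REPLACE_ME",
    )
    conn.autocommit = True  # run DDL outside an explicit transaction
    cur = conn.cursor()

    # Expose an S3-backed data-lake database (catalogued in AWS Glue) as an
    # external schema; queries against it are served by Redshift Spectrum.
    cur.execute("""
        CREATE EXTERNAL SCHEMA IF NOT EXISTS lake
        FROM DATA CATALOG DATABASE 'clickstream_db'
        IAM_ROLE 'arn:aws:iam::123456789012:role/MySpectrumRole'
    """)

    # Join hot data on local disk (orders) with cold history in S3 (lake.clicks).
    cur.execute("""
        SELECT o.customer_id, COUNT(c.event_id) AS clicks
        FROM orders o
        JOIN lake.clicks c ON c.customer_id = o.customer_id
        WHERE o.order_date >= '2022-01-01'
        GROUP BY o.customer_id
    """)
    print(cur.fetchall())
    ```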

    Illustration of how a query flows through Redshift.
    This diagram from the research paper illustrates how a query flows through Redshift. The sequence is described in detail on pages 2 and 3 of the paper.

    Ippokratis: The third area we focused on in the paper was ease of use. One change that stands out for me is that on-premises data warehousing required IT departments to have a DBA (database administrator) who would be responsible for maintaining the environment. Over the past decade, the expectation from customers has evolved. Now, if you are offering data warehousing as a service, the systems must be capable of auto tuning, auto healing, and auto optimizing. This has become a big area of focus for us where we incorporate machine learning and automation into the system to make it easier to use, and to reduce the amount of involvement required of administrators.

    Rahul: In terms of ease of use, three innovations come to mind. One is concurrency scaling. As with workload management, customers previously had to manually tweak concurrency settings or manually split workloads across clusters. Now, the system automatically provisions new resources and scales up and down without customers having to take any action. This is a great example of how Redshift has gotten much more dynamic and elastic.

    The second ease of use innovation is automated table optimization. This is another place where the system is able to observe workloads and data layouts and automatically suggest how data should be sorted and distributed across nodes in the cluster. This is great because it’s a continuously learning system so workloads are never static in time.

    Related publication
    How should we split data among the nodes of a distributed data warehouse in order to boost performance for a forecasted workload? In this paper, we study the effect of different data partitioning schemes on the overall network cost of pairwise joins. We describe a generally applicable data distribution framework initially designed for Amazon Redshift, a fully managed petabyte-scale data warehouse in the cloud.

    Customers are always adding more datasets, and adding more users, so what was optimal yesterday might not be optimal tomorrow. Redshift observes this and modifies what's happening under the covers to balance that. This was the focus of a really interesting graph optimization paper that we wrote a few years ago about how to determine optimal distribution keys for how data is laid out within a multi-node parallel-processing system. We've coupled this with automated optimization of table encoding. In an analytics system, how you compress data has a big impact because the less data you scan, the faster your queries go. Customers had to reason about this in the past. Now Redshift can automatically determine how to encode data correctly to deliver the best possible performance for the data and the workload.
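    As a rough sketch of what handing an existing table over to automatic optimization can look like (the endpoint, credentials, and the sales table are placeholders), the ALTER statements below ask Redshift to manage distribution style, sort key, and column encodings itself, based on the workload it observes:

    ```python
    import redshift_connector

    conn = redshift_connector.connect(
        host="my-cluster.example.us-east-1.redshift.amazonaws.com",
        database="dev",
        user="awsuser",
        password="REPLACE_ME",
    )
    conn.autocommit = True  # these ALTERs should not run inside an explicit transaction
    cur = conn.cursor()

    # Hand a (hypothetical) existing table over to automatic table optimization:
    # Redshift then observes the workload and picks distribution, sort order,
    # and column encodings itself, revisiting them as the workload shifts.
    for stmt in (
        "ALTER TABLE sales ALTER DISTSTYLE AUTO",
        "ALTER TABLE sales ALTER SORTKEY AUTO",
        "ALTER TABLE sales ALTER ENCODE AUTO",
    ):
        cur.execute(stmt)
    ```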

    The third innovation I want to highlight here is Amazon Redshift Serverless, which we launched in public preview at re:Invent last fall. Redshift Serverless removes all of the management of instances and clusters, so customers can focus on getting to insights from data faster and not spend time managing infrastructure. With Redshift Serverless, customers can simply provision an endpoint and begin to interact with their data, and Redshift Serverless will auto scale and automatically manage the system to essentially remove all of that complexity from customers.

    Customers can just focus on their data, set limits to manage their budgets, and we deliver optimal performance within those limits. This is another massive step forward in terms of ease of use because it eliminates operations work for customers. The early response to the preview has been tremendous. Thousands of customers have been excited to put Amazon Redshift Serverless through its paces over the past few months, and we’re excited about making it generally available in the near future.

    Amazon Redshift architecture diagram
    The Amazon Redshift architecture as presented in the research paper.

    Ippokratis: A fourth area of focus in the paper is on integration with other AWS services, and the AWS ecosystem. Integration is another area where customer behavior has evolved from traditional BI use cases. Today, cloud data warehouses are a central hub with tight integration with a broader set of AWS services. We provided the ability for customers to join data from the warehouse with the data lake. Then customers said they needed access to high-velocity business data in operational databases like Aurora and RDS, so we provided access to these operational data stores. Then we added support for streams, as well as integration with SageMaker and Lambda so customers can run machine learning training and inference without moving their data, and do generic compute. As a result, we’ve converted the traditional BI system into a well-integrated set of AWS services.

    Rahul: One big area of integration has been with our machine-learning ecosystem. With Redshift ML we have enabled anyone who knows SQL to take advantage of all of our machine-learning innovation. We built the ability to create a model from the SQL prompt; Redshift exports the data to Amazon S3 and calls Amazon SageMaker, which uses automated machine learning to build the most appropriate model for generating predictions on the data.

    This model is compiled efficiently and brought back into the data warehouse, where customers can run very high-performance parallel inference with no additional compute and no extra cost. The beauty of this integration is that every innovation we make within SageMaker means that Redshift ML gets better as well. This is just another means by which customers benefit from us connecting our services together.
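    Here is a hedged sketch of that SQL-only workflow, again with placeholder names, bucket, and IAM role: CREATE MODEL hands the training query's result to SageMaker behind the scenes, and the returned function is later called like any other SQL function for in-database inference.

    ```python
    import redshift_connector

    conn = redshift_connector.connect(
        host="my-cluster.example.us-east-1.redshift.amazonaws.com",
        database="dev",
        user="awsuser",
        password="REPLACE_ME",
    )
    conn.autocommit = True  # CREATE MODEL kicks off asynchronous training
    cur = conn.cursor()

    # Train a model from SQL: Redshift exports the SELECT's result set to S3 and
    # asks SageMaker to build an appropriate model; training runs asynchronously.
    cur.execute("""
        CREATE MODEL customer_churn
        FROM (SELECT age, tenure_months, monthly_spend, churned
              FROM customer_activity)
        TARGET churned
        FUNCTION predict_churn
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftMLRole'
        SETTINGS (S3_BUCKET 'my-redshift-ml-staging')
    """)

    # Once training completes, the compiled model is exposed as a SQL function
    # that runs in parallel on the cluster's own compute.
    cur.execute("""
        SELECT customer_id,
               predict_churn(age, tenure_months, monthly_spend) AS churn_risk
        FROM current_customers
    """)
    print(cur.fetchmany(10))
    ```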


    Another big area for integration has been data sharing. Once we separated the storage and compute layers with RA3 instances, we could enable data sharing, giving customers the ability to share data with clusters in the same account, in other accounts, or across regions. This allows us to separate consumers from producers of data, which enables things like modern data mesh architectures. Customers can share data without copying it, and the shared data is transactionally consistent across accounts.

    For example, users within a data-science organization can securely work from the shared data, as can users within the reporting or marketing organization. We’ve also integrated data sharing with AWS Data Exchange, so now customers can search for — and subscribe to — third-party datasets that are live, up to date, and can be queried immediately in Redshift. This has been another game changer from the perspective of setting data free: it enables data monetization for third-party providers, and secure, live data access and licensing for subscribers, for high-performance analytics within and across organizations. The fact that Redshift is part of an incredibly rich data ecosystem is a huge win for customers, and in keeping with customers’ desire to make data more pervasively available across the company.
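    A minimal sketch of the same-account flow (share and table names, namespace GUIDs, endpoints, and credentials are placeholders): the producer cluster publishes a datashare and grants usage to a consumer namespace, and the consumer mounts it as a database and queries it live, with no copies made.

    ```python
    import redshift_connector

    # On the producer cluster: publish a schema and table as a datashare and
    # grant usage to the consumer's namespace (same-account sharing for brevity).
    producer = redshift_connector.connect(
        host="producer-cluster.example.us-east-1.redshift.amazonaws.com",
        database="sales",
        user="awsuser",
        password="REPLACE_ME",
    )
    producer.autocommit = True
    pcur = producer.cursor()
    for stmt in (
        "CREATE DATASHARE sales_share",
        "ALTER DATASHARE sales_share ADD SCHEMA public",
        "ALTER DATASHARE sales_share ADD TABLE public.daily_sales",
        "GRANT USAGE ON DATASHARE sales_share TO NAMESPACE "
        "'11111111-2222-3333-4444-555555555555'",
    ):
        pcur.execute(stmt)

    # On the consumer cluster: mount the share as a database and query it live;
    # no data is copied, and reads are transactionally consistent.
    consumer = redshift_connector.connect(
        host="consumer-cluster.example.us-east-1.redshift.amazonaws.com",
        database="analytics",
        user="awsuser",
        password="REPLACE_ME",
    )
    consumer.autocommit = True
    ccur = consumer.cursor()
    ccur.execute("""
        CREATE DATABASE sales_remote
        FROM DATASHARE sales_share
        OF NAMESPACE 'aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee'
    """)
    ccur.execute("SELECT COUNT(*) FROM sales_remote.public.daily_sales")
    print(ccur.fetchone())
    ```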

  3. Q. 

    You indicate in the paper that Redshift innovation is continuing at an accelerated pace.  How do you see the cloud data warehouse segment evolving – and more specifically Redshift – over the next several years?

    A. 

    Rahul: A few things will continue to be true as we head into the future. Customers will be generating ever-greater amounts of data, and they’re going to want to analyze that data more cost effectively. Data volumes are growing exponentially, but obviously customers don't want their costs growing exponentially. This requires that we continue to innovate, and find new levels of performance to ensure that the cost of processing a unit of data continues to go down.

    We’ll continue innovating in software, in hardware, in silicon, and in using machine learning to make sure we deliver on that promise for customers. We’ve delivered on that promise for the past 10 years, and we’ll focus on making sure we deliver on that promise into the future.


    Also, customers are always going to want better availability, they’re always going to want their data to be secure, and they’re always going to want more integrations with more data sources, and we intend to continue to deliver on all of those. What will stay the same is our ability to offer best-in-segment price performance and capabilities, and best-in-segment integration and security, because those will always deliver value for customers.

    Ippokratis: It has been an incredible journey; we have been rebuilding the plane as we’ve been flying it with customers onboard, and this would not have happened without the support of AWS leadership, but most importantly the tremendous engineers, managers, and product people who have worked on the team.

    As we did in the paper, I want to recognize the contributions of Nate Binkert and Britt Johnson, who have passed, but whose words of wisdom continue to guide us. We’ve taken data warehousing, what we learned from books in school (Ippokratis earned his PhD in electrical and computer engineering from Carnegie Mellon University) and brought it to the cloud. In the process, we’ve been able to innovate, and write new pages in the book. I’m very proud of what the team has accomplished, but equally as excited about all the things we’re going to do to improve Redshift in the future.
