Lessons learned from 10 years of DynamoDB

Prioritizing predictability over efficiency, adapting data partitioning to traffic, and continuously verifying data at rest are a few of the principles that help ensure stability, availability, and efficiency.

Amazon DynamoDB is one of the most popular NoSQL database offerings on the Internet, designed for simplicity, predictability, scalability, and reliability. To celebrate DynamoDB’s 10th anniversary, the DynamoDB team wrote a paper describing lessons we’d learned in the course of expanding a fully managed cloud-based database system to hundreds of thousands of customers. The paper was presented at this year’s USENIX ATC conference.

The paper captures the following lessons that we have learned over the years:

  • Designing systems for predictability over absolute efficiency improves system stability. While components such as caches can improve performance, they should not introduce bimodality, in which the system has two radically different ways of responding to similar requests (e.g., one for cache misses and one for cache hits). Consistent behaviors ensure that the system is always provisioned to handle the unexpected. 
  • Adapting to customers’ traffic patterns to redistribute data improves customer experience. 
  • Continuously verifying data at rest is a reliable way to protect against both hardware failures and software bugs and thus to meet high durability goals. 
  • Maintaining high availability as a system evolves requires careful operational discipline and tooling. Mechanisms such as formal proofs of complex algorithms, game days (chaos and load tests), upgrade/downgrade tests, and deployment safety provide the freedom to adjust and experiment with the code without the fear of compromising correctness. 

Before we dig deeper into these topics, a little terminology. A DynamoDB table is a collection of items (e.g., products), and each item is a collection of attributes (e.g., name, price, category, etc.). Each item is uniquely identified by its primary key. In DynamoDB, tables are typically partitioned, or divided into smaller sub-tables, which are assigned to nodes. A node is a set of dedicated computational resources — a virtual machine — running on a single server in a datacenter.
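To make that concrete, a hypothetical item in a products table might look like the following; the table name, attribute names, and key choice here are illustrative, not drawn from any real schema:

```python
# A hypothetical item in a "Products" table. "ProductId" is the primary key,
# which uniquely identifies the item; the remaining fields are attributes.
item = {
    "ProductId": "prod-12345",   # primary key (assumed name)
    "Name": "Espresso Machine",
    "Price": 129.99,
    "Category": "Kitchen",
}
```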

DynamoDB stores three copies of each partition, in different availability zones. This makes the partition highly available and durable because the availability zones’ storage resources share nothing and are substantially independent. For instance, we wouldn’t assign a partition and one of its copies to nodes that share a power supply, because a power outage would take both of them offline. The three copies of the same partition are known as a replication group, and there is a leader for the group that is responsible for replicating all the customer mutations and serving strongly consistent reads.

The DynamoDB architecture, including a request router, the partition metadata system, and storage nodes in different availability zones (AZs).

Those definitions in hand, let’s turn to our lessons learned.

Predictability over absolute efficiency

DynamoDB employs a lot of metadata caches in order to reduce latency. One of those caches stores the routing metadata for data requests. This cache is deployed on a fleet of thousands of request routers, DynamoDB’s front-end service.

In the original implementation, when the request router received the first request for a table, it downloaded the routing information for the entire table and cached it locally. Since the configuration information about partition replicas rarely changed, the cache hit rate was approximately 99.75%.


This was an impressive hit rate. On the flip side, however, the cache's fallback mechanism was to hit the metadata table directly. When the cache became ineffective, the metadata table had to scale instantaneously from handling 0.25% of requests to 100% of them, a 400-fold increase. Such a sudden surge in traffic could cause the metadata table to fail, triggering cascading failures in other parts of the system. To mitigate such failures, we redesigned our caches to behave predictably.

First, we built an in-memory datastore called MemDS, which significantly reduced request routers’ and other metadata clients’ reliance on local caches. MemDS stores all the routing metadata in a highly compressed manner and replicates it across a fleet of servers. MemDS scales horizontally to handle all incoming requests to DynamoDB.

Second, we deployed a new local cache that avoids the bimodality of the original cache. All requests, even if satisfied by the local cache, are asynchronously sent to the MemDS. This ensures that the MemDS fleet is always serving a constant volume of traffic, regardless of cache hit or miss. The regular exercise of the fallback code helps prevent surprises during fallback.
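As a rough sketch of this constant-work pattern (not DynamoDB's actual code; fetch_from_memds is a hypothetical stand-in for a call to the MemDS fleet), the local cache might behave something like this:

```python
import asyncio

class ConstantWorkCache:
    """Sketch of a cache that does the same amount of work on hits and misses."""

    def __init__(self, fetch_from_memds):
        self._fetch = fetch_from_memds   # hypothetical async call to MemDS
        self._entries = {}

    async def get_route(self, table, key):
        cached = self._entries.get((table, key))

        # Always ask MemDS, even on a cache hit, so the MemDS fleet sees
        # traffic proportional to customer traffic rather than to misses.
        refresh = asyncio.create_task(self._refresh(table, key))

        if cached is not None:
            return cached        # serve the hit at local-cache latency
        return await refresh     # on a miss, wait for MemDS to answer

    async def _refresh(self, table, key):
        route = await self._fetch(table, key)
        self._entries[(table, key)] = route
        return route
```

Serving hits locally keeps latency low, while the unconditional background call keeps MemDS traffic proportional to overall request volume rather than to the miss rate.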

The DynamoDB architecture with MemDS.

Unlike conventional local caches, MemDS sees traffic that is proportional to the customer traffic seen by the service; thus, during cache failures, it does not see a sudden amplification of traffic. Doing constant work removed the need for complex logic to handle edge cases around cache misses and reduced the reliance on local caches, improving system stability.

Reshaping partitioning based on traffic

Partitions offer a way to dynamically scale both the capacity and performance of tables. In the original DynamoDB release, customers explicitly specified the throughput that a table required in terms of read capacity units (RCUs) and write capacity units (WCUs). The original system assigned partitions to nodes based on both available space and computational capacity.


As the demands on a table changed (because it grew in size or because the load increased), partitions could be further split to allow the table to scale elastically. The partition abstraction proved extremely valuable and continues to be central to the design of DynamoDB.

However, the early version of DynamoDB assigned both space and capacity to individual partitions on the basis of size, evenly distributing computational resources across table entries. This led to challenges of “hot partitions” and throughput dilution.

Hot partitions happened because customer workloads were not uniformly distributed and kept hitting a subset of items. Throughput dilution happened when partitions that had been split to handle increased load ended up with so few keys that they could quickly max out their meager allocated capacity.

Our initial response to these challenges was to add bursting and adaptive capacity (along with other features such as split for consumption) to DynamoDB. This line of work also led to the launch of on-demand tables.

Bursting is a way to absorb temporary spikes in workloads at the partition level. It’s based on the observation that not all partitions hosted by a storage node use their allocated throughput simultaneously.


The idea is to let applications tap into unused capacity at a partition level on a best-effort basis to absorb short-lived spikes. DynamoDB still maintains workload isolation by ensuring that a partition can burst only if there is unused throughput at the node level.
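A minimal sketch of that admission check, assuming a simple per-window accounting of allocated and consumed throughput (the class and field names are illustrative, not DynamoDB internals):

```python
class StorageNode:
    """Illustrative best-effort bursting check at the node level."""

    def __init__(self, node_capacity, allocations):
        self.node_capacity = node_capacity            # total throughput on the node
        self.allocations = allocations                # partition_id -> allocated capacity
        self.consumed = {p: 0 for p in allocations}   # usage in the current window

    def admit(self, partition_id, cost=1):
        allocated = self.allocations[partition_id]
        used = self.consumed[partition_id]

        # Within the partition's own allocation: always admit.
        if used + cost <= allocated:
            self.consumed[partition_id] += cost
            return True

        # Burst: admit only if the node as a whole still has unused
        # throughput, which preserves isolation between co-located partitions.
        if sum(self.consumed.values()) + cost <= self.node_capacity:
            self.consumed[partition_id] += cost
            return True

        return False  # throttle the request
```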

DynamoDB also launched adaptive capacity to handle long-lived spikes that cannot be absorbed by the burst capacity. Adaptive capacity monitors traffic patterns and repartitions tables so that heavily accessed items reside on different nodes.

Both bursting and adaptive capacity had limitations, however. Bursting was helpful only for short-lived spikes in traffic, and it was dependent on nodes’ having enough throughput to support it. Adaptive capacity was reactive and kicked in only after transmission rates had been throttled down to avoid overloads.

To address these limitations, the DynamoDB team replaced adaptive capacity with global admission control (GAC). GAC builds on the idea of token buckets, in which bandwidth is allocated to network nodes as tokens, and the nodes “cash in” tokens in order to transmit data. Each request router maintains a local token bucket and communicates with GAC to replenish tokens at regular intervals (on the order of every few seconds). For an extra layer of defense, DynamoDB also uses token buckets at the partition level.
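In sketch form, a request router's local bucket, replenished through a hypothetical request_tokens_from_gac call (the real protocol also reports per-table consumption to GAC, which is omitted here), might look like this:

```python
import time

class LocalTokenBucket:
    """Sketch of a request router's local bucket replenished from GAC."""

    def __init__(self, request_tokens_from_gac, refill_interval=2.0):
        self._ask_gac = request_tokens_from_gac   # hypothetical call to GAC
        self._refill_interval = refill_interval   # on the order of seconds
        self._tokens = 0.0
        self._last_refill = float("-inf")

    def try_consume(self, cost=1.0):
        now = time.monotonic()
        # Every few seconds, ask GAC for a fresh allotment of tokens.
        if now - self._last_refill >= self._refill_interval:
            self._tokens += self._ask_gac()
            self._last_refill = now

        if self._tokens >= cost:
            self._tokens -= cost
            return True    # admit the request
        return False       # signal the client to back off (throttling)
```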

Continuous verification 

To provide durability and crash recovery, DynamoDB uses write-ahead logs, which record data writes before they are applied. In the event of a crash, DynamoDB can use the write-ahead logs to recover data writes that would otherwise be lost, bringing partitions up to date.

Write-ahead logs are stored in all three replicas of a partition. For higher durability, the write-ahead logs are periodically archived to S3, an object store that is designed for 99.999999999% (11 nines) durability. Each replica contains the most recent write-ahead logs, which are usually waiting to be archived. The unarchived logs are typically a few hundred megabytes in size.

Healing a storage replica by copying the B-tree can take several minutes, while adding a log replica, which takes only a few seconds, ensures that there is no impact on durability.

DynamoDB continuously verifies data at rest. Our goal is to detect any silent data errors or “bit rot” — bit errors caused by degradation of the storage medium. An example of continuous verification is the scrub process.

The scrub process verifies two things: that all three copies in a replication group have the same data and that the live replicas match a reference replica built offline using the archived write-ahead-log entries.

The verification is done by computing the checksum of the live replica and matching that with a snapshot of the reference replica. A similar technique is used to verify replicas of global tables. Over the years, we have learned that continuous verification of data at rest is the most reliable method of protecting against hardware failures, silent data corruption, and even software bugs.
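In sketch form, and assuming each replica can be summarized as a dictionary of key-value items, the two scrub checks might be expressed like this (illustrative only, not DynamoDB's actual verification code):

```python
import hashlib

def replica_checksum(items):
    """Checksum a replica's contents in primary-key order (illustrative)."""
    digest = hashlib.sha256()
    for key, value in sorted(items.items()):
        digest.update(repr((key, value)).encode())
    return digest.hexdigest()

def scrub(live_replicas, reference_replica):
    """Check that all live copies agree and that they match the reference
    replica rebuilt offline from archived write-ahead-log entries."""
    checksums = {replica_checksum(r) for r in live_replicas}
    if len(checksums) != 1:
        raise RuntimeError("live replicas diverge: possible silent corruption")
    if checksums.pop() != replica_checksum(reference_replica):
        raise RuntimeError("live replicas do not match the archived history")
```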

Availability

DynamoDB regularly tests its resilience to node, rack, and availability zone (AZ) failures. For example, to test the availability and durability of the overall service, DynamoDB performs power-off tests. Using realistic simulated traffic, a job scheduler powers off random nodes. At the end of all the power-off tests, the test tools verify that the data stored in the database is logically valid and not corrupted.


The first point about availability is that it needs to be measurable. DynamoDB is designed for 99.999% availability for global tables and 99.99% availability for regional tables. To ensure that these goals are being met, DynamoDB continuously monitors availability at the service and table levels. The tracked availability data is used to estimate customer-perceived availability trends and trigger alarms if the number of errors that customers see crosses a certain threshold.

These alarms are called customer-facing alarms (CFAs). The goal of these alarms is to report any availability-related problems and proactively mitigate them, either automatically or through operator intervention. The key point to note here is that availability is measured not only on the server side but also on the client side.
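As a toy illustration of the table-level check, using the availability targets mentioned above (the monitoring window, counters, and function names are assumptions):

```python
# Design targets from the text: 99.999% for global tables, 99.99% for regional.
TARGETS = {"global": 0.99999, "regional": 0.9999}

def availability(successes, errors):
    total = successes + errors
    return 1.0 if total == 0 else successes / total

def should_alarm(table_type, successes, errors):
    """Raise a customer-facing alarm when measured availability for a
    monitoring window drops below the table's design target (sketch only)."""
    return availability(successes, errors) < TARGETS[table_type]
```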

We also use two sets of clients to measure the user-perceived availability. The first set of clients is internal Amazon services using DynamoDB as the data store. These services share the availability metrics for DynamoDB API calls as observed by their software.

The second set of clients is our DynamoDB canary applications. These applications are run from every AZ in the region, and they talk to DynamoDB through every public endpoint. Real application traffic allows us to reason about DynamoDB availability and latencies as seen by our customers. The canary applications offer a good representation of what our customers might be experiencing both long and short term.

The second point is that read and write availability need to be handled differently. A partition’s write availability depends on the health of its leader and of its write quorum, meaning two out of the three replicas from different AZs. A partition remains available as long as there are enough healthy replicas for a write quorum and a leader.


In a large service, hardware failures such as memory and disk failures are common. When a node fails, all replication groups hosted on the node are down to two copies. The process of healing a storage replica can take several minutes because the repair process involves copying the B-tree — a data structure that maps partitions to storage locations — and write-ahead logs.

Upon detecting an unhealthy storage replica, the leader of a replication group adds a log replica to ensure there is no impact on durability. Adding a log replica takes only a few seconds, because the system has to copy only the most recent write-ahead logs from a healthy replica; reconstructing the more memory-intensive B-tree can wait. Quick healing with log replicas thus preserves the durability of the most recent writes, and because it is the fastest way to restore the group's write quorum, it also minimizes disruption to write availability. Throughout this process, the leader replica continues to serve strongly consistent reads.
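Schematically, the healing flow might look like the sketch below; the classes and methods are purely illustrative, not DynamoDB's actual interfaces:

```python
from dataclasses import dataclass, field

@dataclass
class Replica:
    healthy: bool
    kind: str = "storage"   # "storage" or "log"

@dataclass
class ReplicationGroup:
    replicas: list = field(default_factory=list)

    def heal(self):
        """Sketch of the leader's healing flow described above."""
        for replica in list(self.replicas):
            if replica.healthy:
                continue
            # Step 1: add a log replica in seconds by copying only the most
            # recent, unarchived write-ahead logs from a healthy peer; this
            # restores the write quorum almost immediately.
            self.replicas.append(Replica(healthy=True, kind="log"))
            # Step 2: rebuilding a full storage replica (copying the B-tree)
            # can take minutes, so it happens later, off the critical path.
            self._schedule_storage_rebuild(replica)

    def _schedule_storage_rebuild(self, failed_replica):
        # Placeholder for the slower background repair of a storage replica.
        pass
```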

Introducing log replicas was a big change to the system, but the formally proven Paxos consensus protocol gave us the confidence to safely tweak and experiment with the system to achieve higher availability. We have been able to run millions of Paxos groups in a region with log replicas. Eventually consistent reads can be served by any of the replicas. If a leader fails, the other replicas detect its failure and elect a new leader, minimizing disruptions to the availability of strongly consistent reads.
