Near-linear scaling of gigantic-model training on AWS

A new distributed-training library achieves near-linear efficiency in scaling from tens to hundreds of GPUs.

State-of-the-art language models have billions of parameters. Training these models within a manageable time requires distributing the workload across a large computing cluster. Ideally, training time would decrease linearly as the cluster size scales up. However, linear scaling is difficult to achieve because the communication required to coordinate the work of the cluster nodes eats into the gains from parallelization.

Recently, we put some effort into optimizing the communication efficiency of Microsoft’s DeepSpeed distributed-training library, dramatically improving performance for up to 64 GPUs. However, when we scale from tens of GPUs to hundreds in the public cloud, communication overhead again begins to overwhelm the efficiency gains.

In a paper that we'll present in 2023 at the International Conference on Very Large Data Bases (VLDB), we propose a method to make model training scale efficiently on hundreds of GPUs in the cloud. We call this method MiCS, because it minimizes communication scale to bring down communication overhead.

Specifically, where existing distributed-training frameworks such as DeepSpeed and FairScale divide a model state across all GPUs, MiCS makes multiple replicas of the model state and partitions each replica within a subset of GPUs. Depending on the model size, a replica may fit on a single computing node — a single machine with high-speed connections between its GPUs — or on multiple nodes.

Thus, in MiCS, frequent communication operations, like parameter gathering, are restricted to a subset of GPUs. In this way, when we scale a cluster up — by adding new replicas across new nodes — the communication latency of frequent communication operations remains fixed, rather than growing with the size of the cluster.
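
As a back-of-the-envelope illustration (not MiCS code), the sketch below models a ring-based collective, which takes p - 1 sequential transfer steps for p participants; the eight-GPU partition-group size is an assumption chosen for the example. Partitioning across all GPUs makes the step count grow with the cluster, while a fixed-size partition group pins it.

```python
# Back-of-the-envelope model, not MiCS source code: a ring-based collective
# over p participants takes p - 1 sequential transfer steps.

def ring_steps(p: int) -> int:
    return p - 1

GROUP_SIZE = 8  # hypothetical fixed partition-group size

for cluster in (64, 128, 256, 512):
    print(f"{cluster:>3} GPUs | partition across all GPUs: {ring_steps(cluster):>3} steps"
          f" | partition within one group: {ring_steps(GROUP_SIZE)} steps")
```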

We also reduce the data volume transmitted between nodes in the event that a copy of the model state won’t fit in a single node. Lastly, MiCS includes a gradient synchronization schedule that amortizes expensive gradient synchronization among all workers.

Our experimental results show significant improvements in throughput and scaling efficiency on different-sized BERT models evaluated on clusters of p3dn.24xlarge instances. MiCS is able to achieve near-linear scalability (denoted by the rectangular frames in the figure below) and provides up to 2.82 times the throughput of the second and third stages of the three-stage zero-redundancy optimizer, or ZeRO, the communication management method built into DeepSpeed v0.5.6.

We have also compared MiCS with our earlier optimizations of ZeRO’s third stage (see figure below), demonstrating improvements even at the lower GPU counts that we investigated previously. We report all these findings in greater detail in a preprint paper on the arXiv.

Figure: A comparison of MiCS and our earlier optimizations of DeepSpeed ZeRO’s third stage.

AWS P4d instances provide up to 400 Gbps of networking bandwidth for high-performance computing. Unfortunately, a distributed-training system may not be able to utilize that bandwidth fully because of communication overhead, especially latency, which increases as more GPUs are added to the cluster.

We have deployed MiCS to train proprietary models with up to 175 billion parameters on p4d.24xlarge (40GB A100) and p4de.24xlarge (80GB A100) instances. When training a 175-billion-parameter model with a sequence length of 2,048 on 16 p4de.24xlarge instances, we are able to achieve 169 teraflops (54.2% of the theoretical peak) on each GPU. When we train a 100-billion-parameter model on 64 p4d.24xlarge instances (512 A100 GPUs), MiCS maintains over 170 teraflops per GPU (54.5% of the theoretical peak).

When the size of the cluster is scaled from 128 GPUs to 512 GPUs, MiCS achieves 99.4% of the linear-scaling efficiency (as measured by the “weak scaling” metric). In contrast, DeepSpeed ZeRO’s third stage achieves only 72% weak-scaling efficiency and saturates at 62 teraflops per GPU (19.9% of the theoretical peak).

Scale-aware model partitioning

By default, DeepSpeed partitions model states across all devices, a strategy that lowers the memory consumption on each GPU in the cluster but incurs large communication overhead in training. More importantly, the overhead grows with the size of the cluster, causing scaling efficiency to drop significantly at large cluster sizes.

Instead of partitioning model states across all GPUs, MiCS divides the GPUs in the cluster into multiple groups and partitions model states within each group. We call these groups partition groups. Each group holds a complete replica of the model states. The following figure gives an example of partition groups, each consisting of two consecutive GPUs. The GPUs holding the same part of the model state form another kind of group, a replication group.

Figure: The relationship between partition groups and replication groups in MiCS.

Partitioning model states within each partition group restricts the most frequent communication operations, parameter gathering and gradient synchronization, to a fixed number of GPUs. This strategy effectively controls the communication overhead, keeping it from growing with the size of the cluster.
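
As a rough sketch of how such groups could be constructed with PyTorch’s torch.distributed API (assuming the process group is already initialized and the world size is a multiple of the partition-group size; the actual MiCS implementation may differ):

```python
import torch.distributed as dist

PARTITION_SIZE = 2  # GPUs per partition group, matching the two-GPU example above

def build_groups(world_size: int, rank: int):
    """Return this rank's partition group and replication group."""
    partition_group = replication_group = None
    # Partition groups: blocks of consecutive ranks, each holding one full replica.
    for start in range(0, world_size, PARTITION_SIZE):
        ranks = list(range(start, start + PARTITION_SIZE))
        group = dist.new_group(ranks)  # every rank must create every group
        if rank in ranks:
            partition_group = group
    # Replication groups: ranks at the same offset in different partition groups,
    # i.e., the GPUs that hold the same part of the model state.
    for offset in range(PARTITION_SIZE):
        ranks = list(range(offset, world_size, PARTITION_SIZE))
        group = dist.new_group(ranks)
        if rank in ranks:
            replication_group = group
    return partition_group, replication_group
```

Parameter gathering then runs over the partition group, while the gradient synchronization described below runs over the replication group.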

Hierarchical communication strategy

When the memory requirement for a single replica of the model state is larger than the total amount of GPU memory in a single node, we need to store the replica on GPUs spanning multiple nodes. In that case, we have to rely on less-efficient internode communication.

The volume of transmitted data and the latency of a collective communication are determined by the message size and the number of participants. In particular, the communication volume is proportional to (p - 1)/p, where p denotes the number of participants, and if the participants use the standard ring-shaped communication pattern, the latency grows linearly with the number of participants.

The message size cannot be reduced without compromising data integrity, but we can reduce the number of participants in internode communications. This lowers the communication-volume factor to (p - k)/p and reduces the latency by a factor of p/(p/k + k), where k is the number of GPUs on a single node.
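
Plugging in numbers makes the gain concrete. The worked example below assumes k = 8 GPUs per node, as on p4d.24xlarge, and a replica spanning p = 16 GPUs on two nodes:

```python
def volume_factor_flat(p):      # standard collective over all p participants
    return (p - 1) / p

def volume_factor_hier(p, k):   # hierarchical: internode phase excludes local peers
    return (p - k) / p

def latency_speedup(p, k):      # ring latency ~ participants: p vs. p/k + k
    return p / (p / k + k)

p, k = 16, 8
print(volume_factor_flat(p))     # 0.9375 -> (16 - 1)/16
print(volume_factor_hier(p, k))  # 0.5    -> (16 - 8)/16
print(latency_speedup(p, k))     # 1.6x lower latency
```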

Consider the simple example below, involving two nodes with two GPUs each. The standard ring-shaped communication pattern would aggregate data across nodes (left) by passing messages from each GPU to the next, so a single internode communication involves four GPUs.

Figure: MiCS reduces the number of GPUs that participate in any given internode communication.

MiCS, by contrast, executes these internode operations in parallel, so each internode communication involves only two GPUs (right), which exchange only half the information that we want to communicate. Each node then aggregates the internode data locally to assemble the full message. In this case, the communication-volume factor is reduced from ¾ ((4 - 1)/4) to ½ ((4 - 2)/4).
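
In code, the two-phase pattern might look like the sketch below (illustrative only, not the MiCS kernels). It assumes a hypothetical internode_group containing the same-local-rank GPU from every node and an intranode_group containing the GPUs of one node, built in the manner of the earlier snippet:

```python
import torch
import torch.distributed as dist

def hierarchical_all_gather(shard, internode_group, intranode_group):
    # Phase 1: parallel internode all-gathers, each involving only one GPU
    # per node (the same local rank on every node).
    n_inter = dist.get_world_size(group=internode_group)
    inter_parts = [torch.empty_like(shard) for _ in range(n_inter)]
    dist.all_gather(inter_parts, shard, group=internode_group)
    partial = torch.cat(inter_parts)
    # Phase 2: each node aggregates locally over its fast intranode links
    # to assemble the full message.
    n_intra = dist.get_world_size(group=intranode_group)
    full_parts = [torch.empty_like(partial) for _ in range(n_intra)]
    dist.all_gather(full_parts, partial, group=intranode_group)
    # Note: restoring canonical rank order may require a permutation of the
    # gathered shards; omitted here for clarity.
    return torch.cat(full_parts)
```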

Two-hop gradient synchronization

Synchronizing gradients among all workers is an expensive operation, required to keep all workers working on the same model state. During the training of large neural nets, batch size is typically limited by GPU memory. Gradient accumulation is a technique that splits a batch of samples into several microbatches, which are run sequentially in multiple microsteps.
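
In standard PyTorch, gradient accumulation looks like the following generic sketch (not MiCS-specific):

```python
def accumulation_step(model, optimizer, loss_fn, microbatches):
    optimizer.zero_grad()
    for inputs, targets in microbatches:            # one microstep each
        loss = loss_fn(model(inputs), targets) / len(microbatches)
        loss.backward()                             # gradients sum into .grad
    optimizer.step()                                # single update per batch
```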

With MiCS, we can accumulate gradients inside each partition group over multiple microsteps, until the last microbatch is processed. That is, at each microstep, we accumulate the full set of gradients for each model replica inside a subset of GPUs (i.e., a partition group). Then, after the last microbatch is handled, each GPU synchronizes gradients with the other GPUs that hold the same part of the model state.

This allows us to amortize the overhead of synchronizing across replication groups over multiple microsteps. The following figure gives an example of two-hop gradient synchronization for training with four microsteps.

Figure: Two-hop gradient synchronization.
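
A minimal sketch of this schedule appears below. It assumes that the framework’s sharded backward pass already reduces gradients within each partition group at every microstep and that replication_group is built as in the earlier snippet; dist.ReduceOp.AVG requires NCCL 2.10 or later, and a SUM followed by division works otherwise.

```python
import torch.distributed as dist

def two_hop_step(model, optimizer, loss_fn, microbatches, replication_group):
    optimizer.zero_grad()
    for inputs, targets in microbatches:
        loss = loss_fn(model(inputs), targets) / len(microbatches)
        loss.backward()  # hop 1: accumulate within the partition group
    # Hop 2: only after the last microbatch, synchronize each shard across
    # the GPUs that hold the same part of the model state.
    for param in model.parameters():
        if param.grad is not None:
            dist.all_reduce(param.grad, op=dist.ReduceOp.AVG,
                            group=replication_group)
    optimizer.step()
```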

Because of these three techniques, MiCS scales well on large clusters, delivers excellent training throughput, and enables us to achieve new state-of-the-art performance on AWS p4de.24xlarge machines.

We are working to open-source MiCS for public use, in the belief that it will greatly reduce the time and cost of large-model training on the Amazon EC2 platform. Please refer to our preprint for a more detailed explanation of our system and analysis of its performance.

Acknowledgements: Yida Wang, Justin Chiu, Roshan Makhijani, RJ, Stephen Rawls, Xin Jin
