Accelerating Parallel Training of Neural Nets

Earlier this year, we reported a speech recognition system trained on a million hours of data, a feat made possible by semi-supervised learning, in which training data is annotated by machines rather than by people.

These sorts of massive machine learning projects are becoming more common, and they require distributing the training process across multiple processors. Otherwise, training becomes too time-consuming.

In theory, doubling the number of processors should halve the training time, but in practice, it doesn’t work that way: the need to synchronize distributed computations results in some inevitable communication overhead. Parallelization schemes always involve some trade-off between training efficiency and the accuracy of the resulting model.

In a paper we’re presenting at this year’s Interspeech, we describe a new approach to parallelizing the training of neural networks that combines two state-of-the-art methods and improves on both. One of the existing methods prioritizes model accuracy, and the other prioritizes training efficiency. In tests that involved training an acoustic model — a vital component of a speech recognition system — our hybrid method proved both more accurate than the accuracy-prioritizing method and more efficient than the efficiency-prioritizing method.

The accuracy-prioritizing method was proposed in 2015 by Amazon senior principal scientist Nikko Strom, who’s also a coauthor on the new paper. At the time, the method — known as synchronous GTC, for gradient threshold compression — was a breakthrough, enabling parallelization to as many as 32 processors with little loss in model accuracy. Our experiments indicate, however, that beyond 32 processors, communications overhead can make GTC much less efficient.

When we needed to increase the processor count to handle a million hours of data, we switched to a different parallelization method, called BMUF, for blockwise model update filtering. BMUF scales better than GTC, but at the price of accuracy. In experiments we report in our new paper, at 64 processors, BMUF triples the training rate achievable with GTC — but it also triples the loss in accuracy.

In our new work, we split the distributed processors into groups, and each group performs GTC within itself. Every so often, however, the groups share their latest models using BMUF, which re-synchronizes models across all processors.

Our new method splits distributed processors into groups, and within each group, the processors use the highly accurate GTC method to synchronize their models. At regular intervals, designated representatives from all the groups share their models and update their own local models accordingly. Finally, each representative broadcasts its updated model to the rest of its group.
Animation by Nick Little

A neural network consists of thousands or even millions of very simple processors, often called neurons. Each neuron receives data from several other neurons, processes it, and passes the results to still other neurons. Connections between neurons have associated weights, which determine how big a role the outputs of one neuron play in the computations performed by the next.
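For readers who want that picture in code, a single artificial neuron's computation is just a weighted sum of its inputs passed through a simple nonlinearity. This is a generic illustration, not a detail of our acoustic model; the numbers and the ReLU activation are arbitrary choices.

```python
# One artificial neuron: weight each incoming value, sum, apply a nonlinearity.
# The weights determine how strongly each input influences this neuron's output.
import numpy as np

def neuron_output(inputs, weights, bias=0.0):
    return np.maximum(0.0, np.dot(inputs, weights) + bias)   # ReLU activation

print(neuron_output(np.array([1.0, -2.0, 0.5]),
                    np.array([0.5, 0.25, 2.0])))   # -> 1.0
```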

During training, a network will process a small amount of data, known as a minibatch, evaluate the results, and update its weights accordingly. Then it will process the next minibatch, and so on.

The natural way to parallelize this procedure is to give each processor its own copy of the initial network weights and its own set of minibatches. After each minibatch, each processor will broadcast the updates it needs to make to its weights (technically, “gradients”) to all the other processors. A given processor simply combines all the updates it receives and applies them to its own, local copy of the network. Then it processes its next minibatch.
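To make the procedure concrete, here is a minimal, self-contained sketch of this fully synchronous scheme on a toy least-squares problem. The worker simulation, learning rate, and problem are our own illustration, not the paper's setup; the point is only the structure of the exchange.

```python
# Minimal sketch of fully synchronous data-parallel SGD on a toy problem.
# Each "worker" computes a gradient on its own minibatch; the gradients are
# combined (an all-reduce) and every worker applies the same update, so all
# copies of the model stay identical.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -3.0, 0.5])   # target weights for the toy problem

def minibatch_gradient(w, batch_size=32):
    """Mean-squared-error gradient on one freshly sampled minibatch."""
    X = rng.normal(size=(batch_size, w.size))
    y = X @ true_w
    return (2.0 / batch_size) * X.T @ (X @ w - y)

def synchronous_step(w, num_workers=4, lr=0.05):
    # Every worker computes a gradient on its own minibatch...
    grads = [minibatch_gradient(w) for _ in range(num_workers)]
    # ...then broadcasts it; each worker applies the same averaged update.
    return w - lr * np.mean(grads, axis=0)

w = np.zeros(3)
for _ in range(200):
    w = synchronous_step(w)
print(w)  # converges toward true_w, as if trained on one processor
```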

This method is equivalent to training a model on a single processor. But broadcasting a full set of weight updates after each minibatch is so bandwidth intensive that it eats up the time savings from parallelization. GTC modifies this procedure in a few crucial ways.

First, it exploits the fact that the weight updates that follow the processing of a single minibatch tend to make major modifications to only a few neural connections. GTC requires the establishment of a threshold — the T in GTC — below which weight updates will not be broadcast, saving bandwidth. (Updates that fall below the threshold are, however, stored locally, where they may still factor into later computations.)

The weight threshold, denoted by the Greek letter tau, is determined empirically and varies from application to application. In our experiments, the optimal setting of tau turned out to be 8.

Next, when broadcasting its weight updates, each processor sends only one of two values, tau or -tau. Those two values can be represented by a single bit of information, compressing (the C in GTC) the update message. (Like updates that fall below the threshold, the residuals of updates above the threshold are stored locally and factor into later computations.)
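As a rough illustration (our own simplification, not the paper's implementation), the per-worker compression step might look something like the sketch below, with tau fixed at the value reported above. The function names and the exact residual bookkeeping are assumptions made for clarity.

```python
# A simplified sketch of gradient threshold compression (GTC). Entries of the
# accumulated update whose magnitude reaches the threshold tau are sent as
# +tau or -tau (one bit of sign information); everything else stays in a
# local residual and is folded into later minibatches.
import numpy as np

TAU = 8.0  # the threshold reported in the text; tuned empirically per task

def compress_update(update, residual, tau=TAU):
    """Return the quantized update to broadcast and the new local residual."""
    accumulated = update + residual            # include previously withheld parts
    send = tau * np.sign(accumulated) * (np.abs(accumulated) >= tau)
    new_residual = accumulated - send          # kept locally for later steps
    return send, new_residual

# Tiny demonstration with hypothetical update values:
send, residual = compress_update(np.array([0.3, -12.0, 5.0, 9.5]),
                                 residual=np.zeros(4))
print(send)      # [ 0. -8.  0.  8.]   only over-threshold entries are sent
print(residual)  # [ 0.3 -4.  5.  1.5] the rest waits for later minibatches
```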

These two modifications do sacrifice some accuracy. That’s why, in our experiments, we evaluate the relative increase in error rate. All three methods we compare — GTC, BMUF, and our GTC-BMUF hybrid — generally increase the error rate relative to single-processor training. The question is by how much, and what we gain in efficiency in exchange.

With BMUF, each processor continually updates its own local copy of the neural model. After a fixed number of minibatches — say, 50 — it broadcasts its model, and all the processors update their models accordingly. This drastically cuts down on bandwidth consumption, but it decreases model accuracy.
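The core of that procedure is periodic model averaging, as in the simplified sketch below (the full method also filters the averaged update, for example with block-level momentum, which we omit here; the toy problem and parameters are again our own).

```python
# A simplified sketch of BMUF-style training: each worker trains on its own
# minibatches with no communication, and only every `num_minibatches` steps
# are the models exchanged and combined. (The full method also filters the
# averaged update; this sketch just averages.)
import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -3.0, 0.5])

def local_sgd(w, num_minibatches=50, lr=0.05, batch_size=32):
    """One worker's independent training between synchronization points."""
    w = w.copy()
    for _ in range(num_minibatches):
        X = rng.normal(size=(batch_size, w.size))
        y = X @ true_w
        w -= lr * (2.0 / batch_size) * X.T @ (X @ w - y)
    return w

workers = [np.zeros(3) for _ in range(4)]
for _ in range(5):                                  # five synchronization blocks
    workers = [local_sgd(w) for w in workers]       # cheap: no per-minibatch traffic
    averaged = np.mean(workers, axis=0)             # one model broadcast per block
    workers = [averaged.copy() for _ in workers]    # everyone resumes from the average
print(workers[0])  # all workers now hold the same model, close to true_w
```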

By combining these approaches — GTC locally, BMUF globally — we get the best of both worlds. On the acoustic-modeling task we used for testing, our hybrid is slightly less efficient than BMUF at 32 cores, offering a 31-fold speedup relative to single-core processing; BMUF’s speedup is actually superlinear, at 36.6-fold. But our method reduces the error rate, while BMUF increases it by 3.5%. GTC, with a 26-fold speedup, increases error rate by 1.4%.

At 64 cores, GTC offers the best accuracy, with an error rate increase of 2.8%, versus 3.1% for our method and 8.9% for BMUF. But GTC’s efficiency actually falls, to 17-fold, versus 57-fold for BMUF and 42-fold for our method.

At 128 cores, our method is the undisputed champion, offering a 97-fold speedup and a 4.7% error rate increase, versus 80-fold/9.6% for BMUF and 11-fold/15.6% for GTC.
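Putting the two pieces together, the hybrid's overall control flow looks roughly like the toy sketch below: GTC-style compressed synchronization inside each group after every minibatch, and a BMUF-style model average across group representatives at a longer interval. The group sizes, threshold, learning rate, and synchronization interval here are illustrative choices for a tiny problem, not the settings from our experiments.

```python
# A toy end-to-end sketch of the hybrid: frequent GTC-style thresholded
# updates within each group, and infrequent BMUF-style model averaging
# across groups. All hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(2)
true_w = np.array([2.0, -3.0, 0.5])
TAU = 0.05   # toy threshold chosen for this tiny problem, not the paper's value

def minibatch_grad(w, batch_size=32):
    X = rng.normal(size=(batch_size, w.size))
    y = X @ true_w
    return (2.0 / batch_size) * X.T @ (X @ w - y)

def gtc_step(weights, residuals, lr=0.005):
    """One minibatch per worker; the group exchanges only +/-tau updates."""
    sends = []
    for i, w in enumerate(weights):
        acc = lr * minibatch_grad(w) + residuals[i]
        send = TAU * np.sign(acc) * (np.abs(acc) >= TAU)
        residuals[i] = acc - send                 # withheld parts stay local
        sends.append(send)
    combined = np.sum(sends, axis=0)              # each worker applies all updates
    return [w - combined for w in weights], residuals

num_groups, group_size = 2, 4
groups = [[np.zeros(3) for _ in range(group_size)] for _ in range(num_groups)]
residuals = [[np.zeros(3) for _ in range(group_size)] for _ in range(num_groups)]

for block in range(20):
    for _ in range(25):                           # GTC inside each group
        for g in range(num_groups):
            groups[g], residuals[g] = gtc_step(groups[g], residuals[g])
    representatives = [group[0] for group in groups]
    averaged = np.mean(representatives, axis=0)   # BMUF-style exchange across groups
    groups = [[averaged.copy() for _ in group] for group in groups]

# Every processor ends up with the same model, which has moved toward true_w
# (up to the quantization granularity of this toy example).
print(groups[0][0])
```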

About the Author
Pranav Ladkat is a research engineer in Alexa AI’s Machine Learning Platform Services group.
