Amazon takes top three spots in Audio Anomaly Detection Challenge

Team from Amazon Web Services also wins the best-paper award at the Workshop on Detection and Classification of Acoustic Scenes and Events.

This week at Amazon Web Services’ re:Invent 2020 conference, Amazon announced Amazon Monitron, an end-to-end machine-monitoring system composed of sensors, a gateway, and a machine learning model that detects anomalies in vibration (structure-borne sound) or temperature and predicts when equipment may require maintenance. 

Machine condition monitoring was also the topic of a challenge at the November Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE 2020), in which Amazon took the top three spots out of 117 submissions.

The challenge was to determine whether the sounds emitted by a machine — such as a fan, pump, or valve — were normal or anomalous. Forty academic and industry teams submitted entries, an average of almost three submissions per team.

In a pair of papers (paper 1|paper 2) we presented at the workshop, we describe the two different neural-network-based approaches we took in our submissions to the challenge. The first of those papers won the workshop’s best-paper award.

Spectrograms of audio clips recorded from a normal valve (top) and a faulty valve (bottom). The magnified details show the difference between the normal signal and the anomalous signal.

Auditory machine condition monitoring has been common in industrial settings for several decades. Seasoned maintenance experts can identify problems in the machines they monitor just by listening to them and realizing that “something doesn’t sound right.” But by the time anomalies are audible to the human ear, the underlying problems may already be well advanced.

With the advent of machine learning and big data, there has been a lot of interest in teaching machines to detect anomalies sooner, to help predict when preventative maintenance might be necessary.

Data, labels, and rare failures

In general, anomaly detection is the problem of identifying abnormal inputs in a stream of inputs. Depending on the available data, there are three ways to train anomaly detection systems: (i) fully supervised, in which labeled examples of both normal and abnormal data are presented; (ii) semi-supervised, in which only normal data is presented; and (iii) unsupervised, in which the data set has no labels, and outliers must be identified automatically.
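
As a concrete illustration of the semi-supervised setting, the sketch below fits a simple Gaussian model to features of normal clips only and flags new clips whose Mahalanobis distance exceeds a threshold calibrated on that normal data. This is a minimal stand-in, not the method used in our submissions, and the feature vectors here are placeholders.

```python
# Minimal stand-in for the semi-supervised setting (illustrative only; not the
# method used in our submissions): fit a Gaussian to features of normal clips and
# flag new clips whose Mahalanobis distance exceeds a threshold chosen from the
# normal data alone.
import numpy as np

rng = np.random.default_rng(0)
normal_feats = rng.normal(size=(500, 16))        # placeholder for features of normal clips

mu = normal_feats.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal_feats, rowvar=False) + 1e-6 * np.eye(16))

def anomaly_score(x):
    """Mahalanobis distance to the fitted normal model; higher = more anomalous."""
    d = x - mu
    return np.sqrt(np.einsum("...i,ij,...j->...", d, cov_inv, d))

threshold = np.percentile(anomaly_score(normal_feats), 99)   # calibrated on normal data only
new_clips = rng.normal(loc=3.0, size=(5, 16))                # placeholder "incoming" features
is_anomalous = anomaly_score(new_clips) > threshold
```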

Anomalies can manifest in different ways: as slow concept drift, for instance, or as sudden, instantaneous outliers. The data is also typically highly imbalanced, with far more “normal” examples than “abnormal” ones.

Machines worth monitoring carefully — especially those that are critical or expensive — are usually also well maintained. This means that they rarely fail, and gathering anomalous data from them is challenging and may take many years and lots of effort.

Additionally, machines operate in different modes and under variable load or performance conditions, and their characteristics can change over time as they age and approach steady state. Some industries’ operational profiles have seasonal variations as well. 

All of these factors make anomaly detection challenging in the industrial setting. When implementing an anomaly detection system, one has to depend mostly on “normal” data, gathering additional data over time and eliciting user feedback. 

If accurate physical models of machines are available, it may be possible to simulate failures and generate “abnormal” data that way. One can also generate anomalous data by inducing hardware failures in the lab. But one has to be prepared to work with minimal data when a machine is instrumented for the first time (the so-called cold-start problem).

Anomaly detection and our two neural approaches

The papers we presented at DCASE (paper 1|paper 2) describe two different neural-network-based approaches to anomaly detection.

The first approach builds on recent advances in autoregressive neural-density estimation, or calculating a data distribution for streaming data by trying to predict each new data item on the basis of those that preceded it. As might be expected, such models are very sensitive to the order in which data arrives.

An earlier model, called the masked autoencoder for density estimation (MADE), makes a separate prediction for each feature — each dimension — of the input. With audio signals, however, the dimensions of the input are the energies in different frequency bands, which produce a composite picture of the signal that individual frequencies won’t capture. 

We introduce a variation of MADE that bases its predictions on groups of input features — in this case, groups of frequency bands — and which we accordingly call Group MADE.
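
The sketch below illustrates the grouped-masking idea: MADE-style binary masks in which the autoregressive ordering is assigned to groups of adjacent frequency bands rather than to individual bands. It is a simplified, hypothetical construction, not the exact Group MADE architecture from the paper.

```python
# Illustrative sketch only (not the exact Group MADE architecture from the paper):
# build MADE-style binary masks in which the autoregressive ordering is defined over
# groups of adjacent frequency bands rather than over individual bands.
import numpy as np

def group_made_masks(n_bands: int, group_size: int, n_hidden: int):
    """Masks for a one-hidden-layer masked autoencoder whose dependencies respect a
    group-level ordering; assumes n_bands is divisible by group_size and >= 2 groups."""
    n_groups = n_bands // group_size
    in_group = np.repeat(np.arange(n_groups), group_size)     # group index of each band
    rng = np.random.default_rng(0)
    hid_group = rng.integers(0, n_groups - 1, size=n_hidden)  # hidden units get groups 0..G-2
    # hidden unit h may see band i only if group(i) <= group(h)
    mask_in = (in_group[None, :] <= hid_group[:, None]).astype(np.float32)   # (n_hidden, n_bands)
    # the output for band j may see hidden unit h only if group(h) < group(j)
    mask_out = (hid_group[None, :] < in_group[:, None]).astype(np.float32)   # (n_bands, n_hidden)
    return mask_in, mask_out

# Multiplying these masks element-wise into the weight matrices makes each band's
# predicted density depend only on bands in strictly earlier groups; a clip's anomaly
# score is then its average negative log-likelihood under the model.
mask_in, mask_out = group_made_masks(n_bands=128, group_size=8, n_hidden=256)
```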

In the second paper, we use a self-supervised approach to representation learning, a technique that has recently proved successful on problems in vision and speech. We believe that we are the first to apply it to audio anomaly detection.

In the absence of anomalies in the training data, we instead trained a network to distinguish among multiple machine instances of a given machine type. We found that the features learned by such a network were sensitive enough to detect subtle, previously unseen anomalies in the evaluation set. We used spectral warping and random mixing to simulate new machine instances beyond the ones provided in the data set.
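
A minimal sketch of this idea, under simplifying assumptions: train a small classifier to tell machine instances of the same type apart, then score a clip as anomalous when the classifier assigns low probability to the machine ID the clip is supposed to come from. The architecture and scoring rule here are illustrative choices, not the exact model from the paper.

```python
# Simplified sketch of the self-supervised idea (not the exact model from the paper):
# a small classifier over machine instances; clips whose own machine ID receives low
# probability are flagged as anomalous.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceClassifier(nn.Module):
    def __init__(self, n_instances: int, emb_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(          # small CNN over log-mel spectrograms
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, emb_dim), nn.ReLU(),
        )
        self.head = nn.Linear(emb_dim, n_instances)

    def forward(self, spec):                   # spec: (batch, 1, n_mels, frames)
        return self.head(self.encoder(spec))

def anomaly_score(model, spec, machine_id):
    """Higher score = more anomalous: negative log-probability of the clip's own machine ID."""
    with torch.no_grad():
        log_probs = F.log_softmax(model(spec), dim=-1)
    return -log_probs[torch.arange(spec.size(0)), machine_id]
```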

Two-dimensional visualizations of two different representations of the Toy Car sounds in the DCASE data set: the raw spectrograms (left) and the features learned by our self-supervised model (right). The blue samples represent data from a normal machine, the red samples data from an anomalous one. In the raw spectrograms, there is little separation between the normal and anomalous samples, while the learned features can much more clearly separate out the anomalous samples.

The DCASE challenge provided data from six machine types: fan, pump, slide rail, valve, toy car, and toy conveyor, along with a development data set and a separate evaluation data set. Scoring used the area under the ROC curve (AUC) and the partial area under the ROC curve (pAUC). The ROC curve plots true-positive rate against false-positive rate, so the area under it indicates how well a given system manages that trade-off; pAUC is the AUC over a small false-positive-rate range, in this case [0, 0.1].
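
For reference, both metrics can be computed with scikit-learn; note that roc_auc_score's max_fpr option applies a standardization that may differ slightly from the challenge's exact pAUC normalization, and the labels and scores below are toy values.

```python
# Quick illustration of the two metrics on toy data (scikit-learn's max_fpr variant
# applies a standardization that may not match the challenge's exact pAUC definition).
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])                    # 0 = normal, 1 = anomalous
scores = np.array([0.1, 0.2, 0.35, 0.6, 0.4, 0.7, 0.8, 0.9])   # higher = more anomalous

auc  = roc_auc_score(y_true, scores)                  # full area under the ROC curve
pauc = roc_auc_score(y_true, scores, max_fpr=0.1)     # partial AUC over FPR in [0, 0.1]
print(f"AUC = {auc:.3f}, pAUC = {pauc:.3f}")
```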

The table below shows the accuracies we obtained, both during the challenge and since. We have developed a third approach that helped improve some of these numbers, which we will detail in a future publication.

The challenge ranking method involved two steps, to account for the disparate difficulty levels across machine types. First, machine-specific rankings were assigned to all submissions, based on AUC and pAUC. The submissions were then ranked by the average of their machine-specific ranks. Please see the full leaderboard here.
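
A toy illustration of that two-step procedure, with made-up numbers and AUC only (the real ranking also incorporates pAUC):

```python
# Toy illustration of the two-step ranking (hypothetical numbers, not the real leaderboard).
import pandas as pd

results = pd.DataFrame({
    "team":    ["A", "B", "C"] * 2,
    "machine": ["fan"] * 3 + ["valve"] * 3,
    "auc":     [0.92, 0.88, 0.95, 0.81, 0.85, 0.78],
})
# 1) rank teams within each machine type (higher AUC = better, i.e., lower rank number)
results["rank"] = results.groupby("machine")["auc"].rank(ascending=False)
# 2) order teams by the average of their machine-specific ranks
final = results.groupby("team")["rank"].mean().sort_values()
print(final)
```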

While our models won the challenge using the across-all-machine-types scoring described above, fine-tuning them for specific machine types yielded the results in the last row.

DCASE results table

We believe that as more industrial machine data is accumulated and curated over the next few years, machine learning and neural-network-based approaches will start making a huge difference in the monitoring and maintenance of machines, and AWS and its services will be at the forefront of this revolution.
