Get hands on with “a significant and fundamental advance” in AI
AWS DeepComposer is now available for preview. Learn more about the science behind the musical keyboard designed to expand your machine learning skills.
Amazon announced AWS DeepComposer at re:Invent 2019; it includes tutorials, sample code, and training data that can be used to begin building generative AI models, all without having to write a single line of code.
AWS DeepComposer gives students and developers a creative way to get started with generative adversarial networks (GANs), a type of generative AI model. Developers can use the keyboard to create a melody that will transform into a completely original song in seconds, all powered by AI. However, the primary purpose of AWS DeepComposer isn’t to create the next “Shake it Off”, “Four Seasons” or “Watermelon Man.”
Instead, AWS DeepComposer is a component of Amazon’s mission to make the power of machine learning available to all developers. It’s the third product launched as part of Amazon’s “Deep” series of products. The company launched AWS DeepLens in 2017, and AWS DeepRacer in 2018.
AWS DeepLens enables developers to get familiar with computer vision through projects, tutorials, and real-world exploration with a physical device. AWS DeepRacer is a fully autonomous 1/18th scale race car that allows developers to get hands on with reinforcement learning – a machine learning technique that allows systems to learn complex behaviors without requiring any labeled training data.
Now, with AWS DeepComposer, developers can get hands on with generative AI – more specifically with generative adversarial networks (GANs). GANs are a class of machine learning systems made up of two neural networks.
Historically, deep learning models have made predictions after being trained on “ground truth” data. Provide a deep learning model with tons of images, and eventually it will learn to identify a cat, or a pedestrian crossing a road.
GANs flip this paradigm on its head.
Instead of merely making predictions, GANs use the sample inputs to produce entirely new and original digital outputs. Andrew Ng, one of the world’s most influential computer scientists, has called GANs “a significant and fundamental advance” in the field.
The many applications of generative AI models
No matter whether you’re a student, an academic, or a developer, you can use generative AI to create practical applications across industries, from turning sketches into images for accelerated product development, to improving computer-aided design of complex objects.
“At AWS, we’re seeing generative AI being applied in the most unexpected and effective ways,” says Rahul Suresh, a software development manager in AWS’ AI division. “It’s why I was so excited to work on AWS DeepComposer, and enable businesses to do so much more with machine learning.”
Suresh points to several industries where generative AI is advancing innovation.
“Airbus is reimagining multiple structural aircraft components and developing lighter-weight parts that exceed performance and safety standards. Glidewell Dental is training GPU-powered GANs to produce dental crowns. And JPL and Autodesk are collaborating to explore new approaches to design and manufacturing processes for space exploration,” Suresh says. “With AWS DeepComposer, we are giving developers the ability to learn about generative AI in a hands-on way, so that they can create entirely new ways of doing old and familiar things.”
The thinking behind AWS DeepComposer
AWS DeepComposer allows developers to train and optimize GANs to create original music.
A GAN contains two models. Both models are trained on the same data set. The first model generates new musical accompaniments based on what it has learned from the training data. The second “adversarial” model or “critic” then compares the creations with real-world compositions, and provides feedback to the generator. Based on this feedback, the generator improves itself and creates music that’s a lot like what you would hear in the real world. The critic also uses the feedback to learn and get better until the models converge.
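The generator-and-critic loop described above can be sketched in code. The following is a minimal toy illustration, not anything from AWS DeepComposer itself: a one-dimensional "generator" (a linear map of random noise) learns to match "real" data drawn from a Gaussian, guided only by the feedback of a logistic-regression "critic". All names and numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from N(4, 0.5) stand in for real-world compositions.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator: x_fake = w*z + b, so its output starts out distributed near N(0, 1).
w, b = 1.0, 0.0
# Critic ("adversarial" model): logistic regression, d(x) = sigmoid(a*x + c).
a, c = 0.1, 0.0

lr, batch = 0.05, 64
for step in range(3000):
    z = rng.normal(size=batch)
    x_fake = w * z + b
    x_real = real_batch(batch)

    # Critic update: push d(real) toward 1 and d(fake) toward 0.
    d_real = sigmoid(a * x_real + c)
    d_fake = sigmoid(a * x_fake + c)
    a -= lr * np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    c -= lr * np.mean(-(1 - d_real) + d_fake)

    # Generator update: use the critic's feedback to look more "real"
    # (non-saturating generator loss: maximize log d(fake)).
    z = rng.normal(size=batch)
    x_fake = w * z + b
    d_fake = sigmoid(a * x_fake + c)
    grad_x = -(1 - d_fake) * a
    w -= lr * np.mean(grad_x * z)
    b -= lr * np.mean(grad_x)

samples = w * rng.normal(size=1000) + b
# The generated mean drifts from 0 toward the real data's mean of 4.0.
print("generated sample mean:", round(float(samples.mean()), 2))
```

The two hand-derived gradient updates play exactly the roles the article describes: the critic learns to tell real from generated samples, and the generator uses the critic's feedback to improve, until the two models converge.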
“I like to use the metaphor of an orchestra and a conductor to explain generative AI,” says Ambika Pajjuri, a product leader within the AWS AI organization. “An orchestra doesn’t create amazing music the first time they get together. Instead, there’s always a conductor who both judges their output, and coaches them to improve.”
In the case of AWS DeepComposer, the ‘conductor’ model judges the quality of the output – for example, were the right notes played at the right tempo? – and provides feedback: make the strings play louder, say, and the horns softer. This ultimately leads to a composition that is recognizable to the conductor.
The generative AI models that AWS DeepComposer teaches developers to build use a similar concept.
“We have two machine learning models that work together in order to learn how to generate musical compositions in distinctive styles,” Pajjuri says. “The ultimate goal is for developers at startups and enterprises to get familiar with generative AI, so that they can use this exciting advancement to build startlingly differentiated solutions for their company or industry.”
Getting started with AWS DeepComposer
Students, academics, and developers can get started with AWS DeepComposer in three simple steps:
- Input a melody by connecting the AWS DeepComposer keyboard to your computer, or play the virtual keyboard in the AWS DeepComposer console.
- If you don’t want to create your own melody, you can choose one of the readily available melodies in the console and generate an original musical composition in seconds, using the pre-trained genre models (jazz, rock, pop, symphony) or custom genres curated by American singer-songwriter Jonathan Coulton. In addition to the pre-trained genre models, you can also build your own custom genre model in Amazon SageMaker.
- Publish your tracks to SoundCloud in one click, or export MIDI files to your favorite digital audio workstation (like Garage Band).
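The final step exports your composition as a standard MIDI file, the interchange format that digital audio workstations read. As a rough illustration of what such a file contains, here is a stdlib-only sketch (my own hypothetical helper, not part of any AWS tooling) that writes a short melody as a single-track MIDI file:

```python
import struct

def var_len(value):
    """Encode an integer as a MIDI variable-length quantity."""
    out = [value & 0x7F]
    value >>= 7
    while value:
        out.append((value & 0x7F) | 0x80)
        value >>= 7
    return bytes(reversed(out))

def melody_to_midi(notes, path, ticks_per_beat=480):
    """Write a list of (midi_pitch, beats) pairs as a format-0 MIDI file."""
    events = bytearray()
    for pitch, beats in notes:
        events += var_len(0) + bytes([0x90, pitch, 64])   # note on, velocity 64
        events += var_len(int(beats * ticks_per_beat)) + bytes([0x80, pitch, 0])  # note off
    events += var_len(0) + bytes([0xFF, 0x2F, 0x00])      # end-of-track meta event
    header = b"MThd" + struct.pack(">IHHH", 6, 0, 1, ticks_per_beat)
    track = b"MTrk" + struct.pack(">I", len(events)) + bytes(events)
    with open(path, "wb") as f:
        f.write(header + track)

# C major arpeggio: C4, E4, G4, C5, one beat each.
melody_to_midi([(60, 1), (64, 1), (67, 1), (72, 1)], "melody.mid")
```

A file written this way opens in any MIDI-aware tool, which is what makes the one-click hand-off to a digital audio workstation like GarageBand possible.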