Amazon Mentors Help UMass Graduate Students Make Concrete Advances on Vital Machine Learning Problems
Earlier this month, Varun Sharma and Akshit Tyagi, two master’s students from the University of Massachusetts Amherst, began summer internships at Amazon, where, like many other scientists in training, they will be working on Alexa’s spoken-language-understanding systems.
But for Sharma and Tyagi, the internship is the culmination of a relationship that began last winter, when they enrolled in a course in UMass Amherst’s College of Information and Computer Sciences called Industry Mentorship Independent Study, taught by distinguished professor Andrew McCallum and managed by the college’s Center for Data Science.
Students in the class were divided into four- to five-person teams, each of which spent the entire spring semester working on a single project, with the guidance of industry mentors from a company with a strong artificial-intelligence research program. Sharma and Tyagi were part of a five-member team mentored by Rahul Gupta, a senior applied scientist, and Bill Campbell, an applied science manager, both of the Alexa Natural Understanding group based in Cambridge, MA.
The entire class met once a week for a two-hour session with McCallum, in which students reported their progress to each other and received feedback from McCallum, the course teaching assistant, and several other PhD-level volunteers. But each team also met separately with its mentors.
“We would talk weekly to brainstorm ideas and discuss current progress and also try and divide tasks among the team members,” Sharma says. “Plus, they have a ton of experience that we don’t have, so they would tell us about things to watch out for or help out with stuff that we were stuck on.”
“But the most beneficial thing, I’d say, would be the access,” Sharma adds. “You don’t have that in other classes. I never had one-on-one office hours that would go for an hour before.”
At the beginning of the semester, Gupta and Campbell presented the UMass students with a set of possible research topics that they had developed with other members of the Alexa Natural Understanding group. The students eventually chose “early exit” strategies for neural networks as their topic.
Most recent advances in artificial intelligence — including Alexa’s latest natural-language-understanding systems — are the result of neural networks, dense networks of simple information processors that collectively execute some computation. The more complex the computation, the larger the network tends to be. But larger networks are also slower, presenting challenges for real-time systems such as Alexa.
Typically, neural networks are arranged into layers, with data bubbling up through the layers until, finally, the output of the top layer represents the result of the computation. Early-exit strategies are techniques for “bailing out” when the outputs of lower layers already represent reliable computation results, reducing processing time. The key is making this determination on the fly, so that more-challenging inputs are still processed by the full network.
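In sketch form, the early-exit control flow looks like the following. This is an illustrative Python outline, not the team's actual implementation; `layers`, `classifiers`, and the `is_confident` test are all placeholders for whatever transformation, per-layer classifier, and confidence criterion a real system would use.

```python
def classify_with_early_exit(x, layers, classifiers, is_confident):
    """Run input x up through stacked layers, bailing out as soon as
    an intermediate layer's classification looks reliable.

    layers       : list of functions, each transforming a representation
    classifiers  : one lightweight classifier per layer, each producing
                   a probability distribution over labels
    is_confident : predicate deciding, on the fly, whether a
                   distribution is trustworthy enough to exit on
    """
    h = x
    for layer, classifier in zip(layers, classifiers):
        h = layer(h)              # compute this layer's representation
        probs = classifier(h)     # provisional classification result
        if is_confident(probs):   # reliable already? exit early
            return probs
    return probs                  # fell through: full-network result
```

Easy inputs trigger the exit at a lower layer and skip the remaining computation; harder inputs fall through and are still processed by the full network.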
“There’s a need in devices and clouds and also in edge computing” — or decentralized computing schemes that push computational resources closer to the edge of the network — “to potentially split the computation or to reduce the load,” Campbell says. “That also has the advantage that you may get insight into what kind of features are being extracted by the system. If you early exit, you say, ‘Well, the neural net has pretty good features at this point already for this particular problem.’ So the motivation is computational but also a qualitative understanding of how things are making decisions and potentially splitting the computation between some edge device and the cloud.”
“This is of particular importance to our devices that are in offline mode,” Gupta adds. “We support a very limited set of functionalities offline. With this we can expand the set of functionalities, where more of those decisions can be made on the device. Even devices that require an Internet connection, if the Internet connection goes down, they can still maintain this model functionality.”
Sharma, Tyagi, and the other members of their UMass team — Nan Zhuang, Zihang Wang, and Lynn Samson — experimented with a neural net consisting of three stacked long short-term memory layers, or LSTMs. LSTMs process ordered inputs in sequence, so that the output corresponding to any given input factors in both the inputs and outputs that preceded it. This is a useful property in natural-language processing, where word order is a valuable source of information.
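The sequential property can be illustrated with a bare-bones recurrent loop. This is a deliberate simplification, not a real LSTM (which wraps the same idea in input, forget, and output gates); the point is only that a hidden state threads through the sequence, so each output folds in everything that came before it.

```python
def run_recurrent_layer(tokens, step, initial_state=0.0):
    """Process a sequence in order, threading a hidden state so that
    each output depends on all preceding inputs and outputs.

    tokens : the ordered input sequence
    step   : function combining the prior state with the next token
             to produce the new state (a stand-in for an LSTM cell)
    """
    state = initial_state
    outputs = []
    for tok in tokens:
        state = step(state, tok)   # new state folds in prior context
        outputs.append(state)
    return outputs
```

Because the state accumulates, feeding the same tokens in a different order yields different outputs, which is exactly why such layers can exploit word order in natural-language input.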
Neural networks are typically trained on labeled data, and during training, their goal is to minimize "loss," a measure of the difference between the labels they assign to the data and the true labels. Usually, the loss function applies only to the output of the network's last layer.
In their experiments, the UMass students instead paired labels with the outputs of each of the network's three layers, and the loss function factored in all three layers' outputs. In fact, the loss function assigned greater weight to the outputs of the network's lower layers, essentially forcing them to produce labels that were as accurate as possible.
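A minimal sketch of such a layer-weighted loss, assuming a standard cross-entropy loss per layer; the particular weight values below are illustrative, since the article does not specify the weighting scheme the team used:

```python
import math

def cross_entropy(probs, true_label):
    """Cross-entropy for a single example: the negative log
    probability the model assigns to the correct label."""
    return -math.log(probs[true_label])

def multi_layer_loss(layer_probs, true_label, weights):
    """Combine per-layer losses, with larger weights on lower layers
    pushing them to classify as accurately as they can.

    layer_probs : one probability distribution per layer, bottom first
    weights     : one weight per layer; e.g. (3, 2, 1) emphasizes the
                  lower layers (illustrative values, not the team's)
    """
    return sum(w * cross_entropy(p, true_label)
               for w, p in zip(weights, layer_probs))
```

During training, gradients from this combined loss flow into every layer, so each layer is directly rewarded for producing usable classifications rather than only feeding the layer above it.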
The outputs of neural networks are also probabilistic. Suppose, for instance, that a request to the Alexa music service is classified as one of a dozen "intents," such as playing music, playing a radio station, creating a new station, getting details about music, or the like. Then the output of the intent classification network would indicate the probability that the request belonged to each of those classes.
At each layer of their network, the UMass students used those probabilities as a confidence measure, to determine whether to exit early. Where previous early-exit strategies used a threshold on the single most probable classification as a hard cutoff, the UMass system instead uses entropy, an information-theoretic measure that considers not only the likelihood of the most probable classification but also the relative probabilities of all the others.
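The entropy criterion can be sketched as follows; the threshold value is illustrative, as the article does not report the one the team used. A sharply peaked distribution has low entropy (safe to exit), while a flat one has high entropy (keep computing):

```python
import math

def entropy(probs):
    """Shannon entropy of a probability distribution, in nats.
    Low entropy: one class dominates. High entropy: uncertainty
    spread across many classes."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def should_exit(probs, threshold=0.5):
    """Exit early when a layer's distribution over intents is
    concentrated enough. Unlike a cutoff on the top probability
    alone, entropy also reflects how the remaining probability mass
    is spread over the other classes. (Threshold is illustrative.)"""
    return entropy(probs) < threshold
```

For example, a distribution like [0.95, 0.05] has entropy of about 0.2 nats and would trigger an exit, while a near-uniform distribution over a dozen intents has entropy near log(12), about 2.48 nats, and would not.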
Sharma, Tyagi, and their teammates found that with their LSTM network, the number of operations the system had to perform (floating-point operations, or FLOPs) was roughly proportional to the number of network layers that processed an input: 23,084 FLOPs with exit after one layer, 46,143 with exit after two, and 69,202 with exit after three. A reference model without early exit required 69,192 FLOPs on the same input, so the additional machinery for early exit added very little overhead.
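The per-input saving then depends on how often the system actually exits at each depth. The arithmetic below uses the article's FLOP counts but a hypothetical exit-rate split, since the measured distribution isn't reported:

```python
# Per-input FLOPs from the article, keyed by the layer at which the
# system exits, plus the no-early-exit reference model.
flops_by_exit = {1: 23_084, 2: 46_143, 3: 69_202}
reference_flops = 69_192

def expected_flops(exit_rates):
    """Average FLOPs per input, given the fraction of inputs that
    exits at each depth. The rates are hypothetical, for
    illustration only."""
    return sum(rate * flops_by_exit[depth]
               for depth, rate in exit_rates.items())

# If, say, half of inputs exit after one layer, 30% after two, and
# 20% run the full network (an illustrative split):
avg = expected_flops({1: 0.5, 2: 0.3, 3: 0.2})
```

Under that illustrative split, the average drops to roughly 39,225 FLOPs per input, a saving of about 43% relative to the 69,192-FLOP reference model.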
Moreover, the early-exit model was actually, on average, more accurate than the reference model, despite its significantly lower computation time. The researchers suspect that this is because forcing the network's early layers to produce more-accurate representations "regularized" the network, ensuring that computation was distributed evenly across it. This prevents overfitting, or tailoring the network's computations too narrowly to the training data.
Results like these mean that the UMass students’ project was no mere academic exercise. “Programs like the UMass Amherst Center for Data Science mentorship class not only strengthen our ties to the academic community and help us identify promising young researchers, but they also help us make real progress on projects that will help Alexa become smarter and more trustworthy,” Gupta says.