“Alexa, how do you know everything?”
How Amazon intern Michael Saxon uses his experience with automatic speech recognition models to help Alexa answer complex queries.
“Alexa, play ‘Rhapsody in Blue’.”
“Playing ‘Rhapsody in Blue’.”
Customers often describe this kind of interaction with Alexa as magical; less than a decade ago it would have seemed fanciful.
One component of the science behind Alexa is automatic speech recognition (ASR), the process that converts a customer's speech signal into text that downstream systems can interpret. And scientists like Michael Saxon, PhD student and three-time Amazon applied science intern, encounter interesting challenges when a customer's request is more complex than asking for a song to play.
Saxon is one of more than 10,000 interns Amazon hosted virtually this summer. More than 10 percent of those internships were for applied science and data science roles with teams across the company. The majority of science-related internships run between 12 and 16 weeks.
A growing interest in NLP
Saxon completed his undergraduate degree in electrical engineering and received a master’s in computer engineering at Arizona State University. He’s now completing his PhD in computer science at the University of California, Santa Barbara, with a core focus on natural language processing (NLP).
He became interested in speech and NLP as an undergrad; in his final year, a professor recruited him for a project. Saxon studied the progression of neurological disorders by using automatic speech recognition models to detect and track hypernasality in dysarthric speech.
Saxon later met some Amazon recruiters who were looking for applied science interns at the AAAI Conference on Artificial Intelligence. “Based on my interests in speech and NLP, they offered for me to join the Alexa Hybrid Science team in Pittsburgh,” Saxon says. “And my experience with automatic speech recognition models was a plus.”
Solving end-to-end SLU
A core research direction of the Alexa Hybrid Science team has been the development of neural end-to-end spoken language understanding (SLU) models. For his 2019 internship project, Saxon was given a task that seemed relatively easy to him at the outset: develop an end-to-end intent SLU system that can make a decision after hearing as few words as possible.
However, the project proved deceptively difficult. Using Amazon training data, Saxon and the team were unable to replicate the high-performance results reported in prior SLU publications.
Toward the end of the summer 2019 internship, the team identified why: the training data and the publicly available datasets from the existing literature differed sharply in semantic complexity.
Semantic complexity refers to the number of possible expressions and their various meanings that a collection of language data contains. The more semantically complex the collection, the more ways a program can interpret a single utterance from it.
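One simple way to get intuition for this idea is to measure how evenly a dataset spreads its probability mass over its vocabulary. The sketch below uses unigram entropy as a rough proxy; it is an illustrative assumption, not the specific measures the team published.

```python
import math
from collections import Counter

def unigram_entropy(utterances):
    """Shannon entropy (in bits) of the word distribution in a dataset.

    Higher entropy means the vocabulary is larger and used more
    diversely -- one crude proxy for semantic complexity.
    """
    counts = Counter(word for utt in utterances for word in utt.split())
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A keyword-style dataset reuses a few fixed phrases...
simple = ["play music", "stop music", "play music", "stop music"]
# ...while an open-ended one spends its mass on many rare words.
complex_ = ["play rhapsody in blue", "skip this jazz track",
            "queue the second brandenburg concerto"]

print(unigram_entropy(simple) < unigram_entropy(complex_))  # True
```

On the toy data above, the open-ended utterances have a word distribution close to uniform over a much larger vocabulary, so their entropy is far higher.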
Because of their relatively low semantic complexity, the publicly available datasets required less training data, but they also restricted the research systems to choosing from a fixed list of exact, predetermined command permutations.
Saxon’s team applied the model architecture from the existing literature to Amazon’s training data, which has much higher semantic complexity.
“We found for similarly sized datasets, and similar architectures, that we couldn’t reproduce these strong results from prior work, and we suspected that it was due to this semantic-complexity mismatch,” says Saxon. “The models were fundamentally designed for domains with lower semantic complexity.”
However, this setback in his first internship project inspired the direction for the next one.
When Saxon returned to the Alexa Hybrid Science team for his second internship in January 2020, the team hit the ground running. While he was finishing his master's coursework at ASU, the team began a research effort to demonstrate usable measures of semantic complexity, with the goal of enabling objective comparisons of SLU tasks.
To produce useful measures, the team needed to study how an SLU task's complexity measures related to the best accuracy a model could achieve, by applying the same model to a series of datasets, each less semantically complex than the last.
The team artificially generated datasets of different levels of semantic complexity by repeatedly removing batches of rare words. This led to a continuum of virtual SLU problems ranging from Alexa-level tasks in large artificial datasets to effectively spotting keywords from a short list.
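One plausible way to build such a continuum (the team's exact procedure is not reproduced here, and the function name is hypothetical) is to repeatedly raise a frequency threshold and discard utterances that contain words below it:

```python
from collections import Counter

def prune_rare_words(utterances, min_count):
    """Keep only utterances whose every word appears at least min_count
    times across the dataset.

    Sweeping min_count upward yields a series of datasets, each with a
    smaller, more predictable vocabulary -- i.e., lower semantic
    complexity -- down to what is effectively keyword spotting.
    """
    counts = Counter(w for utt in utterances for w in utt.split())
    return [utt for utt in utterances
            if all(counts[w] >= min_count for w in utt.split())]

data = ["play music", "stop music", "play rhapsody in blue", "play music"]
print(prune_rare_words(data, min_count=2))  # ['play music', 'play music']
```

Each pruning pass defines a new, simpler virtual SLU task on which the same model architecture can be retrained and evaluated.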
“There is a strong, nearly linear relationship between these semantic complexity measures and the maximum accuracy we were able to get across several different models,” Saxon says. “So that suggests that there is a fundamental relationship between a given model’s performance ceiling and the semantic complexity of the task it solves.”
Saxon and team published their findings on the importance of contextualizing results to demonstrate an SLU system’s scope of applicability in “Semantic Complexity in End-to-End Spoken Language Understanding” and presented them at Interspeech 2020.
Considering the challenges of semantic complexity, the team then set out to develop an end-to-end model for generalized SLU that could enable voice assistants like Alexa to process any utterance with improved accuracy over other models.
The result: a second publication, “End-to-End Spoken Language Understanding for Generalized Voice Assistants.” The team produced an end-to-end SLU system that could both be pretrained on speech and accept the drop-in insertion of a large language model. This allowed the team to separately adjust the system’s transcription and interpretation capabilities.
Consequently, the system could process many more combinations of intent and argument interpretations. Notably, it improved speech-to-interpretation accuracy by 43 percent over similarly capable end-to-end baselines.
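The decoupling described above can be sketched as two stages that share only a text interface, so either stage can be pretrained or swapped independently. This is a minimal illustration with hypothetical names and toy stand-in models, not Amazon's implementation:

```python
from typing import Callable

def make_slu_pipeline(transcribe: Callable[[bytes], str],
                      interpret: Callable[[str], dict]) -> Callable[[bytes], dict]:
    """Compose a speech front end with a drop-in language-model interpreter.

    Because the stages communicate only through text, the transcription
    and interpretation capabilities can be adjusted separately.
    """
    def pipeline(audio: bytes) -> dict:
        return interpret(transcribe(audio))
    return pipeline

# Toy stand-ins for a pretrained speech model and a large language model:
fake_asr = lambda audio: "play rhapsody in blue"
fake_lm = lambda text: {"intent": "PlayMusic", "slots": {"song": text[5:]}}

slu = make_slu_pipeline(fake_asr, fake_lm)
print(slu(b"..."))  # {'intent': 'PlayMusic', 'slots': {'song': 'rhapsody in blue'}}
```

Swapping in a different `interpret` function, say a larger language model, changes the system's interpretation capability without touching the speech front end.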
Answering any question using the web
This summer, Saxon is completing his third applied science internship at Amazon, working remotely for the Alexa AI team in Manhattan Beach, Calif. The team’s work focuses on getting Alexa to provide highly accurate responses to customers’ questions.
“I’ve been on this journey where I've started on the speech side of things and transitioned further down the technology stack to where I am now in the web information domain, where there are still echoes of this previous work,” explains Saxon.
The challenge this time involves an even more semantically complex use case: the Alexa AI team needs to train web information–based models that can correctly answer any possible question — even the most confounding ones — so that Alexa can provide useful responses to customers’ questions.
Often, the most important words in a question are also the rarest: they raise the sentence's semantic complexity and are the hardest for an ASR system to transcribe correctly.
If the system mishears one of those words, it cannot answer the question. Saxon's current work brings his previous experience in end-to-end SLU to bear on this task.
“Michael’s internship helped us build substantial expertise and reach the level of maturity that we have in the team today in end-to-end SLU,” says his former manager, Athanasios Mouchtaris. “Everything we learned from Michael’s work during his internship was crucial to our success.”
Having only completed the first year of his PhD, Saxon is still in an exploratory phase of finding a research direction. He has four years left of his PhD and intends to complete additional internships — and he said he can see himself returning to Amazon again.
“I’ve really bought into the leadership principles and culture here. And I particularly like the emphasis on taking ownership and ‘disagree and commit,’ which have served me well during these research projects,” he says. “I would definitely consider coming back for full-time work after I graduate.”