Teaching speech recognizers new words — without retraining

Using lists of rare or out-of-vocabulary words to bias connectionist temporal classification models enables personalization.

In recent years, automatic speech recognition (ASR) has moved to all-neural models. Connectionist-temporal-classification loss functions are an attractive option for ASR (and specifically end-to-end ASR) because they make predictions without conditioning on previous context, thereby yielding simple models with low inference latency.

Unlike earlier, hybrid ASR models, which used lexicons to match phonemes to word candidates, all-neural models are hard to adapt to rare or unfamiliar words. Biasing connectionist-temporal-classification (CTC) models toward new words is particularly difficult because of the lack of context: the model's prediction at any given time step is independent of its outputs at previous time steps, the very prediction scheme that enables decoding with low inference latency.


This is a problem for ASR applications in which the operational vocabulary is constantly changing, as when new names — say, “Zelenskyy” — enter the conversation, or when users add new names to their address books. Retraining the ASR model on new datasets featuring new words is a prohibitively time-consuming and computationally intensive way to update large models.

In a paper we presented at this year’s Spoken Language Technologies (SLT) Workshop, we describe a method for enabling a CTC model to correctly transcribe new entity names without the need for retraining. The method includes a variety of techniques for biasing the model toward names on a list. These techniques apply to both the model’s encoder, which converts inputs into vector representations, and its beam search decoder, which evaluates candidate output sequences. The techniques can be applied in combination to maximize the likelihood of accurate transcription.

[Figure: The architecture of a connectionist-temporal-classification (CTC) model for automatic speech recognition whose output can be biased toward names on an updatable entity list.]

On a dataset with difficult medical terminology, such as names of diseases and medicines, our method improves the ASR model's F1 score (which factors in both false negatives and false positives) on these entities from 39% in a model without biasing to 62%. Similarly, on the publicly available VoxPopuli benchmark, which contains recordings of the European Parliament, our method improves the F1 recognition scores of rare entities (names of cities, people, etc.) from 49% to 80% without any retraining of the base ASR model.

Biasing

Our baseline CTC model is an all-neural network that takes frames of audio (snapshots of the signal spectrum across small durations) as input and converts them into a sequence of probability distributions over subword units — word fragments that can be composed into full words. These probability distributions are represented by a weighted graph of possible subword sequences. To rank candidate word sequences, the model decoder uses beam search combined with an external language model (LM), which encodes the probabilities of sequences of words.
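To make the conditional-independence property concrete, here is a minimal sketch of greedy CTC decoding over per-frame subword posteriors. The subword vocabulary here is hypothetical; a real decoder would use beam search over the subword graph with an external LM, but the per-frame independence is the same.

```python
import numpy as np

# Hypothetical subword inventory; index 0 is the CTC blank symbol.
VOCAB = ["<blank>", "fre", "mo", "nt", "a", "the"]

def ctc_greedy_decode(log_probs):
    """Collapse frame-by-frame CTC posteriors into a subword sequence.

    log_probs: (T, V) array of per-frame log probabilities. Each frame's
    argmax is taken independently of the others -- the property that
    makes CTC fast to decode but hard to bias toward rare words.
    """
    best = log_probs.argmax(axis=-1)      # independent per-frame picks
    out, prev = [], 0
    for idx in best:
        if idx != 0 and idx != prev:      # drop blanks and repeats
            out.append(VOCAB[idx])
        prev = idx
    return out
```

For example, a five-frame posterior whose per-frame argmaxes are "fre", "fre", blank, "mo", "nt" collapses to the subword sequence for "Fremont".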


Encoder biasing 

To bias the CTC model’s encoder, we use a contextual adapter, a separate module that is trained after we have frozen the weights of the base CTC model. The adapter takes the set of rare words in training examples as inputs and learns a mapping between the words’ subword-unit sequences and their audio representations.

In our base network, we use additional CTC losses to train representations from intermediate layers of the encoder (the 6th and the 12th) to produce subword sequences. This enables the model to use approximations of the outputs in previous time steps to influence prediction of the current frame. Our adapter uses a weighted sum of representations from these intermediate layers as audio representations, thereby countering the conditional-independence assumption of CTC models.
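The weighted sum over tapped layers can be sketched as follows; the function name, the use of softmax-normalized mixing logits, and the array shapes are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

def adapter_audio_representation(layer_outputs, mix_logits):
    """Combine tapped intermediate encoder layers (e.g., the 6th and
    12th) into one audio representation via learned mixing weights.

    layer_outputs: list of (T, d) arrays, one per tapped layer.
    mix_logits:    (num_layers,) learnable logits, softmax-normalized
                   here so the weights are positive and sum to one.
    """
    w = np.exp(mix_logits - mix_logits.max())
    w = w / w.sum()
    stacked = np.stack(layer_outputs)        # (L, T, d)
    return np.tensordot(w, stacked, axes=1)  # weighted sum -> (T, d)
```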

At inference time, we use the contextual adapter to embed a list of rare or out-of-vocabulary (OOV) entity names, and at every time frame of the audio, an attention module tries to match the name embeddings with the audio representation. The attention module can also choose to ignore all of the names by attending to a special <no-bias> token. If the audio does contain some entity from the provided list, the probability of the corresponding sequence of subword units is increased.
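A per-frame attention step of this kind might look like the sketch below. The dot-product scoring, scaling, and embedding shapes are assumptions for illustration; the key idea is that the <no-bias> token competes with the entity names, so frames with no matching entity can attend away from the list.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def bias_attention(audio_frame, name_embs, no_bias_emb):
    """Attend from one audio frame to the custom entity list.

    audio_frame: (d,) encoder representation for one time frame.
    name_embs:   (N, d) embeddings of the custom entity names.
    no_bias_emb: (d,) embedding of the special <no-bias> token.

    Returns attention weights over [name_1, ..., name_N, <no-bias>];
    a high weight on a name signals that its subword units should be
    boosted at this frame.
    """
    keys = np.vstack([name_embs, no_bias_emb])            # (N+1, d)
    scores = keys @ audio_frame / np.sqrt(len(audio_frame))
    return softmax(scores)
```

When the frame's representation aligns with one of the name embeddings, that name receives most of the attention mass; otherwise the <no-bias> token does, and no subword probabilities are adjusted.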

[Figure: A plot indicating the weight that an attention mechanism gives to names on a custom entity list when encoding audio.]

Decoder biasing

We obtained positive results with the following techniques for decoder biasing. All of these techniques are applied directly at inference time:


  1. Adaptive subword boosting in beam search decoding: We dynamically boost the probability of a top-k subword sequence if it begins with a subword that appears on the custom entity list. For example, if “Fremont” is one of the custom words, then if the subword “fre” appears, we boost the probabilities of the subsequent subwords “mo” and “nt”. The boosting score for each subword candidate at time step t is determined dynamically by the difference between its log probability and that of the top-1 hypothesis.
  2. Unigram boosting: We boost the probabilities of words on the list of entity names by adding them to the external LM through an OOV/BOOST class, to keep the LM unmodified during inference.
  3. Phonetic-distance-based rescoring: We take the outputs of the intermediate-layer network — which are phones, or phonetic representations of short speech sounds — and perform forced alignment between them and the output of the CTC model. We compute the cost of this alignment and use it to rescore the n-best lists.
  4. Pronunciation-based lexicon lookup: For rare and OOV words, our phone prediction hypotheses are more accurate than our subword predictions. Therefore, we used forced alignment with the phone predictions of the intermediate-layer network to identify the boundaries between words in the phone sequence. If the sequence of phones corresponding to a word is an exact match for the pronunciation of a word in the lexicon, we replace the word with the lexicon entity.
  5. Grapheme-to-grapheme (G2G) techniques: A grapheme is the smallest unit of a writing system. We use a table that maps individual graphemes to their multiple possible pronunciations (i.e., phones) to resolve alternative pronunciations of the words on our list of entity names. The probability of predicting the actual word improves with an increase in the number of these G2G variants.
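
The first technique above, adaptive subword boosting, can be sketched as follows. The prefix-set representation of the entity list and the specific boost rule, lifting a matching candidate up to the top-1 score, are illustrative assumptions; the paper's rule derives the boost from the gap between a candidate's log probability and the top-1 hypothesis's.

```python
import math

def boost_candidates(topk, entity_prefixes, prev_subwords):
    """Adaptive subword boosting for one beam at one time step.

    topk:            list of (subword, log_prob) pairs, best first.
    entity_prefixes: set of subword tuples that prefix some entity,
        e.g. {("fre",), ("fre", "mo"), ("fre", "mo", "nt")} for the
        custom word "Fremont".
    prev_subwords:   tuple of subwords already emitted on this beam.
    """
    top1_lp = topk[0][1]
    boosted = []
    for sw, lp in topk:
        if prev_subwords + (sw,) in entity_prefixes:
            # Boost by the gap to the top-1 score, so a plausible
            # entity continuation is not pruned from the beam.
            lp = lp + (top1_lp - lp)
        boosted.append((sw, lp))
    return boosted
```

With "fre" already on the beam, a low-scoring "mo" candidate is lifted to the top-1 score, keeping the path toward "Fremont" alive through pruning.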

Joint model

Finally, we present a joint model that combines the encoder- and decoder-biasing techniques described above, and as expected, the techniques are complementary to each other and result in additive gains. Conceptually, the encoder-biasing method aids in generating higher-probability scores for the rare subwords it copies, which helps prevent rare subwords from getting pruned during the beam-search decoding of the subword graph. The rare and OOV words get a further boost from the decoder-biasing techniques, which promote the rare-word candidate paths through the graph to top ranking.

We hope our methodology moves the speech community toward zero-shot personalized ASR for CTC models, which are an increasingly prevalent choice for ASR systems.
