Jon Tamir, an assistant professor of electrical and computer engineering at the University of Texas at Austin, wants to improve how MRI data is acquired. In 2020, he received an Amazon Machine Learning Research Award to support the work.
The University of Texas at Austin

How new machine learning techniques could improve MRI scans

Amazon Research Award recipient Jonathan Tamir is focusing on deriving better images faster.

For many patients, time moves at a glacial pace during a magnetic resonance imaging (MRI) scan. Those who have had one know the challenge of holding impossibly still inside a buzzing, knocking scanner for anywhere from several minutes to more than an hour.

Jonathan (Jon) Tamir is developing machine learning methods to shorten exam times and extract more data from this essential — but often uncomfortable — imaging process.


MRI machines use the body's response to strong magnetic fields and radiofrequency waves to produce pictures of our insides, helping to detect disease and monitor treatments. Just like any image, an MRI scan begins with raw data. Tamir, who is an assistant professor of electrical and computer engineering at the University of Texas at Austin, wants to improve how that data is acquired and derive better images faster. In 2020, he received an Amazon Machine Learning Research Award from Amazon Web Services (AWS) to support the work.

A lack of 'ground-truth' MRI data

Contrary to how the experience might feel to patients inside them, MRI machines move incredibly fast, collecting thousands of measurements at intervals spanning tens or hundreds of milliseconds. The measurements depend on the order and frequency of how magnetic forces and radiofrequency currents are applied to the area being surveyed. Clinicians run specific sequences tailored to the body part and purpose for the MRI.


To get the highest possible image quality, an MRI technologist must collect all possible measurements, building from low to high frequency. Each layer of added data results in clearer and more detailed images, but collecting that much data takes far too long. Given the need for expedience, only a subset of the data can be acquired. Which data? "That depends on how we're planning to reconstruct the image," Tamir explained.
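The tradeoff can be sketched with a toy numpy experiment. The array sizes and the low-frequency window below are illustrative assumptions, not values from Tamir's work: keeping only the central, low-frequency block of k-space uses a small fraction of the measurements and yields a blurred image.

```python
import numpy as np

# Toy sketch: simulate keeping only a low-frequency band of k-space.
rng = np.random.default_rng(0)
image = rng.standard_normal((128, 128))       # stand-in for a 2-D slice

kspace = np.fft.fftshift(np.fft.fft2(image))  # full set of "measurements"

# Keep only the central (low-frequency) 32x32 block of k-space.
mask = np.zeros_like(kspace, dtype=bool)
c = kspace.shape[0] // 2
mask[c - 16:c + 16, c - 16:c + 16] = True

lowres = np.fft.ifft2(np.fft.ifftshift(np.where(mask, kspace, 0)))
print(mask.mean())  # fraction of data kept: 0.0625
```

Each ring of higher frequencies added to the mask sharpens the result, at the cost of more acquisition time.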

At his Computational Sensing and Imaging Lab, Tamir is working with colleagues to optimize both the methods for capturing scans and the image reconstruction algorithms that process the raw information. A key problem is the lack of available "ground-truth" data. "That's a very big issue in medical imaging compared to the rest of the machine learning world," he said.


With millions of MRIs generated each year in the United States alone, it might seem surprising that Tamir and colleagues lack data. The final image of an MRI, however, has been post-processed down to a few megabytes. The raw measurements, on the other hand, might amount to hundreds of megabytes or gigabytes that aren't saved by the scanner.

"Different research groups spend a lot of effort building high-quality datasets of ground-truth data so that researchers can use it to train algorithms," Tamir said. "But these datasets are very, very limited."

Another issue, he added, is the fact that many MRIs aren't static images. They are movies of a biological process, such as a heart beating. An MRI scanner is not fast enough to collect fully sampled data in those cases.

Random sampling

Tamir and colleagues are working on machine learning algorithms that can learn from limited data to fill in the blanks, so to speak, on images. One tactic being explored by Tamir and others is to randomly collect about 25% of the possible data from a scan and train a neural network to reconstruct an entire image based on that under-sampled data. Another strategy is to use machine learning to optimize the sampling trajectory in the first place.
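The random-sampling idea can be sketched as follows, with illustrative sizes that are assumptions rather than the group's actual setup: keep roughly 25% of k-space lines at random and form the naive zero-filled reconstruction that a trained network would then take as input and improve.

```python
import numpy as np

# Hedged sketch: randomly retain ~25% of phase-encode lines (rows)
# and compute the zero-filled baseline a network would refine.
rng = np.random.default_rng(42)
image = rng.standard_normal((128, 128))
kspace = np.fft.fft2(image)

keep_rows = rng.random(128) < 0.25       # each row kept with prob. 0.25
mask = np.zeros((128, 128), dtype=bool)
mask[keep_rows, :] = True

zero_filled = np.fft.ifft2(kspace * mask).real
print(f"sampled fraction: {mask.mean():.2f}")
```

The zero-filled image shows the aliasing artifacts that random undersampling introduces; a reconstruction network is trained to remove them.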


"Random sampling is a very convenient approach, but we could use machine learning to decide the best sampling trajectory and figure out which points are most important," he said.

In “Robust Compressed Sensing MRI with Deep Generative Priors”, which was presented at the Neural Information Processing Systems (NeurIPS) 2021 conference, Tamir and colleagues at UT-Austin demonstrated a deep learning technique that achieves high-quality image reconstructions based on under-sampled scans from New York University’s fastMRI dataset and the MRIData.org dataset from Stanford University and University of California (UC) Berkeley. Both are publicly available for research and education purposes.


Other approaches to the problem of image reconstruction have utilized end-to-end supervised learning, which performs well when trained on specific anatomy and measurement models but tends to degrade when faced with the aberrations common in clinical practice.

Instead, Tamir and colleagues used distribution learning, in which a probabilistic model learns to approximate images without reference to measurements. In this case, the model can be used both when the measurement process changes, for example, when changing the sampling trajectory, as well as when the imaging anatomy changes, such as when switching from brain scans to knee scans that the model hasn’t seen before.

"We're really excited to use this as a base model for tackling these bigger issues we've been talking about, such as optimally choosing the measurements to collect, and working with less fully available ground-truth data," Tamir said.

Tamir and his colleagues have published three additional papers related to the Amazon Research Award. One focuses on using hyperbolic geometry to represent data; another uses unrolled alternating optimization to speed up MRI reconstruction. Tamir has also developed an open-source MRI simulator that can run on GPUs in a distributed way to find the best scan parameters for a specific reconstruction.

The road to clinical adoption

A conventional MRI reconstruction assembles the image via calculations based on the fast Fourier transform, a bedrock algorithm that resolves combinations of different frequencies. "An inverse fast Fourier transform is all it takes to turn the raw data into an image," Tamir said. "That can happen in less than a few milliseconds. It's very simple."
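That quoted pipeline is easy to demonstrate with synthetic data (a toy example, not scanner output): when k-space is fully sampled, a single inverse FFT recovers the image exactly.

```python
import numpy as np

# With fully sampled data, one inverse FFT reconstructs the image.
rng = np.random.default_rng(1)
truth = rng.standard_normal((256, 256))

kspace = np.fft.fft2(truth)        # idealized fully sampled measurements
recon = np.fft.ifft2(kspace).real  # single inverse FFT

print(np.allclose(recon, truth))   # True: exact up to floating-point error
```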

But in his work with machine learning, Tamir is doing those basic operations in an iterative way, performing a Fourier transform operation hundreds or thousands of times and then layering on additional types of computation.
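One way to picture that iterative structure is a simple data-consistency loop, sketched below. The soft-threshold step stands in for the learned components of a real method, and the function name and parameters are illustrative assumptions, not Tamir's algorithm.

```python
import numpy as np

def iterative_recon(kspace, mask, iters=50, lam=0.01):
    """Toy iterative reconstruction: alternate a Fourier-domain
    data-consistency step with a simple soft-threshold 'prior'."""
    x = np.fft.ifft2(kspace * mask)                # zero-filled start
    for _ in range(iters):
        k = np.fft.fft2(x)                         # forward transform
        k[mask] = kspace[mask]                     # enforce measured data
        x = np.fft.ifft2(k)                        # inverse transform
        # Soft-threshold the real part as a crude stand-in prior.
        x = np.sign(x.real) * np.maximum(np.abs(x.real) - lam, 0)
    return x

# Usage on synthetic data with ~50% random sampling.
rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
ks = np.fft.fft2(img)
mask = rng.random((64, 64)) < 0.5
out = iterative_recon(ks, mask)
print(out.shape)
```

Each of the 50 iterations applies a forward and an inverse FFT, illustrating why iterative methods cost hundreds or thousands of transforms where the conventional pipeline needs one.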


Those calculations are performed in the Amazon Web Services cloud. The ability to run them as quickly as possible is key not only from a research perspective but also from a clinical one: even if the method of taking the raw measurements speeds up the MRI, the clinician still must check the quality of the image while the patient is present.

“If we have a fast scan, but now the reconstruction takes 10 minutes or an hour, then that's not going to be clinically feasible," he said. "We're extending this computation, but we need to do it in a way that maintains efficiency."

Among AWS services, Tamir has used AWS Lambda to break the image reconstruction down pixel by pixel, sending small bits of data to different Lambda nodes, running the computation, and then aggregating the results.


Tamir was already familiar with AWS from his work as a graduate student at UC Berkeley, where he earned his doctorate in electrical engineering. There, he worked with Michael (Miki) Lustig, a professor of electrical engineering and computer science, on using deep learning to reduce knee scan times for patients at Stanford Children's Hospital.

As an undergrad, Tamir explored his interest in digital signal processing through unmanned aerial vehicles (UAVs), working on methods for detecting objects on the ground. After taking Lustig's Principles of MRI course at UC Berkeley, he fell in love with MRI: "It had all of the same mathematical excitement that imaging for UAVs had, but it was also something you could visually see, which was just so cool, and it had a really important societal impact."

Tamir also works with clinicians to understand MRI issues in practice. He and Léorah Freeman, a neurologist who works with multiple sclerosis (MS) patients at UT Health Austin, are trying to figure out how machine learning approaches could make brain scans faster while also detecting attributes that humans might not see.


"Tissues that look healthy to the naked eye on the brain MRI may not be healthy if we were to look at them under the microscope," Freeman said. "When we use artificial intelligence, we can look broadly into the brain and try to identify changes that may not be perceptible to the naked eye that can relate to how a patient is doing, how they're going to do in the future, and how they respond to a therapy."

Tamir and Freeman are starting by scanning the brains of healthy volunteers to establish control images to compare with those of MS patients. He hopes that the machine learning method presented at NeurIPS can be tailored to patients with MS at the Dell Medical School in Austin. It could be five to 10 years, he said, before a given method makes its way into standard MRI protocols. But that is Tamir's main goal: clinical adoption.

"We're not just trying to come up with cool methods that beat the state of the art in this controlled lab environment," he said. "We actually want to use it in the hospital, with the goal of improving patient outcomes.”
