How computer vision will help Amazon customers shop online

Three papers at CVPR present complementary methods to improve product discovery.

The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) is the premier conference in the field of computer vision, and the Amazon papers accepted there this year range in topic from neural-architecture search to human-pose tracking to handwritten-text generation.

But retail sales are still at the heart of what Amazon does, and three of Amazon’s 10 CVPR papers report ways in which computer vision could help customers shop for clothes.

One paper describes a system that lets customers sharpen a product query by describing variations on a product image. The customer could, for instance, alter the image by typing or saying “I want it to have a light floral pattern.”

A second paper reports a system that suggests items to complement those the customer has already selected, based on features such as color, style, and texture.

The third paper reports a system that can synthesize an image of a model wearing clothes from different product pages, to demonstrate how they would work together as an ensemble. All three systems use neural networks.

A query image (left) is combined with images from different product pages to produce a synthetic composite (right).

Visiolinguistic product discovery

Using text to refine an image-based product query poses three main challenges. The first is fusing textual descriptions and image features into a single representation. The second is performing that fusion at different levels of resolution: the customer should be able to say something as abstract as “Something more formal” or as precise as “change the neck style”. And the third is training the network to preserve some image features while following customers’ instructions to change others.

Yanbei Chen, a graduate student at Queen Mary University of London, who was an intern at Amazon when the work was done; Chen’s advisor, professor of visual computation Shaogang Gong; and Loris Bazzani, a senior computer vision scientist at Amazon, address these challenges with a neural network that’s trained on triples of inputs: a source image, a textual revision, and a target image that matches the revision.

Essentially, the three inputs pass through three different neural networks in parallel. But at three distinct points in the pipeline, the current representation of the source image is fused with the current representation of the text, and the fused representation is correlated with the current representation of the target image.

Because the lower levels of a neural network tend to represent lower-level features of the input (such as textures and colors) and higher levels higher-level features (such as sleeve length or tightness of fit), using this “hierarchical matching” objective to train the model ensures that it can handle textual modifications of different resolutions.
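The training objective can be made concrete with a short PyTorch-style sketch. Everything here is illustrative: the stage modules, dimensions, and the in-batch contrastive loss are stand-ins for the paper’s actual encoders and matching objective, but the structure shows how a fused (source image plus text) representation is matched against the target image’s representation at each of three levels.

import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalMatcher(nn.Module):
    # Toy model: fuse image and text features at three levels and match each
    # fused representation against the target image's features at that level.
    def __init__(self, dim=256):
        super().__init__()
        self.img_stages = nn.ModuleList([nn.Linear(dim, dim) for _ in range(3)])
        self.txt_stages = nn.ModuleList([nn.Linear(dim, dim) for _ in range(3)])
        self.fusers = nn.ModuleList([nn.Linear(2 * dim, dim) for _ in range(3)])

    def forward(self, src_img, text, tgt_img):
        losses, (x, t, y) = [], (src_img, text, tgt_img)
        for img_stage, txt_stage, fuse in zip(self.img_stages, self.txt_stages, self.fusers):
            x, t, y = img_stage(x), txt_stage(t), img_stage(y)  # shared image encoder
            fused = fuse(torch.cat([x, t], dim=-1))             # source + modification
            # Pull the fused embedding toward the target embedding at this level,
            # using the rest of the batch as negatives.
            logits = F.normalize(fused, dim=-1) @ F.normalize(y, dim=-1).T
            losses.append(F.cross_entropy(logits / 0.07, torch.arange(len(fused))))
        return sum(losses)  # hierarchical-matching objective: one term per level

model = HierarchicalMatcher()
loss = model(torch.randn(8, 256), torch.randn(8, 256), torch.randn(8, 256))
loss.backward()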

A new system that enables textual modification of product images fuses visual and linguistic information at three different levels of a neural network, to accommodate different degrees of textual granularity.
Apparel images from the Fashion IQ data set (Xiaoxiao Guo et al.), used with permission under the Community Data License Agreement.

Each fusion of linguistic and visual representations is performed by a neural network with two components. One component uses a joint attention mechanism to identify visual features that should be the same in the source and target images. The other is a transformer network that uses self-attention to identify features that should change.
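The following is a minimal sketch of such a two-branch fusion block, assuming off-the-shelf PyTorch attention modules; the module names, shapes, and attention design are assumptions for illustration, not the paper’s exact architecture.

import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    # Branch 1 uses cross-attention between image and text tokens to highlight
    # the visual features the instruction leaves unchanged; branch 2 runs
    # self-attention over the joint token sequence to model what should change.
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_attn = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, img_tokens, txt_tokens):
        keep, _ = self.cross_attn(img_tokens, txt_tokens, txt_tokens)
        joint = self.self_attn(torch.cat([img_tokens, txt_tokens], dim=1))
        change = joint[:, : img_tokens.size(1)]   # image positions only
        return self.out(torch.cat([keep, change], dim=-1))

fused = FusionBlock()(torch.randn(2, 49, 256), torch.randn(2, 12, 256))
print(fused.shape)  # torch.Size([2, 49, 256])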

In tests, the researchers found that the new system could find a valid match to a textual modification 58% more frequently than its best-performing predecessor.

Complementary-item retrieval

In the past, researchers developed systems that took outfit items as inputs and predicted their compatibility, but those systems were not optimized for large-scale retrieval.

Amazon applied scientist Yen-Liang Lin and his colleagues wanted a system that would enable product discovery at scale, and they wanted it to take multiple inputs, so that a customer could, for instance, select shirt, pants, and jacket and receive a recommendation for shoes.

The network they devised takes as inputs any number of garment images, together with a vector indicating the category of each — such as shirt, pants, or jacket. It also takes the category vector of the item the customer seeks.

The images pass through a convolutional neural network that produces a vector representation of each. Each representation then passes through a set of “masks”, which attenuate some representation features and amplify others.

The masks are learned during training, and the resulting representations encode product information (such as color and style) relevant to only a subset of complementary items. That is, some of the representations that result from the masking — called subspace representations — will be relevant to shoes, others to handbags, others to hats, and so on.
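A minimal sketch of the masking idea, with a sigmoid parameterization and invented dimensions as assumptions; in the actual system the masks are learned jointly with the rest of the network.

import torch
import torch.nn as nn

class SubspaceMasks(nn.Module):
    # K learned masks turn one item embedding into K subspace embeddings, each
    # emphasizing features relevant to a different kind of complementary item.
    def __init__(self, dim=128, num_subspaces=5):
        super().__init__()
        # Sigmoid keeps mask entries in (0, 1), so each mask attenuates some
        # embedding dimensions and passes others through.
        self.mask_logits = nn.Parameter(torch.zeros(num_subspaces, dim))

    def forward(self, item_embedding):            # (batch, dim), e.g. a CNN output
        masks = torch.sigmoid(self.mask_logits)   # (K, dim)
        return item_embedding.unsqueeze(1) * masks  # (batch, K, dim)

subspaces = SubspaceMasks()(torch.randn(4, 128))
print(subspaces.shape)  # torch.Size([4, 5, 128])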

Complementarity network.png
The architecture of the neural network used for complementary-item retrieval. From vectors representing the product categories of both input items and a target item, the network produces a set of weights (w1 – wk) that indicate which input-item features should be prioritized in selecting a complementary item.

In parallel, another network takes as input the category for each input image and the category of the target item. Its output is a set of weights, for prioritizing the subspace representations.
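A sketch of what such a weighting network might look like, with hypothetical embedding sizes and mean-pooling over the input categories; each output weight can then scale the distance computed in the corresponding subspace.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SubspaceWeights(nn.Module):
    # Map the input-item categories plus the target category to one softmax
    # weight per subspace (w1 ... wK).
    def __init__(self, num_categories=10, num_subspaces=5, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(num_categories, hidden)
        self.mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_subspaces))

    def forward(self, input_categories, target_category):
        ctx = self.embed(input_categories).mean(dim=1)   # pool the input-item set
        tgt = self.embed(target_category)
        return F.softmax(self.mlp(torch.cat([ctx, tgt], dim=-1)), dim=-1)

weights = SubspaceWeights()(torch.randint(0, 10, (4, 3)), torch.randint(0, 10, (4,)))
print(weights.shape)  # torch.Size([4, 5])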

The network is trained using an evaluation criterion that operates on the entire outfit. Each training example includes an outfit, an item that goes well with that outfit, and a group of items that do not.
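Assuming the outfit and the candidate items have been reduced to weighted subspace embeddings as in the sketches above, the outfit-level criterion might take the form of a hinge loss like this (illustrative only, not the paper’s exact objective):

import torch
import torch.nn.functional as F

def outfit_hinge_loss(outfit_emb, pos_emb, neg_embs, weights, margin=0.2):
    # outfit_emb: (K, dim) pooled subspace embeddings of the whole outfit
    # pos_emb:    (K, dim) subspace embeddings of the compatible item
    # neg_embs:   (N, K, dim) subspace embeddings of incompatible items
    # weights:    (K,) subspace weights from the category network
    d_pos = (weights * (outfit_emb - pos_emb).pow(2).sum(-1)).sum()
    d_neg = (weights * (outfit_emb - neg_embs).pow(2).sum(-1)).sum(-1)  # (N,)
    # The compatible item should beat every incompatible item by a margin.
    return F.relu(margin + d_pos - d_neg).mean()

loss = outfit_hinge_loss(torch.randn(5, 128), torch.randn(5, 128),
                         torch.randn(8, 5, 128), torch.full((5,), 0.2))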

Once the network has been trained, it can produce a vector representation of every item in a catalogue. Finding the best complement for a particular outfit is then just a matter of looking up the corresponding vectors.
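With item vectors precomputed offline, recommendation reduces to a nearest-neighbor search. A toy example using exact search follows; at catalogue scale an approximate index (such as FAISS) would typically stand in for it.

import torch

catalogue = torch.randn(100_000, 128)   # hypothetical per-item vectors for the target category
query = torch.randn(1, 128)             # vector summarizing the customer's current outfit

distances = torch.cdist(query, catalogue)                 # (1, 100_000)
top_items = distances.topk(k=10, largest=False).indices   # indices of the 10 best complements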

In experiments that used two standard measures in the literature on garment complementarity — fill-in-the-blank accuracy and compatibility area under the curve — the researchers’ system outperformed its three top predecessors, while enabling much more efficient item retrieval.

Virtual try-on network

Previously, researchers trained machine learning systems to synthesize images of figures wearing clothes from different sources by using training data that featured the same garment photographed from different perspectives. But that kind of data is extremely labor-intensive to produce.

Senior applied scientist Assaf Neuberger and his colleagues at Amazon’s Lab126 instead built a system that can be trained on single images, using generative adversarial networks, or GANs. A GAN has a component known as a discriminator, which, during training, learns to distinguish network-generated images from real images. Simultaneously, the generator learns to fool the discriminator.
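The adversarial objective at the heart of a GAN can be written compactly. This is the generic non-saturating formulation, shown only to illustrate the discriminator/generator interplay; the try-on system’s full loss adds task-specific terms on top of it.

import torch
import torch.nn.functional as F

def gan_losses(discriminator, real_images, fake_images):
    # Discriminator: push scores for real images toward 1 and for generated
    # images toward 0; detach the fakes so only the discriminator updates here.
    real_logits = discriminator(real_images)
    fake_logits = discriminator(fake_images.detach())
    d_loss = (F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits)) +
              F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits)))
    # Generator: fool the discriminator into scoring generated images as real.
    gen_logits = discriminator(fake_images)
    g_loss = F.binary_cross_entropy_with_logits(gen_logits, torch.ones_like(gen_logits))
    return d_loss, g_loss

# Toy usage with a stand-in discriminator:
disc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 1))
d_loss, g_loss = gan_losses(disc, torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64))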

The researchers’ system has three components. The first is the shape generation network, whose inputs are a query image, which will serve as the template for the final image, and any number of reference images, which depict clothes that will be transferred to the model in the query image.

Amazon researchers’ “virtual try-on network” uses a three-step process to synthesize an image of a model wearing garments from different sources.

In preprocessing, established techniques segment all the input images and compute the query figure’s body model, which represents pose and body shape. The segments selected for inclusion in the final image pass to the shape generation network, which combines them with the body model and updates the query image’s shape representation. That shape representation passes to a second network, called the appearance generation network.

The architecture of the appearance generation network is much like that of the shape generation network, except that it encodes information about texture and color rather than shape. The representation it produces is combined with the shape representation to produce a photorealistic visualization of the query model wearing the reference garments.
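Put together, the flow can be summarized in a runnable toy version. The convolutional stand-ins, channel counts, and input encodings below are all invented; the real shape and appearance generation networks are full generative models trained adversarially.

import torch
import torch.nn as nn

class TryOnPipeline(nn.Module):
    def __init__(self, channels=8):
        super().__init__()
        # Stage 1 stand-in: fuse garment segments with the body model into a shape map.
        self.shape_net = nn.Conv2d(channels + 3, channels, kernel_size=3, padding=1)
        # Stage 2 stand-in: add texture and color information on top of the shape map.
        self.appearance_net = nn.Conv2d(channels + 3, 3, kernel_size=3, padding=1)

    def forward(self, garment_segments, body_model, reference_rgb):
        # garment_segments: (B, channels, H, W) selected segment features
        # body_model:       (B, 3, H, W) pose and body-shape encoding
        # reference_rgb:    (B, 3, H, W) texture source from the reference garments
        shape = self.shape_net(torch.cat([garment_segments, body_model], dim=1))
        return self.appearance_net(torch.cat([shape, reference_rgb], dim=1))

out = TryOnPipeline()(torch.randn(1, 8, 64, 64),
                      torch.randn(1, 3, 64, 64),
                      torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 3, 64, 64])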

The third component, an appearance refinement network, fine-tunes the parameters of the appearance generation network to preserve features such as logos or distinctive patterns without compromising the silhouette of the model.

The outputs of the new system are more natural looking than those of previous systems. In the figure below, the first column is the query image, the second the reference image, the third the output of the best-performing previous system, and the fourth and fifth the outputs of the new system, without and with appearance refinement, respectively.

From left to right: query samples, reference samples, the previous system’s output, and the new system’s outputs, without and with the appearance refinement network.
