AWS VP of AI and data on computer vision research at Amazon

In his keynote address at CVPR, Swami Sivasubramanian considers the many ways that Amazon incorporates computer vision technology into its products and makes it directly available to Amazon Web Services’ customers.

At this year’s Computer Vision and Pattern Recognition Conference (CVPR) — the premier computer vision conference — Amazon Web Services’ vice president for AI and data, Swami Sivasubramanian, gave a keynote address titled “Computer vision at scale: Driving customer innovation and industry adoption”. What follows is an edited version of that talk.

Amazon has been working on AI for more than 25 years, and that includes our ongoing innovations in computer vision. Computer vision is part of Amazon’s heritage, ethos, and future — and today, we’re using it in many parts of the company.

Computer vision technology helps power our e-commerce recommendations engine on Amazon.com, as well as the customer reviews you see on our product pages. Our Prime Air drones use computer vision and deep learning, and the Echo Show uses computer vision to streamline customer interactions with Alexa. Every day, more than half a million vision-enabled robots assist with stocking inventory, filling orders, and sorting packages for delivery.

I’d like to take a closer look at a few such applications, starting with Amazon Ads.

Amazon Ads Image Generator

Advertisers often struggle to create visually appealing and effective ads, especially when it comes to generating multiple variations and optimizing for different placements and audiences. That’s why we developed an AI-powered image generation tool called Amazon Ads Image Generator.

With this tool, advertisers can input product images, logos, and text prompts, and an AI model will generate multiple versions of visually appealing ads tailored to their brands and messaging. The tool aims to simplify and streamline the ad creation process for advertisers, allowing them to produce engaging visuals more efficiently and cost effectively.

Examples of the types of ad variations generated by the Amazon Ads Image Generator.

To build the Image Generator, we used Amazon machine learning services such as Amazon SageMaker and Amazon SageMaker JumpStart, together with human-in-the-loop workflows that ensure high-quality, appropriate images. The architecture consists of modular microservices, with separate components for model development, model registry, model lifecycle management, model selection, and job tracking throughout the service, as well as a customer-facing API.

Amazon One

In the retail setting, we’re reimagining identification, entry, and payment with Amazon One, a fast, convenient, and contactless experience that lets customers leave their wallets — and even their phones — at home. Instead, they can use the palms of their hands to enter a facility, identify themselves, pay, present loyalty cards or event tickets, and even verify their ages.

Amazon One is able to recognize the unique lines, grooves, and ridges of your palm and the pattern of veins just under the skin using infrared light. At registration, proprietary algorithms capture and encrypt your palm image within seconds. The Amazon One device uses this information to create your palm signature and connect it to your credit card or your Amazon account.

To ensure Amazon One’s accuracy, we trained it on millions of synthetically generated images with subtle variations, such as illumination conditions and hand poses. We also trained our system to detect fake hands, such as a highly detailed silicone hand replica, and reject them.

Examples of the types of synthetic images used to train the Amazon One model.

Protecting customer data and safeguarding privacy are foundational design principles with Amazon One. Palm images are never stored on-device. Rather, the images are immediately encrypted and sent to a highly secure zone in the Amazon Web Services (AWS) cloud, custom-built for Amazon One, where the customer’s palm signature is created.

Customers like Crunch Fitness are taking advantage of Amazon One and features like the membership linking capability, which addresses a traditional pain point for both customers and the fitness industry. Crunch Fitness announced that it was the first fitness brand to introduce Amazon One as an entry option for its members at select locations nationwide.

NFL Next Gen Stats

Twenty-five years ago, the height of innovation in NFL broadcasts was the superimposition of a yellow line on the field to mark the first-down distance. These types of on-screen fan experiences have come a long way since then, thanks in large part to AI and machine learning (ML) technologies.

For example, as part of our ongoing partnership with the NFL, we’re delivering Prime Vision with Next Gen Stats during Thursday Night Football to provide insights gleaned by tracking RFID chips embedded in players’ shoulder pads.

One of our most recent innovations is the Defensive Alerts feature shown below, which tracks the movements of defensive players before the snap and uses an ML model to identify “players of interest” most likely to rush the quarterback (circled in red). This unique capability came out of a collaboration between the Thursday Night Football producers, engineers, and our computer vision team.

The new defensive-alert feature from NFL Next Gen Stats.

In recent months, Amazon Science has profiled a range of other Amazon computer vision projects, from Project P.I., a fulfillment center technology that uses generative AI and computer vision to help spot, isolate, and remove imperfect products before they’re delivered to customers, to Virtual Try-All, which enables customers to visualize any product in any personal setting.

But for now, I’d like to turn from Amazon products and services that rely on computer vision to the ways in which AWS puts computer vision technologies directly into our customers’ hands.

The AWS ML stack

At AWS, our mission is to make it easy for every developer, data scientist, and researcher to build intelligent applications and leverage AI-enabled services that unlock new value from their data. We do this with the industry’s most comprehensive set of ML tools, which we think of as constituting a three-layer stack.

At the top of the stack are applications that rely on large language models (LLMs), like Amazon Q, our generative-AI-powered assistant for accelerating software development and helping customers extract useful information from their data.

At the middle layer, we offer a wide variety of services that enable developers to build powerful AI applications, from our computer vision services and devices to Amazon Bedrock, a secure and easy way to build generative-AI apps with the latest and greatest foundation models and the broadest set of capabilities for security, privacy, and responsible AI.

And at the bottom layer, we provide high-performance, cost-effective infrastructure that is purpose-built for ML.

Let’s look at a few examples in more detail, starting with one of our most popular vision services: Amazon Rekognition.

Amazon Rekognition

Amazon Rekognition is a fully managed service that uses ML to automatically extract information from images and video files so that customers can build computer vision models and apps more quickly, at lower cost, and with customization for different business needs.

This includes support for a variety of use cases, from content moderation, which enables the detection of unsafe or inappropriate content across images and videos, to custom labels that enable customers to detect objects like brand logos. And most recently we introduced an anti-spoofing feature to help customers verify that only real users, and not spoofs or bad actors, can access their services.
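For a sense of how such a response might be consumed, here is a minimal sketch of filtering a content moderation result by confidence. The real API call is shown in a comment; the sample response is hypothetical but follows the documented shape of `DetectModerationLabels`, and the bucket and file names are invented.

```python
# Sketch: filtering an Amazon Rekognition content moderation response.
# A real call would look like:
#   boto3.client("rekognition").detect_moderation_labels(
#       Image={"S3Object": {"Bucket": "my-bucket", "Name": "photo.jpg"}},
#       MinConfidence=60,
#   )
# The sample response below is hypothetical but follows the documented shape.

def flag_unsafe_labels(response, min_confidence=80.0):
    """Return moderation label names at or above a confidence threshold."""
    return sorted(
        label["Name"]
        for label in response.get("ModerationLabels", [])
        if label["Confidence"] >= min_confidence
    )

sample_response = {
    "ModerationLabels": [
        {"Name": "Alcohol", "ParentName": "", "Confidence": 92.1},
        {"Name": "Smoking", "ParentName": "Tobacco", "Confidence": 61.5},
    ]
}

print(flag_unsafe_labels(sample_response))  # ['Alcohol']
```

Raising or lowering `min_confidence` is how an application trades false positives against false negatives for its particular moderation policy.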

Amazon Textract

Amazon Textract uses optical character recognition to convert images of text — whether from a scanned document, a PDF, or a photo of a document — into machine-encoded text. But it goes beyond traditional OCR by identifying not only each character and word but also the contents of fields in forms and information stored in tables.

For example, when presented with queries like the ones below, Textract can create specialized response objects by leveraging a combination of visual, spatial, and language cues. Each object assigns its query a short label, or “alias”. It then provides an answer to the query, the confidence it has in that answer, and the location of the answer on the page.
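The pairing of queries with their answers can be sketched as follows. A real request would pass `FeatureTypes=["QUERIES"]` and a `QueriesConfig` to Textract's `AnalyzeDocument` operation; the toy response below is invented but mirrors the documented `QUERY` and `QUERY_RESULT` block types, and the query text and IDs are illustrative only.

```python
# Sketch: pairing Textract QUERY blocks with their QUERY_RESULT answers.
# The block IDs, query text, and answer values here are hypothetical.

def extract_query_answers(blocks):
    """Map each query alias to its (answer text, confidence) pair."""
    by_id = {b["Id"]: b for b in blocks}
    answers = {}
    for block in blocks:
        if block["BlockType"] != "QUERY":
            continue
        alias = block["Query"].get("Alias", block["Query"]["Text"])
        for rel in block.get("Relationships", []):
            if rel["Type"] == "ANSWER":
                for rid in rel["Ids"]:
                    result = by_id[rid]
                    answers[alias] = (result["Text"], result["Confidence"])
    return answers

sample_blocks = [
    {"Id": "q1", "BlockType": "QUERY",
     "Query": {"Text": "What is the policy number?", "Alias": "POLICY_NUMBER"},
     "Relationships": [{"Type": "ANSWER", "Ids": ["a1"]}]},
    {"Id": "a1", "BlockType": "QUERY_RESULT",
     "Text": "PN-123456", "Confidence": 98.7},
]

print(extract_query_answers(sample_blocks))
# {'POLICY_NUMBER': ('PN-123456', 98.7)}
```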

An example of the outputs of a specialized Textract response object.

Amazon Bedrock

Finally, let’s look at how we’re enabling computer vision technologies with Amazon Bedrock, a fully managed service that makes it easy for customers to build and scale generative-AI applications. Tens of thousands of customers have already selected Amazon Bedrock as the foundation for their generative-AI strategies because it gives them access to the broadest selection of first- and third-party LLMs and foundation models. This includes models from AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, and Stability AI, as well as our own Titan family of models.

One of those models is the Titan Image Generator, which enables customers to produce high-quality, realistic images or enhance existing images using natural-language prompts. Amazon Science reported on the Titan Image Generator when we launched it last year at our re:Invent conference.
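A minimal sketch of composing a Titan Image Generator request through Bedrock follows. The payload follows the published Titan text-to-image schema; the prompt text, the default parameter values, and the commented-out invocation are illustrative assumptions, not a definitive integration.

```python
import json

# Sketch: building an Amazon Bedrock request body for the Titan Image
# Generator. Prompt text and defaults here are invented for illustration.

def titan_text_to_image_body(prompt, negative_prompt=None, width=1024,
                             height=1024, num_images=1, seed=0):
    params = {"text": prompt}
    if negative_prompt:
        params["negativeText"] = negative_prompt
    return json.dumps({
        "taskType": "TEXT_IMAGE",
        "textToImageParams": params,
        "imageGenerationConfig": {
            "numberOfImages": num_images,
            "width": width,
            "height": height,
            "cfgScale": 8.0,
            "seed": seed,
        },
    })

body = titan_text_to_image_body("a red bicycle in a sunlit park",
                                negative_prompt="blurry, low quality")
# A real invocation would then be roughly:
#   bedrock = boto3.client("bedrock-runtime")
#   response = bedrock.invoke_model(
#       modelId="amazon.titan-image-generator-v1", body=body)
print(json.loads(body)["taskType"])  # TEXT_IMAGE
```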

Responsible AI

We remain committed to the responsible development and deployment of AI technology, around which we made a series of voluntary commitments at the White House last year. To that end, we’ve launched new features and techniques such as invisible watermarks and a new method for assessing “hallucinations” in generative models.

By default, all Titan-generated images contain invisible watermarks, which are designed to help reduce the spread of misinformation by providing a discreet mechanism for identifying AI-generated images. AWS is among the first model providers to widely release built-in invisible watermarks that are integrated into the image outputs and are designed to be tamper-resistant.

Hallucination occurs when the data generated by a generative model do not align with reality, as represented by a knowledge base of “facts”. The alignment between representation and fact is referred to as grounding. In the case of vision-language models, the knowledge base to which generated text must align is the evidence provided in images. There is a considerable amount of work ongoing at Amazon on visual grounding, some of which was presented at CVPR.

One of the necessary elements of controlling hallucinations is to be able to measure them. Consider, for example, the following image-prompt pair and the output generated by a vision-language (VL) model. If the model extends its output with the highest-probability next word, it will hallucinate a fridge where the image includes none:

Input image, prompt, and output probabilities from a vision-language model.
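The failure mode can be made concrete with a toy example: greedy decoding picks the highest-probability next word even when that word names an object absent from the image. The probabilities and detected objects below are invented for illustration.

```python
# Toy illustration of hallucination under greedy decoding. The next-word
# probabilities and the set of objects in the image are both invented.

next_word_probs = {"refrigerator": 0.42, "sink": 0.31, "window": 0.27}
objects_in_image = {"sink", "window", "counter"}

greedy = max(next_word_probs, key=next_word_probs.get)
print(greedy)  # refrigerator -> a hallucination: not in the image

# A simple grounding filter restricts decoding to visually supported words:
grounded_probs = {w: p for w, p in next_word_probs.items()
                  if w in objects_in_image}
grounded = max(grounded_probs, key=grounded_probs.get)
print(grounded)  # sink
```

Real grounding methods are far subtler than this hard filter, but the example shows why measuring the gap between generated text and visual evidence matters.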

 Existing datasets for evaluating hallucinations typically consist of specific questions like “Is there a refrigerator in this image?” But at CVPR, our team presented a paper describing a new benchmark called THRONE, which leverages LLMs themselves to evaluate hallucinations in response to free-form, open-ended prompts such as “Describe what you see”.
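The metric idea behind free-form evaluation can be sketched as follows. THRONE uses LLMs to extract and match object mentions; here a trivial keyword match over a fixed vocabulary stands in for that step, and the description, object lists, and vocabulary are all invented.

```python
# Sketch of a free-form hallucination metric: compare the objects a model
# mentions in an open-ended description against ground-truth annotations.
# THRONE uses an LLM for extraction/matching; keyword matching stands in here.

def hallucination_rate(description, true_objects, vocabulary):
    """Fraction of mentioned vocabulary objects that are not in the image."""
    mentioned = {obj for obj in vocabulary if obj in description.lower()}
    if not mentioned:
        return 0.0
    hallucinated = mentioned - set(true_objects)
    return len(hallucinated) / len(mentioned)

vocab = ["dog", "cat", "ball", "tree"]
rate = hallucination_rate(
    "A dog chases a ball near a cat.", ["dog", "ball"], vocab)
print(round(rate, 3))  # 0.333: 'cat' is mentioned but absent from the image
```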

In other work, AWS researchers have found that one of the reasons modern transformer-based vision-language models hallucinate is that they cannot retain information about the input image prompt: they progressively “forget” it as more tokens are generated and longer contexts used.

Recently, state space models have resurfaced ideas from the ’70s in a modern key, stacking dynamical models into modular architectures that have arbitrarily long memory residing in their state. But that memory — much like human memory — grows lossier over time, so it cannot be used effectively for grounding. Hybrid models that combine state space models and attention-based networks (such as transformers) are also gaining popularity, given their high recall capabilities over longer contexts. New variants appear in the literature virtually every week.

At Amazon, we want to not only make the existing models available for builders to use but also empower researchers to explore and expand the current set of hybrid models. For this reason, we plan to open-source a class of modular hybrid architectures that are designed to make both memory and inference computation more efficient.

To enable efficient memory, these architectures use a more general elementary module that seamlessly integrates both eidetic (exact) and fading (lossy) memory, so the model can learn the optimal tradeoff. To make inference more efficient, we optimize core modules to run on the most efficient hardware — specifically, AWS Trainium, our purpose-built chip for training machine learning models.
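The memory tradeoff can be illustrated with a toy scalar example, under invented parameters: a linear state-space update gives fading (lossy) memory, in which an input's influence decays geometrically, while an explicit buffer gives eidetic (exact) memory at the cost of storing everything.

```python
# Toy sketch of fading vs. eidetic memory. The decay constant and input
# sequence are invented; real state space models use learned matrices.

def fading_state(inputs, decay=0.5):
    """x_t = decay * x_{t-1} + u_t: an input's influence shrinks as decay^k."""
    x = 0.0
    for u in inputs:
        x = decay * x + u
    return x

inputs = [1.0, 0.0, 0.0, 0.0, 0.0]
print(fading_state(inputs))   # 0.0625: the first input has decayed to 0.5**4

eidetic_buffer = list(inputs)  # exact recall, at the cost of storing everything
print(eidetic_buffer[0])       # 1.0
```

An architecture that integrates both kinds of memory in one module can, in principle, learn which inputs to retain exactly and which to let fade.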

It’s an exciting time for AI research, with innovations emerging at a breakneck pace. Amazon is committed to making those innovations available to our customers, both indirectly, in the AI-enabled products and services we offer, and directly, through AWS’s commitment to democratize AI.

Research areas

Related content

US, WA, Bellevue
The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Senior Applied Scientist to work on methodologies for Generative Artificial Intelligence (GenAI) models. As a Senior Applied Scientist, you will be responsible for leading the development of novel algorithms and modeling techniques to advance the state of the art. Your work will directly impact our customers and will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate development with multi-modal Large Language Models (LLMs) and GenAI. You will have significant influence on our overall strategy by working at the intersection of engineering and applied science to scale pre-training and post-training workflows and build efficient models. You will support the system architecture and the best practices that enable a quality infrastructure. Key job responsibilities Join us to work as an integral part of a team that has experience with GenAI models in this space. We work on these areas: - Pre-training and post-training multimodal LLMs - Scale training, optimization methods, and learning objectives - Utilize, build, and extend upon industry-leading frameworks - Work with other team members to investigate design approaches, prototype new technology, scientific techniques and evaluate technical feasibility - Deliver results independently in a self-organizing Agile environment while constantly embracing and adapting new scientific advances About the team The AGI team has a mission to push the envelope in GenAI with Large Language Models (LLMs) and multimodal systems, in order to provide the best-possible experience for our customers.
CA, BC, Vancouver
Join our Amazon Private Brands Selection Guidance organization in building science and tech solutions at scale to delight our customers with products across our leading private brands such as Amazon Basics, Amazon Essentials, and by Amazon. The Selection Guidance team applies Generative AI, Machine Learning, Statistics, and Economics solutions to drive our private brands product assortment, strategic business decisions, and product inputs such as title, price, merchandising and ordering. We are an interdisciplinary team of Scientists, Economists, Engineers, and Product Managers incubating and building day one solutions using novel technology, to solve some of the toughest business problems at Amazon. As a Sr. Data Scientist you will invent novel solutions and prototypes, and directly contribute to bringing your ideas to life through production implementation. Current research areas include entity resolution, agentic AI, large language models, and product substitutes. You will review and guide scientists across the team on their designs and implementations, and raise the team bar for science research and prototypes. This is a unique, high visibility opportunity for someone who wants to develop ambitious science solutions and have direct business and customer impact. Key job responsibilities - Partner with business stakeholders to deeply understand APB business problems and frame ambiguous business problems as science problems and solutions. - Invent novel science solutions, develop prototypes, and deploy production software to solve business problems. - Review and guide science solutions across the team. - Publish and socialize your and the team's research across Amazon and external avenues as appropriate - Leverage industry best practices to establish repeatable applied science practices, principles & processes.
US, WA, Seattle
We are looking for a passionate Applied Scientist to help pioneer the next generation of agentic AI applications for Amazon advertisers. In this role, you will design agentic architectures, develop tools and datasets, and contribute to building systems that can reason, plan, and act autonomously across complex advertiser workflows. You will work at the forefront of applied AI, developing methods for fine-tuning, reinforcement learning, and preference optimization, while helping create evaluation frameworks that ensure safety, reliability, and trust at scale. You will work backwards from the needs of advertisers—delivering customer-facing products that directly help them create, optimize, and grow their campaigns. Beyond building models, you will advance the agent ecosystem by experimenting with and applying core primitives such as tool orchestration, multi-step reasoning, and adaptive preference-driven behavior. This role requires working independently on ambiguous technical problems, collaborating closely with scientists, engineers, and product managers to bring innovative solutions into production. Key job responsibilities - Design and build agents to guide advertisers in conversational and non-conversational experience. - Design and implement advanced model and agent optimization techniques, including supervised fine-tuning, instruction tuning and preference optimization (e.g., DPO/IPO). - Curate datasets and tools for MCP. - Build evaluation pipelines for agent workflows, including automated benchmarks, multi-step reasoning tests, and safety guardrails. - Develop agentic architectures (e.g., CoT, ToT, ReAct) that integrate planning, tool use, and long-horizon reasoning. - Prototype and iterate on multi-agent orchestration frameworks and workflows. - Collaborate with peers across engineering and product to bring scientific innovations into production. 
- Stay current with the latest research in LLMs, RL, and agent-based AI, and translate findings into practical applications. About the team The Sponsored Products and Brands team at Amazon Ads is re-imagining the advertising landscape through the latest generative AI technologies, revolutionizing how millions of customers discover products and engage with brands across Amazon.com and beyond. We are at the forefront of re-inventing advertising experiences, bridging human creativity with artificial intelligence to transform every aspect of the advertising lifecycle from ad creation and optimization to performance analysis and customer insights. We are a passionate group of innovators dedicated to developing responsible and intelligent AI technologies that balance the needs of advertisers, enhance the shopping experience, and strengthen the marketplace. If you're energized by solving complex challenges and pushing the boundaries of what's possible with AI, join us in shaping the future of advertising. The Advertiser Guidance team within Sponsored Products and Brands is focused on guiding and supporting 1.6MM advertisers to meet their advertising needs of creating and managing ad campaigns. At this scale, the complexity of diverse advertiser goals, campaign types, and market dynamics creates both a massive technical challenge and a transformative opportunity: even small improvements in guidance systems can have outsized impact on advertiser success and Amazon’s retail ecosystem. Our vision is to build a highly personalized, context-aware agentic advertiser guidance system that leverages LLMs together with tools such as auction simulations, ML models, and optimization algorithms. This agentic framework, will operate across both chat and non-chat experiences in the ad console, scaling to natural language queries as well as proactively delivering guidance based on deep understanding of the advertiser. 
To execute this vision, we collaborate closely with stakeholders across Ad Console, Sales, and Marketing to identify opportunities—from high-level product guidance down to granular keyword recommendations—and deliver them through a tailored, personalized experience. Our work is grounded in state-of-the-art agent architectures, tool integration, reasoning frameworks, and model customization approaches (including tuning, MCP, and preference optimization), ensuring our systems are both scalable and adaptive.
US, CA, Sunnyvale
Prime Video is a first-stop entertainment destination offering customers a vast collection of premium programming in one app available across thousands of devices. Prime members can customize their viewing experience and find their favorite movies, series, documentaries, and live sports – including Amazon MGM Studios-produced series and movies; licensed fan favorites; and programming from Prime Video add-on subscriptions such as Apple TV+, Max, Crunchyroll and MGM+. All customers, regardless of whether they have a Prime membership or not, can rent or buy titles via the Prime Video Store, and can enjoy even more content for free with ads. Are you interested in shaping the future of entertainment? Prime Video's technology teams are creating best-in-class digital video experience. As a Prime Video technologist, you’ll have end-to-end ownership of the product, user experience, design, and technology required to deliver state-of-the-art experiences for our customers. You’ll get to work on projects that are fast-paced, challenging, and varied. You’ll also be able to experiment with new possibilities, take risks, and collaborate with remarkable people. We’ll look for you to bring your diverse perspectives, ideas, and skill-sets to make Prime Video even better for our customers. With global opportunities for talented technologists, you can decide where a career Prime Video Tech takes you! 
Key job responsibilities - Develop ML models for various recommendation & search systems using deep learning, online learning, and optimization methods - Work closely with other scientists, engineers and product managers to expand the depth of our product insights with data, create a variety of experiments to determine the high impact projects to include in planning roadmaps - Stay up-to-date with advancements and the latest modeling techniques in the field - Publish your research findings in top conferences and journals A day in the life We're using advanced approaches such as foundation models to connect information about our videos and customers from a variety of information sources, acquiring and processing data sets on a scale that only a few companies in the world can match. This will enable us to recommend titles effectively, even when we don't have a large behavioral signal (to tackle the cold-start title problem). It will also allow us to find our customer's niche interests, helping them discover groups of titles that they didn't even know existed. We are looking for creative & customer obsessed machine learning scientists who can apply the latest research, state of the art algorithms and ML to build highly scalable page personalization solutions. You'll be a research leader in the space and a hands-on ML practitioner, guiding and collaborating with talented teams of engineers and scientists and senior leaders in the Prime Video organization. You will also have the opportunity to publish your research at internal and external conferences. About the team Prime Video Recommendation Science team owns science solution to power recommendation and personalization experience on various Prime Video surfaces and devices. We work closely with the engineering teams to launch our solutions in production.
US, CA, Sunnyvale
Prime Video is a first-stop entertainment destination offering customers a vast collection of premium programming in one app available across thousands of devices. Prime members can customize their viewing experience and find their favorite movies, series, documentaries, and live sports – including Amazon MGM Studios-produced series and movies; licensed fan favorites; and programming from Prime Video add-on subscriptions such as Apple TV+, Max, Crunchyroll and MGM+. All customers, regardless of whether they have a Prime membership or not, can rent or buy titles via the Prime Video Store, and can enjoy even more content for free with ads. Are you interested in shaping the future of entertainment? Prime Video's technology teams are creating best-in-class digital video experience. As a Prime Video technologist, you’ll have end-to-end ownership of the product, user experience, design, and technology required to deliver state-of-the-art experiences for our customers. You’ll get to work on projects that are fast-paced, challenging, and varied. You’ll also be able to experiment with new possibilities, take risks, and collaborate with remarkable people. We’ll look for you to bring your diverse perspectives, ideas, and skill-sets to make Prime Video even better for our customers. With global opportunities for talented technologists, you can decide where a career Prime Video Tech takes you! 
Key job responsibilities - Develop ML models for various recommendation & search systems using deep learning, online learning, and optimization methods - Work closely with other scientists, engineers and product managers to expand the depth of our product insights with data, create a variety of experiments to determine the high impact projects to include in planning roadmaps - Stay up-to-date with advancements and the latest modeling techniques in the field - Publish your research findings in top conferences and journals A day in the life We're using advanced approaches such as foundation models to connect information about our videos and customers from a variety of information sources, acquiring and processing data sets on a scale that only a few companies in the world can match. This will enable us to recommend titles effectively, even when we don't have a large behavioral signal (to tackle the cold-start title problem). It will also allow us to find our customer's niche interests, helping them discover groups of titles that they didn't even know existed. We are looking for creative & customer obsessed machine learning scientists who can apply the latest research, state of the art algorithms and ML to build highly scalable page personalization solutions. You'll be a research leader in the space and a hands-on ML practitioner, guiding and collaborating with talented teams of engineers and scientists and senior leaders in the Prime Video organization. You will also have the opportunity to publish your research at internal and external conferences. About the team Prime Video Recommendation Science team owns science solution to power recommendation and personalization experience on various Prime Video surfaces and devices. We work closely with the engineering teams to launch our solutions in production.
US, CA, Sunnyvale
Prime Video is a first-stop entertainment destination offering customers a vast collection of premium programming in one app available across thousands of devices. Prime members can customize their viewing experience and find their favorite movies, series, documentaries, and live sports – including Amazon MGM Studios-produced series and movies; licensed fan favorites; and programming from Prime Video add-on subscriptions such as Apple TV+, Max, Crunchyroll and MGM+. All customers, regardless of whether they have a Prime membership or not, can rent or buy titles via the Prime Video Store, and can enjoy even more content for free with ads. Are you interested in shaping the future of entertainment? Prime Video's technology teams are creating best-in-class digital video experience. As a Prime Video technologist, you’ll have end-to-end ownership of the product, user experience, design, and technology required to deliver state-of-the-art experiences for our customers. You’ll get to work on projects that are fast-paced, challenging, and varied. You’ll also be able to experiment with new possibilities, take risks, and collaborate with remarkable people. We’ll look for you to bring your diverse perspectives, ideas, and skill-sets to make Prime Video even better for our customers. With global opportunities for talented technologists, you can decide where a career Prime Video Tech takes you! We are looking for a self-motivated, passionate and resourceful Applied Scientist to bring diverse perspectives, ideas, and skill-sets to make Prime Video even better for our customers. You will spend your time as a hands-on machine learning practitioner and a research leader. You will play a key role on the team, building and guiding machine learning models from the ground up. At the end of the day, you will have the reward of seeing your contributions benefit millions of Amazon.com customers worldwide. 
Key job responsibilities - Develop AI solutions for various Prime Video Search systems using Deep learning, GenAI, Reinforcement Learning, and optimization methods; - Work closely with engineers and product managers to design, implement and launch AI solutions end-to-end; - Design and conduct offline and online (A/B) experiments to evaluate proposed solutions based on in-depth data analyses; - Effectively communicate technical and non-technical ideas with teammates and stakeholders; - Stay up-to-date with advancements and the latest modeling techniques in the field; - Publish your research findings in top conferences and journals. About the team Prime Video Search Science team owns science solution to power search experience on various devices, from sourcing, relevance, ranking, to name a few. We work closely with the engineering teams to launch our solutions in production.
US, CA, Sunnyvale
Prime Video is a first-stop entertainment destination offering customers a vast collection of premium programming in one app available across thousands of devices. Prime members can customize their viewing experience and find their favorite movies, series, documentaries, and live sports – including Amazon MGM Studios-produced series and movies; licensed fan favorites; and programming from Prime Video add-on subscriptions such as Apple TV+, Max, Crunchyroll and MGM+. All customers, regardless of whether they have a Prime membership or not, can rent or buy titles via the Prime Video Store, and can enjoy even more content for free with ads. Are you interested in shaping the future of entertainment? Prime Video's technology teams are creating best-in-class digital video experience. As a Prime Video technologist, you’ll have end-to-end ownership of the product, user experience, design, and technology required to deliver state-of-the-art experiences for our customers. You’ll get to work on projects that are fast-paced, challenging, and varied. You’ll also be able to experiment with new possibilities, take risks, and collaborate with remarkable people. We’ll look for you to bring your diverse perspectives, ideas, and skill-sets to make Prime Video even better for our customers. With global opportunities for talented technologists, you can decide where a career Prime Video Tech takes you! We are looking for a self-motivated, passionate and resourceful Applied Scientist to bring diverse perspectives, ideas, and skill-sets to make Prime Video even better for our customers. You will spend your time as a hands-on machine learning practitioner and a research leader. You will play a key role on the team, building and guiding machine learning models from the ground up. At the end of the day, you will have the reward of seeing your contributions benefit millions of Amazon.com customers worldwide. 
Key job responsibilities
- Develop AI solutions for various Prime Video Search systems using deep learning, generative AI, reinforcement learning, and optimization methods
- Work closely with engineers and product managers to design, implement, and launch AI solutions end-to-end
- Design and conduct offline and online (A/B) experiments to evaluate proposed solutions based on in-depth data analyses
- Effectively communicate technical and non-technical ideas to teammates and stakeholders
- Stay up-to-date with advancements and the latest modeling techniques in the field
- Publish your research findings in top conferences and journals

About the team
The Prime Video Search Science team owns the science solutions that power the search experience on various devices — sourcing, relevance, and ranking, to name a few. We work closely with the engineering teams to launch our solutions in production.
US, CA, Pasadena
The Amazon Web Services (AWS) Center for Quantum Computing (CQC) is a multidisciplinary team of scientists, engineers, and technicians on a mission to develop a fault-tolerant quantum computer. You will be joining a team located in Pasadena, CA, that conducts materials research to improve the performance of superconducting quantum processors. We seek a quantum research scientist to investigate how material defects affect qubit performance. In this role, you will combine expertise in numerical simulations and materials characterization to study materials loss mechanisms such as two-level systems, quasiparticles, and vortices.

Key job responsibilities
- Provide subject matter expertise on integrated experimental and computational studies of materials defects
- Develop and use computational tools for large-scale simulations of disordered structures
- Develop and implement multi-technique materials characterization workflows for thin films and devices, with a focus on surfaces and interfaces
- Identify material properties that can serve as reliable proxies for the performance of superconducting resonators and qubits
- Communicate findings to teammates and the broader CQC team and, when appropriate, publish findings in scientific journals

A day in the life
At the AWS CQC, we understand that developing quantum computing technology is a marathon, not a sprint. The work/life integration within our team encourages a culture where employees work hard and also have ownership over their downtime. We are committed to the growth and development of every employee at the AWS CQC, and that includes our research scientists. You will receive management and mentorship from within the team that is geared toward career growth, and you will also have the opportunity to participate in Amazon’s mentorship programs for scientists and engineers.
Working closely with quantum research scientists in other disciplines – like design, measurement, and cryogenic hardware – will provide opportunities to dive deep into an education on quantum computing.

About the team
Our team contributes to the fabrication of processors and other hardware that enable quantum computing technologies. Doing that necessitates the development of materials with tailored properties for superconducting circuits. Research scientists and engineers on the materials team operate deposition and characterization systems in order to develop and optimize thin-film processes for use in these devices. They work alongside other research scientists and engineers to help deliver the fabricated devices for quantum computing experiments.

Export Control Requirement: Due to applicable export control laws and regulations, candidates must be a U.S. citizen or national, a U.S. permanent resident (i.e., a current green card holder), lawfully admitted into the U.S. as a refugee or granted asylum, or able to obtain a U.S. export license. If you are unsure whether you meet these requirements, please apply, and Amazon will review your application for eligibility.

Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let that stop you from applying.

Why AWS?
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture
AWS values curiosity and connection.
Our employee-led and company-sponsored affinity groups promote inclusion and empower our people to take pride in what makes us unique. Our inclusion events foster stronger, more collaborative teams. Our continual innovation is fueled by the bold ideas, fresh perspectives, and passionate voices our teams bring to everything we do.

Mentorship & Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge sharing, mentorship, and other career-advancing resources here to help you develop into a better-rounded professional.

Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of your life at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve.
US, CA, Sunnyvale
Prime Video is a first-stop entertainment destination offering customers a vast collection of premium programming in one app available across thousands of devices. Prime members can customize their viewing experience and find their favorite movies, series, documentaries, and live sports – including Amazon MGM Studios-produced series and movies, licensed fan favorites, and programming from Prime Video add-on subscriptions such as Apple TV+, Max, Crunchyroll, and MGM+. All customers, regardless of whether they have a Prime membership, can rent or buy titles via the Prime Video Store and can enjoy even more content for free with ads.

Are you interested in shaping the future of entertainment? Prime Video’s technology teams are creating best-in-class digital video experiences. As a Prime Video technologist, you’ll have end-to-end ownership of the product, user experience, design, and technology required to deliver state-of-the-art experiences for our customers. You’ll get to work on projects that are fast-paced, challenging, and varied. You’ll also be able to experiment with new possibilities, take risks, and collaborate with remarkable people. We’ll look for you to bring your diverse perspectives, ideas, and skill sets to make Prime Video even better for our customers. With global opportunities for talented technologists, you can decide where a career with Prime Video Tech takes you!
Key job responsibilities
- Develop ML models for various recommendation and search systems using deep learning, online learning, and optimization methods
- Work closely with other scientists, engineers, and product managers to expand the depth of our product insights with data, and create a variety of experiments to determine the high-impact projects to include in planning roadmaps
- Stay up-to-date with advancements and the latest modeling techniques in the field
- Publish your research findings in top conferences and journals

A day in the life
We’re using advanced approaches such as foundation models to connect information about our videos and customers from a variety of information sources, acquiring and processing data sets on a scale that only a few companies in the world can match. This will enable us to recommend titles effectively even when we don’t have a large behavioral signal (to tackle the cold-start title problem). It will also allow us to find our customers’ niche interests, helping them discover groups of titles that they didn’t even know existed.

We are looking for creative and customer-obsessed machine learning scientists who can apply the latest research and state-of-the-art algorithms to build highly scalable page-personalization solutions. You’ll be a research leader in the space and a hands-on ML practitioner, guiding and collaborating with talented teams of engineers, scientists, and senior leaders in the Prime Video organization. You will also have the opportunity to publish your research at internal and external conferences.

About the team
The Prime Video Recommendation Science team owns the science solutions that power the recommendation and personalization experience on various Prime Video surfaces and devices. We work closely with the engineering teams to launch our solutions in production.
US, CA, Cupertino
We are seeking a highly skilled data scientist to join our Machine Learning Architecture team, focusing on power and performance optimization for ML acceleration workloads across Amazon’s global data center infrastructure. This role combines advanced data science techniques with a deep technical understanding of ML hardware acceleration to drive efficiency improvements in training and inference workloads at massive scale.

Key job responsibilities
Data Analysis & Optimization
- Analyze power consumption and performance metrics across all Amazon data centers for machine learning acceleration workloads
- Develop predictive models and statistical frameworks to identify optimization opportunities and performance bottlenecks
- Create automated monitoring and alerting systems for power and performance anomalies

Strategic Planning & Deployment Guidance
- Provide data-driven recommendations for server deployments and capacity-planning decisions across Amazon’s global data center network
- Develop optimization scenarios and business cases to improve capacity delivery efficiency to customers worldwide
- Support strategic decision-making through comprehensive analysis of power, performance, and cost trade-offs

Cross-Functional Collaboration
- Partner with software engineering teams to optimize ML frameworks, drivers, and runtime systems
- Collaborate with hardware engineering teams to influence chip design, server architecture, and cooling-system optimization
- Work closely with data center operations teams to implement and validate optimization strategies

Research & Development
- Conduct applied research on emerging ML acceleration technologies and their power/performance characteristics
- Develop novel methodologies for measuring and improving energy efficiency in large-scale ML workloads
- Publish findings and contribute to industry best practices in sustainable ML infrastructure