AWS VP of AI and data on computer vision research at Amazon

In his keynote address at CVPR, Swami Sivasubramanian considers the many ways that Amazon incorporates computer vision technology into its products and makes it directly available to Amazon Web Services’ customers.

At this year’s Computer Vision and Pattern Recognition Conference (CVPR) — the premier computer vision conference — Amazon Web Services’ vice president for AI and data, Swami Sivasubramanian, gave a keynote address titled “Computer vision at scale: Driving customer innovation and industry adoption”. What follows is an edited version of that talk.

Amazon has been working on AI for more than 25 years, and that includes our ongoing innovations in computer vision. Computer vision is part of Amazon’s heritage, ethos, and future — and today, we’re using it in many parts of the company.

Computer vision technology helps power our e-commerce recommendation engine on Amazon.com, as well as the customer reviews you see on our product pages. Our Prime Air drones use computer vision and deep learning, and the Echo Show uses computer vision to streamline customer interactions with Alexa. Every day, more than half a million vision-enabled robots assist with stocking inventory, filling orders, and sorting packages for delivery.

I’d like to take a closer look at a few such applications, starting with Amazon Ads.

Amazon Ads Image Generator

Advertisers often struggle to create visually appealing and effective ads, especially when it comes to generating multiple variations and optimizing for different placements and audiences. That’s why we developed an AI-powered image generation tool called Amazon Ads Image Generator.

With this tool, advertisers can input product images, logos, and text prompts, and an AI model will generate multiple versions of visually appealing ads tailored to their brands and messaging. The tool aims to simplify and streamline the ad creation process for advertisers, allowing them to produce engaging visuals more efficiently and cost effectively.

Examples of the types of ad variations generated by the Amazon Ads Image Generator.

To build the Image Generator, we used both Amazon machine learning services — such as Amazon SageMaker and Amazon SageMaker JumpStart — and human-in-the-loop workflows that ensure high-quality, appropriate images. The architecture consists of modular microservices, with separate components for model development, model registry, model lifecycle management, model selection, and job tracking across the service, as well as a customer-facing API.

Amazon One

In the retail setting, we’re reimagining identification, entry, and payment with Amazon One, a fast, convenient, and contactless experience that lets customers leave their wallets — and even their phones — at home. Instead, they can use the palms of their hands to enter a facility, identify themselves, pay, present loyalty cards or event tickets, and even verify their ages.

Amazon One is able to recognize the unique lines, grooves, and ridges of your palm and the pattern of veins just under the skin using infrared light. At registration, proprietary algorithms capture and encrypt your palm image within seconds. The Amazon One device uses this information to create your palm signature and connect it to your credit card or your Amazon account.

To ensure Amazon One’s accuracy, we trained it on millions of synthetically generated images with subtle variations, such as illumination conditions and hand poses. We also trained our system to detect fake hands, such as a highly detailed silicone hand replica, and reject them.

Examples of the types of synthetic images used to train the Amazon One model.

Protecting customer data and safeguarding privacy are foundational design principles with Amazon One. Palm images are never stored on-device. Rather, the images are immediately encrypted and sent to a highly secure zone in the Amazon Web Services (AWS) cloud, custom-built for Amazon One, where the customer’s palm signature is created.

Customers like Crunch Fitness are taking advantage of Amazon One and features like the membership linking capability, which addresses a traditional pain point for both customers and the fitness industry. Crunch Fitness announced that it was the first fitness brand to introduce Amazon One as an entry option for its members at select locations nationwide.

NFL Next Gen Stats


Twenty-five years ago, the height of innovation in NFL broadcasts was the superimposition of a yellow line on the field to mark the first-down distance. These types of on-screen fan experiences have come a long way since then, thanks in large part to AI and machine learning (ML) technologies.

For example, as part of our ongoing partnership with the NFL, we’re delivering Prime Vision with Next Gen Stats during Thursday Night Football to provide insights gleaned by tracking RFID chips embedded in players’ shoulder pads.

One of our most recent innovations is the Defensive Alerts feature shown below, which tracks the movements of defensive players before the snap and uses an ML model to identify “players of interest” most likely to rush the quarterback (circled in red). This unique capability came out of a collaboration between the Thursday Night Football producers, engineers, and our computer vision team.

The new Defensive Alerts feature from NFL Next Gen Stats.

In recent months, Amazon Science has profiled a range of other Amazon computer vision projects, from Project P.I., a fulfillment center technology that uses generative AI and computer vision to help spot, isolate, and remove imperfect products before they’re delivered to customers, to Virtual Try-All, which enables customers to visualize any product in any personal setting.

But for now, I’d like to turn from Amazon products and services that rely on computer vision to the ways in which AWS puts computer vision technologies directly into our customers’ hands.

The AWS ML stack

At AWS, our mission is to make it easy for every developer, data scientist, and researcher to build intelligent applications and leverage AI-enabled services that unlock new value from their data. We do this with the industry’s most comprehensive set of ML tools, which we think of as constituting a three-layer stack.

At the top of the stack are applications that rely on large language models (LLMs), like Amazon Q, our generative-AI-powered assistant for accelerating software development and helping customers extract useful information from their data.


At the middle layer, we offer a wide variety of services that enable developers to build powerful AI applications, from our computer vision services and devices to Amazon Bedrock, a secure and easy way to build generative-AI apps with the latest and greatest foundation models and the broadest set of capabilities for security, privacy, and responsible AI.

And at the bottom layer, we provide high-performance, cost-effective infrastructure that is purpose-built for ML.

Let’s look at a few examples in more detail, starting with one of our most popular vision services: Amazon Rekognition.

Amazon Rekognition

Amazon Rekognition is a fully managed service that uses ML to automatically extract information from images and video files so that customers can build computer vision models and apps more quickly, at lower cost, and with customization for different business needs.

This includes support for a variety of use cases, from content moderation, which enables the detection of unsafe or inappropriate content across images and videos, to custom labels that enable customers to detect objects like brand logos. And most recently we introduced an anti-spoofing feature to help customers verify that only real users, and not spoofs or bad actors, can access their services.
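In practice, applications act on Rekognition’s moderation results by thresholding the per-label confidence scores the service returns. The sketch below is illustrative, not production guidance: the response shape follows the documented DetectModerationLabels output, but the sample labels, scores, and the 80% threshold are assumptions for the example.

```python
# Hedged sketch: filtering an Amazon Rekognition DetectModerationLabels
# response by confidence. The response shape follows the documented API;
# the sample labels and threshold below are illustrative only.

def flag_unsafe_labels(response, min_confidence=80.0):
    """Return moderation label names at or above the confidence threshold."""
    return [
        label["Name"]
        for label in response.get("ModerationLabels", [])
        if label["Confidence"] >= min_confidence
    ]

# Example response in the shape returned by
# rekognition.detect_moderation_labels(Image={...})
sample_response = {
    "ModerationLabels": [
        {"Name": "Alcohol", "Confidence": 92.4, "ParentName": ""},
        {"Name": "Drinking", "Confidence": 61.0, "ParentName": "Alcohol"},
    ]
}

print(flag_unsafe_labels(sample_response))
```

Thresholding client-side lets each application tune its own tolerance without changing the service call.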

Amazon Textract

Amazon Textract uses optical character recognition (OCR) to convert images of text — whether from a scanned document, a PDF, or a photo of a document — into machine-encoded text. But it goes beyond traditional OCR technology by identifying not only individual characters and words but also the contents of fields in forms and the information stored in tables.

For example, when presented with queries like the ones below, Textract can create specialized response objects by leveraging a combination of visual, spatial, and language cues. Each object assigns its query a short label, or “alias”. It then provides an answer to the query, the confidence it has in that answer, and the location of the answer on the page.

An example of the outputs of a specialized Textract response object.
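Concretely, in a Textract AnalyzeDocument response, each query appears as a QUERY block whose ANSWER relationship points to a QUERY_RESULT block carrying the answer text and confidence. The sketch below pairs them up; the block shapes follow the documented API, but the sample query and answer are made up for illustration.

```python
# Hedged sketch: pairing QUERY and QUERY_RESULT blocks from an Amazon
# Textract AnalyzeDocument response. Block shapes follow the documented
# API; the sample blocks below are illustrative, not real output.

def pair_query_answers(blocks):
    """Map each query alias to its (answer text, confidence) pair."""
    by_id = {b["Id"]: b for b in blocks}
    results = {}
    for block in blocks:
        if block["BlockType"] != "QUERY":
            continue
        alias = block["Query"].get("Alias", block["Query"]["Text"])
        for rel in block.get("Relationships", []):
            if rel["Type"] == "ANSWER":
                for answer_id in rel["Ids"]:
                    answer = by_id[answer_id]
                    results[alias] = (answer["Text"], answer["Confidence"])
    return results

sample_blocks = [
    {"Id": "q1", "BlockType": "QUERY",
     "Query": {"Text": "What is the patient's name?", "Alias": "PATIENT_NAME"},
     "Relationships": [{"Type": "ANSWER", "Ids": ["a1"]}]},
    {"Id": "a1", "BlockType": "QUERY_RESULT",
     "Text": "Jane Doe", "Confidence": 98.2},
]

print(pair_query_answers(sample_blocks))
```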

Amazon Bedrock

Finally, let’s look at how we’re enabling computer vision technologies with Amazon Bedrock, a fully managed service that makes it easy for customers to build and scale generative-AI applications. Tens of thousands of customers have already selected Amazon Bedrock as the foundation for their generative-AI strategies because it gives them access to the broadest selection of first- and third-party LLMs and foundation models. This includes models from AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, and Stability AI, as well as our own Titan family of models.


One of those models is the Titan Image Generator, which enables customers to produce high-quality, realistic images or enhance existing images using natural-language prompts. Amazon Science reported on the Titan Image Generator when we launched it last year at our re:Invent conference.

Responsible AI

We remain committed to the responsible development and deployment of AI technology, around which we made a series of voluntary commitments at the White House last year. To that end, we’ve launched new features and techniques such as invisible watermarks and a new method for assessing “hallucinations” in generative models.

By default, all Titan-generated images contain invisible watermarks, which are designed to help reduce the spread of misinformation by providing a discreet mechanism for identifying AI-generated images. AWS is among the first model providers to widely release built-in invisible watermarks that are integrated into the image outputs and are designed to be tamper-resistant.


Hallucination occurs when the data generated by a generative model do not align with reality, as represented by a knowledge base of “facts”. The alignment between representation and fact is referred to as grounding. In the case of vision-language models, the knowledge base to which generated text must align is the evidence provided in images. There is a considerable amount of work ongoing at Amazon on visual grounding, some of which was presented at CVPR.
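One simple way to make this notion of grounding concrete is an object-level check: count how many of the objects a model mentions are absent from the image’s ground-truth object set. The sketch below is an illustrative simplification in that spirit, not the evaluation method used in the work discussed here.

```python
# Hedged sketch of a simple object-level hallucination measure: the
# fraction of objects mentioned in a generated caption that have no
# grounding in the image's ground-truth object set. Illustrative only.

def hallucination_rate(mentioned_objects, ground_truth_objects):
    """Fraction of mentioned objects absent from the image."""
    mentioned = set(mentioned_objects)
    if not mentioned:
        return 0.0
    ungrounded = mentioned - set(ground_truth_objects)
    return len(ungrounded) / len(mentioned)

# A caption that mentions a fridge the image does not contain:
caption_objects = ["stove", "sink", "fridge"]
image_objects = ["stove", "sink", "counter"]
print(hallucination_rate(caption_objects, image_objects))
```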

One of the necessary elements of controlling hallucinations is to be able to measure them. Consider, for example, the following image-prompt pair and the output generated by a vision-language (VL) model. If the model extends its output with the highest-probability next word, it will hallucinate a fridge where the image includes none:

Input image, prompt, and output probabilities from a vision-language model.
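The failure mode above can be shown with a toy example: greedy decoding takes the highest-probability next token even when that token is not grounded in the image, whereas restricting the choice to grounded tokens avoids the hallucination. The probabilities and token sets here are made up for illustration.

```python
# Toy illustration: greedy decoding picks the highest-probability next
# token even when it is not grounded in the image. Probabilities and
# the grounded-token set are invented for this example.

next_token_probs = {"fridge": 0.41, "stove": 0.33, "window": 0.26}
grounded_tokens = {"stove", "window"}  # objects actually present in the image

greedy = max(next_token_probs, key=next_token_probs.get)
constrained = max(
    (t for t in next_token_probs if t in grounded_tokens),
    key=next_token_probs.get,
)
print(greedy, constrained)  # greedy hallucinates; constrained stays grounded
```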

Existing datasets for evaluating hallucinations typically consist of specific questions like “Is there a refrigerator in this image?” But at CVPR, our team presented a paper describing a new benchmark called THRONE, which leverages LLMs themselves to evaluate hallucinations in response to free-form, open-ended prompts such as “Describe what you see”.

In other work, AWS researchers have found that one of the reasons modern transformer-based vision-language models hallucinate is that they cannot retain information about the input image prompt: they progressively “forget” it as more tokens are generated and longer contexts are used.


Recently, state space models have resurfaced ideas from the ’70s in a modern key, stacking dynamical models into modular architectures that have arbitrarily long memory residing in their state. But that memory — much like human memory — grows lossier over time, so it cannot be used effectively for grounding. Hybrid models that combine state space models and attention-based networks (such as transformers) are also gaining popularity, given their high recall capabilities over longer contexts. New variants appear in the literature almost weekly.

At Amazon, we want to not only make the existing models available for builders to use but also empower researchers to explore and expand the current set of hybrid models. For this reason, we plan to open-source a class of modular hybrid architectures that are designed to make both memory and inference computation more efficient.

To enable efficient memory, these architectures use a more general elementary module that seamlessly integrates both eidetic (exact) and fading (lossy) memory, so the model can learn the optimal tradeoff. To make inference more efficient, we optimize core modules to run on the most efficient hardware — specifically, AWS Trainium, our purpose-built chip for training machine learning models.
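The eidetic-versus-fading distinction can be sketched with a toy module that keeps an exact buffer of the k most recent inputs alongside an exponentially decaying running state. This is a minimal illustration of the tradeoff, assuming a fixed buffer size and decay constant; the architectures described above learn this tradeoff rather than fixing it.

```python
# Toy sketch of eidetic (exact) vs. fading (lossy) memory. Buffer size
# and decay constant are illustrative assumptions, not the real design.

from collections import deque

class ToyHybridMemory:
    def __init__(self, buffer_size=4, decay=0.9):
        self.eidetic = deque(maxlen=buffer_size)  # exact recent history
        self.fading = 0.0                         # lossy long-range state
        self.decay = decay

    def step(self, x):
        self.eidetic.append(x)  # perfect recall of the last k inputs
        # Older inputs are never dropped outright, but their contribution
        # to the fading state shrinks geometrically over time.
        self.fading = self.decay * self.fading + (1 - self.decay) * x
        return list(self.eidetic), self.fading

mem = ToyHybridMemory()
for x in [1.0, 2.0, 3.0, 4.0, 5.0]:
    recent, state = mem.step(x)
print(recent)  # the last four inputs, recalled exactly
```

The eidetic buffer answers recent-context queries exactly, while the fading state summarizes everything older at ever-lower fidelity — the tradeoff the learned module is meant to optimize.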

It's an exciting time for AI research, with innovations emerging at a breakneck pace. Amazon is committed to making those innovations available to our customers, both indirectly, in the AI-enabled products and services we offer, and directly, through AWS’s commitment to democratize AI.

Research areas

Related content

IL, Haifa
We’re looking for a Principal Applied Scientist in the Personalization team with experience in generative AI and large models. You will be responsible for developing and disseminating customer-facing personalized recommendation models. This is a hands-on role with global impact working with a team of world-class engineers and scientists across the wider organization. You will lead the design of machine learning models that scale to very large quantities of data, and serve high-scale low-latency recommendations to all customers worldwide. You will embody scientific rigor, designing and executing experiments to demonstrate the technical efficacy and business value of your methods. You will work alongside a science team to delight customers by aiding in recommendations relevancy, and raise the profile of Amazon as a global leader in machine learning and personalization. Successful candidates will have strong technical ability, focus on customers by applying a customer-first approach, excellent teamwork and communication skills, and a motivation to achieve results in a fast-paced environment. Our position offers exceptional opportunities for every candidate to grow their technical and non-technical skills. If you are selected, you have the opportunity to make a difference to our business by designing and building state of the art machine learning systems on big data, leveraging Amazon’s vast computing resources (AWS), working on exciting and challenging projects, and delivering meaningful results to customers world-wide. Key job responsibilities Develop machine learning algorithms for high-scale recommendations problem Rapidly design, prototype and test many possible hypotheses in a high-ambiguity environment, making use of both quantitative analysis and business judgement. 
Collaborate with software engineers to integrate successful experimental results into large-scale, highly complex Amazon production systems capable of handling 100,000s of transactions per second at low latency. Report results in a manner which is both statistically rigorous and compellingly relevant, exemplifying good scientific practice in a business environment.
DE, Aachen
The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Scientist with a strong deep learning background, to build industry-leading Generative Artificial Intelligence (GenAI) technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities As an Applied Scientist with the AGI team, you will work with talented peers to lead the development of novel algorithms and modeling techniques, to advance the state of the art with LLMs. Your work will directly impact our customers in the form of products and services that make use of speech and language technology. You will leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate advances in spoken language understanding. About the team The AGI team has a mission to push the envelope in GenAI with LLMs and multimodal systems, in order to provide the best-possible experience for our customers.
US, WA, Seattle
Are you a brilliant mind seeking to push the boundaries of what's possible with intelligent robotics? Join our elite team of researchers and engineers - led by Pieter Abeel, Rocky Duan, and Peter Chen - at the forefront of applied science, where we're harnessing the latest advancements in large language models (LLMs) and generative AI to reshape the world of robotics and unlock new realms of innovation. As an Applied Science Intern, you'll have the unique opportunity to work alongside world-renowned experts, gaining invaluable hands-on experience with cutting-edge robotics technologies. You'll dive deep into exciting research projects at the intersection of AI and robotics. This internship is not just about executing tasks – it's about being a driving force behind groundbreaking discoveries. You'll collaborate with cross-functional teams, leveraging your expertise in areas such as deep learning, reinforcement learning, computer vision, and motion planning to tackle real-world problems and deliver impactful solutions. Throughout your journey, you'll have access to unparalleled resources, including state-of-the-art computing infrastructure, cutting-edge research papers, and mentorship from industry luminaries. This immersive experience will not only sharpen your technical skills but also cultivate your ability to think critically, communicate effectively, and thrive in a fast-paced, innovative environment where bold ideas are celebrated. Join us at the forefront of applied robotics and AI, where your contributions will shape the future of intelligent systems and propel humanity forward. Seize this extraordinary opportunity to learn, grow, and leave an indelible mark on the world of technology. Amazon has positions available in San Francisco, CA and Seattle, WA. The ideal candidate should possess: - Strong background in machine learning, deep learning, and/or robotics - Publication record at science conferences such as NeurIPS, CVPR, ICRA, RSS, CoRL, and ICLR. 
- Experience in areas such as multimodal LLMs, world models, image/video tokenization, real2Sim/Sim2real transfer, bimanual manipulation, open-vocabulary panoptic scene understanding, scaling up multi-modal LLMs, and end-to-end vision-language-action models. - Proficiency in Python, Experience with PyTorch or JAX - Excellent problem-solving skills, attention to detail, and the ability to work collaboratively in a team Join us at the forefront of applied robotics and AI, and be a part of the team that's reshaping the future of intelligent systems. Apply now and embark on an extraordinary journey of discovery and innovation! Key job responsibilities - Develop novel, scalable algorithms and modeling techniques that advance the state-of-the-art in areas at the intersection of LLMs and generative AI for robotics - Tackle challenging, groundbreaking research problems on production-scale data, with a focus on robotic perception, manipulation, and control - Collaborate with cross-functional teams to solve complex business problems, leveraging your expertise in areas such as deep learning, reinforcement learning, computer vision, and motion planning - Demonstrate the ability to work independently, thrive in a fast-paced, ever-changing environment, and communicate effectively with diverse stakeholders
US, WA, Seattle
Join the next revolution in robotics at Amazon's Frontier AI & Robotics team, where you'll work alongside world-renowned AI pioneers like Pieter Abbeel, Rocky Duan, and Peter Chen to lead key initiatives in robotic intelligence. As a Senior Applied Scientist, you'll spearhead the development of breakthrough foundation models that enable robots to perceive, understand, and interact with the world in unprecedented ways. You'll drive technical excellence in areas such as perception, manipulation, scence understanding, sim2real transfer, multi-modal foundation models, and multi-task learning, designing novel algorithms that bridge the gap between cutting-edge research and real-world deployment at Amazon scale. In this role, you'll combine hands-on technical work with scientific leadership, ensuring your team delivers robust solutions for dynamic real-world environments. You'll leverage Amazon's vast computational resources to tackle ambitious problems in areas like very large multi-modal robotic foundation models and efficient, promptable model architectures that can scale across diverse robotic applications. 
Key job responsibilities - Lead technical initiatives in robotics foundation models, driving breakthrough approaches through hands-on research and development in areas like open-vocabulary panoptic scene understanding, scaling up multi-modal LLMs, sim2real/real2sim techniques, end-to-end vision-language-action models, efficient model inference, video tokenization - Design and implement novel deep learning architectures that push the boundaries of what robots can understand and accomplish - Guide technical direction for specific research initiatives, ensuring robust performance in production environments - Mentor fellow scientists while maintaining strong individual technical contributions - Collaborate with engineering teams to optimize and scale models for real-world applications - Influence technical decisions and implementation strategies within your area of focus A day in the life - Develop and implement novel foundation model architectures, working hands-on with our extensive compute infrastructure - Guide fellow scientists in solving complex technical challenges, from sim2real transfer to efficient multi-task learning - Lead focused technical initiatives from conception through deployment, ensuring successful integration with production systems - Drive technical discussions within your team and with key stakeholders - Conduct experiments and prototype new ideas using our massive compute cluster - Mentor team members while maintaining significant hands-on contribution to technical solutions Amazon offers a full range of benefits that support you and eligible family members, including domestic partners and their children. Benefits can vary by location, the number of regularly scheduled hours you work, length of employment, and job status such as seasonal or temporary employment. The benefits that generally apply to regular, full-time employees include: 1. Medical, Dental, and Vision Coverage 2. Maternity and Parental Leave Options 3. Paid Time Off (PTO) 4. 
401(k) Plan If you are not sure that every qualification on the list above describes you exactly, we'd still love to hear from you! At Amazon, we value people with unique backgrounds, experiences, and skillsets. If you’re passionate about this role and want to make an impact on a global scale, please apply! About the team At Frontier AI & Robotics, we're not just advancing robotics – we're reimagining it from the ground up. Our team, led by pioneering AI researchers Pieter Abbeel, Rocky Duan, and Peter Chen, is building the future of intelligent robotics through groundbreaking foundation models and end-to-end learned systems. We tackle some of the most challenging problems in AI and robotics, from developing sophisticated perception systems to creating adaptive manipulation strategies that work in complex, real-world scenarios. What sets us apart is our unique combination of ambitious research vision and practical impact. We leverage Amazon's massive computational infrastructure and rich real-world datasets to train and deploy state-of-the-art foundation models. Our work spans the full spectrum of robotics intelligence – from multimodal perception using images, videos, and sensor data, to sophisticated manipulation strategies that can handle diverse real-world scenarios. We're building systems that don't just work in the lab, but scale to meet the demands of Amazon's global operations. Join us if you're excited about pushing the boundaries of what's possible in robotics, working with world-class researchers, and seeing your innovations deployed at unprecedented scale.
US, WA, Seattle
The Private Brands Discovery team designs innovative machine learning solutions to drive customer awareness for Amazon’s own brands and help customers discover products they love. Private Brands Discovery is an interdisciplinary team of Scientists and Engineers, who incubate and build disruptive solutions using cutting-edge technology to solve some of the toughest science problems at Amazon. To this end, the team employs methods from Natural Language Processing, Deep learning, multi-armed bandits and reinforcement learning, Bayesian Optimization, causal and statistical inference, and econometrics to drive discovery across the customer journey. Our solutions are crucial for the success of Amazon’s own brands and serve as a beacon for discovery solutions across Amazon. This is a high visibility opportunity for someone who wants to have business impact, dive deep into large-scale problems, enable measurable actions on the consumer economy, and work closely with scientists and engineers. As a scientist, you bring business and industry context to science and technology decisions. You set the standard for scientific excellence and make decisions that affect the way we build and integrate algorithms. Your solutions are exemplary in terms of algorithm design, clarity, model structure, efficiency, and extensibility. You tackle intrinsically hard problems, acquiring expertise as needed. You decompose complex problems into straightforward solutions.. With a focus on bias for action, this individual will be able to work equally well with Science, Engineering, Economics and business teams. Key job responsibilities - 5+ yrs of relevant, broad research experience after PhD degree or equivalent. - Advanced expertise and knowledge of applying observational causal interference methods - Strong background in statistics methodology, applications to business problems, and/or big data. - Ability to work in a fast-paced business environment. - Strong research track record. 
- Effective verbal and written communications skills with both economists and non-economist audiences.
US, WA, Seattle
The AWS Marketplace & Partner Services Science team is hiring an Applied Scientist to develop science products that support AWS initiatives to grow AWS Partners. The team is seeking candidates with strong background in machine learning and engineering, creativity, curiosity, and great business judgment. As an applied scientist on the team, you will work on targeting and lead prioritization related AI/ML products, recommendation systems, and deliver them into the production ecosystem. You are comfortable with ambiguity and have a deep understanding of ML algorithms and an analytical mindset. You are capable of summarizing complex data and models through clear visual and written explanations. You thrive in a collaborative environment and are passionate about learning. Key job responsibilities - Work with scientists, product managers and engineers to deliver high-quality science products - Experiment with large amounts of data to deliver the best possible science solutions - Design, build, and deploy innovative ML solutions to impact AWS Co-Sell initiatives About the team The AWS Marketplace & Partner Services team is the center of Analytics, Insights, and Science supporting the AWS Specialist Partner Organization on its mission to provide customers with an outstanding experience while working with AWS partners. The Science team supports science models and recommendation systems that are deployed directly to AWS Customers, AWS partners, and internal AWS Sellers.
US, WA, Seattle
The People eXperience and Technology (PXT) Central Science Team uses economics, behavioral science, statistics, and machine learning to proactively identify mechanisms, process improvements and products, which simultaneously improve Amazon and the lives, wellbeing, and the value of work of Amazonians. We are an interdisciplinary team which combines the talents of science and engineering to develop and deliver solutions that measurably achieve this goal. We invest in innovation and rapid prototyping of scientific models, AI/ML technologies and software solutions to accelerate informed, accurate, and reliable decision backed by science and data. As a research scientist you will you will design and carry out surveys to address business questions; analyze survey and other forms of data with regression models; perform weighting and multiple imputation to reduce bias due to nonresponse. You will conduct methodological and statistical research to understand the quality of survey data. You will work with economists, engineers, and computer scientists to select samples, draft and test survey questions, calculate nonresponse adjusted weights, and estimate regression models on large scale data. You will evaluate, diagnose, understand, and surface drivers and moderators for key research streams, including (but are not limited to) attrition, engagement, productivity, inclusion, and Amazon culture. 
Key job responsibilities Help to design and execute a scalable global content development and validation strategy to drive more effective decisions and improve the employee experience across all of Amazon Conduct psychometric and econometric analyses to evaluate integrity and practical application of survey questions and data Identify and execute research streams to evaluate how to mitigate or remove sources of measurement error Partner closely and drive effective collaborations across multi-disciplinary research and product teams Manage full life cycle of large-scale research programs (Develop strategy, gather requirements, manage and execute)
US, WA, Seattle
The Artificial General Intelligence (AGI) team is looking for a passionate, talented, and inventive Applied Scientist with a strong deep learning background, to build industry-leading Generative Artificial Intelligence (GenAI) technology with Large Language Models (LLMs) and multimodal systems. Key job responsibilities - Leverage Amazon’s heterogeneous data sources and large-scale computing resources to accelerate advances in generative artificial intelligence (GenAI). - Work with talented peers to lead the development of novel algorithms and modeling techniques to advance the state of the art with LLMs. - Collaborate with other science and engineering teams as well as business stakeholders to maximize the velocity and impact of your contributions. About the team It's an exciting time to be a leader in AI research. In Amazon's AGI Information team, you can make your mark by improving information-driven experiences of Amazon customers worldwide. Your work will directly impact our customers in the form of products and services that make use of language and multimodal technology!
US, WA, Seattle
Are you excited about developing foundation models to revolutionize automation, robotics, and computer vision? Are you looking for opportunities to build and deploy them on real problems at truly vast scale? At Amazon Fulfillment Technologies and Robotics, we are on a mission to build high-performance autonomous systems that perceive and act to further improve our world-class customer experience — at Amazon scale. We are looking for collaborative scientists, engineers, and program managers for a variety of roles.

The Amazon Robotics software team is seeking an experienced, senior Applied Scientist to focus on computer vision machine learning models. This includes building multi-viewpoint and time-series computer vision systems, as well as large-scale models trained on data from many different tasks and scenes. The work spans from basic research, such as cross-domain training, to experimenting on prototypes in the lab, to running wide-scale A/B tests on robots in our facilities.

Key job responsibilities
* Research vision - where should we be focusing our efforts?
* Research delivery - proving or disproving strategies in offline data or in the lab
* Production studies - insights from production data or ad hoc experimentation

A day in the life
Amazon offers a full range of benefits that support you and eligible family members, including domestic partners and their children. Benefits can vary by location, the number of regularly scheduled hours you work, length of employment, and job status such as seasonal or temporary employment. The benefits that generally apply to regular, full-time employees include:
1. Medical, Dental, and Vision Coverage
2. Maternity and Parental Leave Options
3. Paid Time Off (PTO)
4. 401(k) Plan

If you are not sure that every qualification on the list above describes you exactly, we'd still love to hear from you! At Amazon, we value people with unique backgrounds, experiences, and skillsets.
If you’re passionate about this role and want to make an impact on a global scale, please apply!
US, CA, East Palo Alto
The Customer Engagement Technology team leads AI/LLM-driven customer experience transformation using task-oriented dialogue systems. We develop multi-modal, multi-turn, goal-oriented dialog systems that can handle customer issues at Amazon scale across multiple languages. These systems are designed to adapt to changing company policies and invoke correct APIs to automate solutions to customer problems. Additionally, we enhance associate productivity through response/action recommendation, summarization to capture conversation context succinctly, retrieval of precise information from documents to provide useful information to the agent, and machine translation to facilitate smoother conversations when the customer and agent speak different languages.

Key job responsibilities
- Research and development of LLM-based chatbots and conversational AI systems for customer service applications.
- Design and implement state-of-the-art NLP and ML models for tasks such as language understanding, dialogue management, and response generation.
- Collaborate with cross-functional teams, including data scientists, software engineers, and product managers, to integrate LLM-based solutions into Amazon's customer service platforms.
- Develop and implement strategies for data collection, annotation, and model training to ensure high-quality and robust performance of the chatbots.
- Conduct experiments and evaluations to measure the performance of the developed models and systems, and identify areas for improvement.
- Stay up to date with the latest advancements in NLP, LLMs, and conversational AI, and explore opportunities to incorporate new techniques and technologies into Amazon's customer service solutions.
- Collaborate with internal and external research communities, participate in conferences and publications, and contribute to the advancement of the field.