The breadth of Amazon's computer vision research is on display at ECCV

Research topics range from visual anomaly detection to road network extraction, and from regression-constrained neural-architecture search to self-supervised learning for video representations.

Amazon's contributions to this year's European Conference on Computer Vision (ECCV) reflect the diversity of the company's research interests. Below is a quick guide to the topics and methods of a dozen ECCV papers whose authors include Amazon scientists.

Fine-grained fashion representation learning by online deep clustering
Yang (Andrew) Jiao, Ning Xie, Yan Gao, Chien-Chih Wang, Yi Sun

Fashions are characterized by both global attributes, such as “skirt length”, and local attributes, such as “neckline style”. Accurate representations of such attributes are essential to tasks like fashion retrieval and fashion recommendation, but learning representations of each attribute independently ignores shared visual statistics among the attributes. Instead, the researchers treat representation learning as a multitask learning problem, enforcing cluster-level constraints on global structure. The learned representations improve fashion retrieval by a large margin.
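
To make the multitask setup concrete, here is a minimal sketch, not the authors' implementation: a shared backbone feeds one embedding head per attribute, and cluster assignments act as pseudo-labels that constrain each head's embedding space. The module names, dimensions, and clustering step are illustrative placeholders.

```python
# Minimal multitask sketch (illustrative, not the paper's implementation).
import torch
import torch.nn as nn

class MultiAttributeEncoder(nn.Module):
    def __init__(self, attributes, feat_dim=512, emb_dim=128):
        super().__init__()
        # Placeholder backbone; a CNN image encoder would be used in practice.
        self.backbone = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        # One projection head per attribute ("skirt_length", "neckline_style", ...),
        # so the heads share visual statistics through the common backbone.
        self.heads = nn.ModuleDict({a: nn.Linear(feat_dim, emb_dim) for a in attributes})

    def forward(self, images):
        shared = self.backbone(images)
        return {a: head(shared) for a, head in self.heads.items()}

def cluster_constraint_loss(embeddings, centroids, assignments):
    """Pull each embedding toward its assigned cluster centroid; a stand-in for
    the paper's cluster-level constraint, with centroids maintained online."""
    return ((embeddings - centroids[assignments]) ** 2).sum(dim=1).mean()
```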

GLASS: Global to local attention for scene-text spotting
Roi Ronen, Shahar Tsiper, Oron Anschel, Inbal Lavi, Amir Markovitz, R. Manmatha

Modern text-spotting models combine text detection and recognition into a single end-to-end framework, in which both tasks often rely on a shared global feature map. Such models, however, struggle to recognize text across scale variations (smaller or larger text) and arbitrary word rotation angles. The researchers propose a novel attention mechanism for text spotting, called GLASS, that fuses together global and local features. The global features are extracted from the shared backbone, while the local features are computed individually on resized, high-resolution word crops with upright orientation. GLASS achieves state-of-the-art results on multiple public benchmarks, and the researchers show that it can be integrated with other text-spotting solutions, improving their performance.

A novel attention mechanism for text spotting, called GLASS, fuses together global and local features. From "GLASS: Global to local attention for scene-text spotting".
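
The fusion step can be sketched roughly as follows. The shapes, module names, and the simple gating below are our assumptions, not the GLASS architecture itself: a pooled global feature for each word, taken from the shared backbone, is combined with a feature computed on the resized, orientation-rectified crop of that word.

```python
# Rough sketch of global-local feature fusion (assumed shapes and modules).
import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.local_encoder = nn.Sequential(           # runs on upright word crops
            nn.Conv2d(3, dim, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)
        )
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, global_feat, word_crop):
        # global_feat: (N, dim) pooled ROI features from the shared backbone
        # word_crop:   (N, 3, H, W) resized, rotation-rectified crops
        local_feat = self.local_encoder(word_crop).flatten(1)      # (N, dim)
        g = self.gate(torch.cat([global_feat, local_feat], dim=1)) # per-channel weights
        return g * global_feat + (1 - g) * local_feat              # fused word feature
```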

Large scale real-world multi-person tracking
Bing Shuai, Alessandro Bergamo, Uta Buechler, Andrew Berneshawi, Alyssa Boden, Joseph Tighe

This paper presents a new multi-person tracking dataset — PersonPath22 — which is more than an order of magnitude larger than existing high-quality multi-object tracking datasets. The PersonPath22 dataset is specifically sourced to provide a wide variety of conditions, and its annotations include rich metadata that allows the performance of a tracker to be evaluated along these different dimensions. Its large-scale real-world training and test data enable the community to better understand the performance of multi-person tracking systems in a range of scenarios and conditions.

MaCLR: Motion-aware contrastive learning of representations for videos
Fanyi Xiao, Joseph Tighe, Davide Modolo

Attempts to use self-supervised learning for video have had some success, but existing approaches don’t make explicit use of motion information derived from the temporal sequence, which is important for supervised action recognition tasks. The researchers propose a self-supervised video representation-learning method that explicitly models motion cues during training. The method, MaCLR, consists of two pathways, visual and motion, connected by a novel cross-modal contrastive objective that enables the motion pathway to guide the visual pathway toward relevant motion cues.

A frame of video (top left) and three different methods of capturing motion. From "MaCLR: Motion-aware contrastive learning of representations for videos".
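
A generic cross-modal contrastive (InfoNCE-style) loss illustrates the kind of objective described, under the assumption that each batch contains paired visual and motion embeddings of the same clips; MaCLR's exact formulation may differ.

```python
# Cross-modal InfoNCE sketch: matching indices are positives, all other
# pairings in the batch serve as negatives.
import torch
import torch.nn.functional as F

def cross_modal_nce(z_vis, z_mot, temperature=0.07):
    z_vis = F.normalize(z_vis, dim=1)                 # (B, D) visual-pathway embeddings
    z_mot = F.normalize(z_mot, dim=1)                 # (B, D) motion-pathway embeddings
    logits = z_vis @ z_mot.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(z_vis.size(0), device=z_vis.device)
    # Symmetric loss: visual-to-motion and motion-to-visual retrieval.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```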

PSS: Progressive sample selection for open-world visual representation learning
Tianyue Cao, Yongxin Wang, Yifan Xing, Tianjun Xiao, Tong He, Zheng Zhang, Hao Zhou, Joseph Tighe

In computer vision, open-world representation learning is the challenge of learning representations for categories of images not seen during training. Existing approaches make unrealistic assumptions, such as foreknowledge of the number of categories the unseen images fall into, or the ability to determine in advance which unlabeled training examples fall into unseen categories. The researchers’ novel progressive approach avoids such assumptions, selecting at each iteration unlabeled samples that are highly homogeneous but belong to classes distant from the current set of known classes. High-quality pseudo-labels generated by clustering the selected samples then iteratively improve feature generalization.
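
As an illustration of the selection step (helper names and thresholds below are hypothetical, not the paper's), one iteration could cluster the unlabeled embeddings, keep only clusters that are tight and far from all known-class prototypes, and promote them to pseudo-labeled classes.

```python
# Hypothetical one-iteration selection step for progressive sample selection.
import numpy as np
from sklearn.cluster import KMeans

def select_novel_clusters(unlabeled_emb, known_prototypes, n_clusters=50,
                          tightness_thresh=0.5, distance_thresh=1.0):
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(unlabeled_emb)
    selected = []
    for c in range(n_clusters):
        members = unlabeled_emb[km.labels_ == c]
        if len(members) < 2:
            continue
        # Homogeneity: average distance of members to their centroid.
        tightness = np.mean(np.linalg.norm(members - km.cluster_centers_[c], axis=1))
        # Novelty: distance from the centroid to the nearest known-class prototype.
        dist_to_known = np.min(
            np.linalg.norm(known_prototypes - km.cluster_centers_[c], axis=1))
        if tightness < tightness_thresh and dist_to_known > distance_thresh:
            selected.append((c, np.where(km.labels_ == c)[0]))  # pseudo-label, indices
    return selected
```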

Rayleigh EigenDirections (REDs): Nonlinear GAN latent space traversals for multidimensional features
Guha Balakrishnan, Raghudeep Gadde, Aleix Martinez, Pietro Perona

Generative adversarial networks (GANs) can map points in a latent space to images, producing extremely realistic synthetic data. Past attempts to control GANs’ outputs have looked for linear trajectories through the space that correspond, approximately, to continuous variation of a particular image feature. The researchers propose a new method for finding nonlinear trajectories through the space, providing unprecedented control over GANs’ outputs, including the ability to hold specified image features fixed while varying others.
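
The core computation can be sketched as a generalized eigenproblem, under the assumption that J_target and J_fixed are Jacobians of the features to change and the features to preserve, taken with respect to the latent code. This is the Rayleigh-quotient idea in its simplest form, not the authors' full procedure, which recomputes such directions locally along the path (which is what makes the resulting trajectories nonlinear).

```python
# Sketch of Rayleigh-quotient latent directions (not the authors' code).
import numpy as np
from scipy.linalg import eigh

def rayleigh_directions(J_target, J_fixed, k=3, eps=1e-6):
    A = J_target.T @ J_target                                  # change in features to edit
    B = J_fixed.T @ J_fixed + eps * np.eye(J_fixed.shape[1])   # change to suppress
    vals, vecs = eigh(A, B)                                    # generalized eigenproblem
    order = np.argsort(vals)[::-1]                             # largest Rayleigh quotients first
    return vecs[:, order[:k]].T                                # k latent-space directions
```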

Rethinking few-shot object detection on a multi-domain benchmark
Kibok Lee, Hao Yang, Satyaki Chakraborty, Zhaowei Cai, Gurumurthy Swaminathan, Avinash Ravichandran, Onkar Dabeer

Most existing work on few-shot object detection (FSOD) focuses on settings where both the pretraining and few-shot learning datasets are from similar domains. The researchers propose a Multi-dOmain Few-Shot Object Detection (MoFSOD) benchmark consisting of 10 datasets from a wide range of domains to evaluate FSOD algorithms across a greater variety of applications. They comprehensively analyze the effects of freezing layers, different architectures, and different pretraining datasets on FSOD performance, drawing several surprising conclusions. One of these is that, contrary to prior belief, on a multidomain benchmark, fine-tuning (FT) is a strong baseline for FSOD.

SPot-the-Difference: Self-supervised pre-training for anomaly detection and segmentation
Yang Zou, Jongheon Jeong, Latha Pemula, Dongqing Zhang, Onkar Dabeer

Visual anomaly detection is commonly used in industrial quality inspection. This paper presents a new dataset and a new self-supervised learning method for ImageNet pretraining to improve anomaly detection and segmentation in 1-class and 2-class 5/10/high-shot training setups. The Visual Anomaly (VisA) Dataset consists of 10,821 high-resolution color images (9,621 normal and 1,200 anomalous samples) covering 12 objects in three domains, making it one of the largest industrial anomaly detection datasets to date. The paper also proposes a new self-supervised framework — SPot-the-Difference (SPD) — that can regularize both contrastive self-supervised pretraining and supervised pretraining to better handle anomaly detection tasks.

Conventional contrastive learning (left) and the contrastive-learning scheme used in SPD (spot-the-difference) training. From "SPot-the-Difference: Self-supervised pre-training for anomaly detection and segmentation".
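
A schematic of the spot-the-difference idea, as we read it: a globally augmented view of an image stays close to the original, while a locally perturbed copy (for example, a small pasted patch mimicking a defect) is pushed away, making the representation sensitive to small local differences. The loss below is a simplified two-way contrastive form with assumed inputs, not the paper's exact objective.

```python
# Simplified spot-the-difference-style contrastive term (assumed inputs).
import torch
import torch.nn.functional as F

def spd_loss(z_anchor, z_global_aug, z_local_perturbed, temperature=0.1):
    z_a = F.normalize(z_anchor, dim=1)
    z_p = F.normalize(z_global_aug, dim=1)        # positive: same image, global augmentation
    z_n = F.normalize(z_local_perturbed, dim=1)   # negative: same image, local perturbation
    pos = (z_a * z_p).sum(dim=1) / temperature
    neg = (z_a * z_n).sum(dim=1) / temperature
    logits = torch.stack([pos, neg], dim=1)       # (B, 2)
    targets = torch.zeros(z_a.size(0), dtype=torch.long, device=z_a.device)
    return F.cross_entropy(logits, targets)       # prefer the global view over the local edit
```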

TD-Road: Top-down road network extraction with holistic graph construction
Yang He, Ravi Garg, Amber Roy Chowdhury

Road network extraction from satellite imagery is essential for constructing rich maps and enabling numerous applications in route planning and navigation. Previous graph-based methods used a bottom-up approach, estimating local information and extending a graph iteratively. This paper, by contrast, proposes a top-down approach that decomposes the problem into two subtasks: key point prediction and connectedness prediction. Unlike previous approaches, the proposed method uses graph structures (i.e., the locations of nodes and the connections between them) as training supervision for deep neural networks and directly generates road graphs at inference.

A satellite image (left) and three methods for extracting road networks from it: segmentation, bottom-up-graph-based methods, and a new top-down graph-based method (far right). From "TD-Road: Top-down road network extraction with holistic graph construction."
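
For intuition, decoding a road graph from the two predicted quantities might look like the following, where the thresholding scheme is hypothetical: predicted key points become nodes, and pairs with confident connectedness scores become edges.

```python
# Illustrative decoding of a road graph from key points and connectedness scores.
import numpy as np

def build_road_graph(keypoints, connectedness, score_thresh=0.5):
    # keypoints:     (N, 2) predicted node locations in image coordinates
    # connectedness: (N, N) predicted probability that node i connects to node j
    edges = []
    n = len(keypoints)
    for i in range(n):
        for j in range(i + 1, n):
            score = 0.5 * (connectedness[i, j] + connectedness[j, i])  # symmetrize
            if score > score_thresh:
                edges.append((i, j))
    return {"nodes": keypoints, "edges": edges}
```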

Towards regression-free neural networks for diverse compute platforms
Rahul Duggal, Hao Zhou, Shuo Yang, Jun Fang, Yuanjun Xiong, Wei Xia

Commercial machine learning models are constantly being updated, and while an updated model may improve performance on average, it can still regress — i.e., suffer “negative flips” — on particular inputs it used to handle correctly. This paper introduces regression-constrained neural-architecture search (REG-NAS), which consists of two components: (1) a novel architecture constraint that enables a larger model to contain all the weights of a smaller one, thus maximizing weight sharing, and (2) a novel search reward that incorporates both top-1 accuracy and negative flips in the architecture search metric. Relative to the existing state-of-the-art approach, REG-NAS enables a 33–48% reduction in negative flips.
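
The reward's ingredients are easy to state concretely. In the sketch below (function names and the trade-off weight are ours, not REG-NAS code), the negative-flip rate counts samples the old model classified correctly and the candidate model gets wrong, and the reward trades it off against top-1 accuracy.

```python
# Sketch of a negative-flip-aware search reward (illustrative names and weighting).
import numpy as np

def negative_flip_rate(old_preds, new_preds, labels):
    old_correct = old_preds == labels
    new_wrong = new_preds != labels
    return float(np.mean(old_correct & new_wrong))   # fraction of regressed samples

def search_reward(old_preds, new_preds, labels, alpha=1.0):
    top1 = float(np.mean(new_preds == labels))
    nfr = negative_flip_rate(old_preds, new_preds, labels)
    return top1 - alpha * nfr                        # candidate architectures maximize this
```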

Unsupervised and semi-supervised bias benchmarking in face recognition
Alexandra Chouldechova, Siqi Deng, Yongxin Wang, Wei Xia, Pietro Perona

This paper introduces semi-supervised performance evaluation for face recognition (SPE-FR), a statistical method for evaluating the performance and algorithmic bias of face verification systems when identity labels are unavailable or incomplete. The method is based on parametric Bayesian modeling of face embedding similarity scores, and it produces point estimates, performance curves, and confidence bands that reflect uncertainty in the estimation procedure. Experiments show that SPE-FR can accurately assess performance on data with no identity labels and confidently reveal demographic biases in system performance.
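
As a toy stand-in for the idea (not SPE-FR's Bayesian procedure, which places priors on the score-distribution parameters and reports uncertainty bands), one can fit a two-component parametric model to unlabeled verification scores and read error rates off the fitted components.

```python
# Toy label-free error-rate estimate from a parametric fit to similarity scores.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

def estimate_error_rates(similarity_scores, threshold):
    gm = GaussianMixture(n_components=2).fit(similarity_scores.reshape(-1, 1))
    order = np.argsort(gm.means_.ravel())
    imp, gen = order[0], order[1]                      # impostor = lower-mean component
    mu, sd = gm.means_.ravel(), np.sqrt(gm.covariances_.ravel())
    fmr = 1.0 - norm.cdf(threshold, mu[imp], sd[imp])  # impostor pairs above threshold
    fnmr = norm.cdf(threshold, mu[gen], sd[gen])       # genuine pairs below threshold
    return fmr, fnmr
```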

X-DETR: A versatile architecture for instance-wise vision-language tasks
Zhaowei Cai, Gukyeong Kwon, Avinash Ravichandran, Erhan Bas, Zhuowen Tu, Rahul Bhotika, Stefano Soatto

This paper addresses the challenge of instance-wise vision-language tasks, which require free-form language to align with objects inside an image, rather than the image itself. The paper presents the X-DETR model, whose architecture has three major components: an object detector, a language encoder, and a vision-language alignment module. The vision and language streams are independent until the end, and they are aligned using an efficient dot-product operation. This simple architecture achieves good accuracy and fast inference on multiple instance-wise vision-language tasks, such as open-vocabulary object detection.

X-DETR addresses the challenge of instance-wise vision-language tasks, which require free-form language to align with objects inside an image, rather than the image itself. From "X-DETR: A versatile architecture for instance-wise vision-language tasks".
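
The alignment step is simple enough to sketch directly; the shapes and temperature below are assumptions. Detector query embeddings and phrase embeddings stay in separate streams and only meet in a dot product.

```python
# Minimal late dot-product alignment sketch (assumed shapes).
import torch
import torch.nn.functional as F

def align_scores(object_feats, text_feats, temperature=0.1):
    # object_feats: (num_queries, D) detector outputs for one image
    # text_feats:   (num_phrases, D) encoded free-form phrases
    obj = F.normalize(object_feats, dim=1)
    txt = F.normalize(text_feats, dim=1)
    return obj @ txt.t() / temperature   # (num_queries, num_phrases) alignment logits
```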
