The breadth of Amazon's computer vision research is on display at ECCV

Research topics range from visual anomaly detection to road network extraction, regression-constrained neural-architecture search to self-supervised learning for video representations.

Amazon's contributions to this year's European Conference on Computer Vision (ECCV) reflect the diversity of the company's research interests. Below is a quick guide to the topics and methods of a dozen ECCV papers whose authors include Amazon scientists.

Fine-grained fashion representation learning by online deep clustering
Yang (Andrew) Jiao, Ning Xie, Yan Gao, Chien-Chih Wang, Yi Sun

Fashions are characterized by both global attributes, such as “skirt length”, and local attributes, such as “neckline style”. Accurate representations of such attributes are essential to tasks like fashion retrieval and fashion recommendation, but learning representations of each attribute independently ignores shared visual statistics among the attributes. Instead, the researchers treat representation learning as a multitask learning problem, enforcing cluster-level constraints on global structure. The learned representations improve fashion retrieval by a large margin.
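
As a rough illustration (not the paper's code), the multitask setup can be pictured as a shared backbone with one lightweight embedding head per attribute, so the attributes share low-level visual statistics while keeping separate representations. The attribute names, dimensions, and backbone below are placeholders, and the online deep-clustering constraints are omitted.

```python
# Hedged sketch: multitask attribute embeddings over a shared backbone.
# Attribute names, dimensions, and the backbone are illustrative placeholders.
import torch
import torch.nn as nn

class MultiAttributeEncoder(nn.Module):
    def __init__(self, attribute_names, embed_dim=128):
        super().__init__()
        # Shared convolutional backbone (stand-in for a larger network).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # One lightweight projection head per attribute (the multitask part).
        self.heads = nn.ModuleDict({
            name: nn.Linear(64, embed_dim) for name in attribute_names
        })

    def forward(self, images):
        shared = self.backbone(images)
        # L2-normalized embedding per attribute, ready for clustering or retrieval.
        return {name: nn.functional.normalize(head(shared), dim=1)
                for name, head in self.heads.items()}

model = MultiAttributeEncoder(["skirt_length", "neckline_style"])
embeddings = model(torch.randn(4, 3, 224, 224))
print({name: emb.shape for name, emb in embeddings.items()})
```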

GLASS: Global to local attention for scene-text spotting
Roi Ronen, Shahar Tsiper, Oron Anschel, Inbal Lavi, Amir Markovitz, R. Manmatha

Modern text-spotting models combine text detection and recognition into a single end-to-end framework, in which both tasks often rely on a shared global feature map. Such models, however, struggle to recognize text across scale variations (smaller or larger text) and arbitrary word rotation angles. The researchers propose a novel attention mechanism for text spotting, called GLASS, that fuses together global and local features. The global features are extracted from the shared backbone, while the local features are computed individually on resized, high-resolution word crops with upright orientation. GLASS achieves state-of-the-art results on multiple public benchmarks, and the researchers show that it can be integrated with other text-spotting solutions, improving their performance.

A novel attention mechanism for text spotting, called GLASS, fuses together global and local features. From "GLASS: Global to local attention for scene-text spotting".
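
The fusion idea can be sketched as a learned gate that mixes a global feature pooled from the shared backbone with a local feature computed on a rectified, high-resolution word crop. This is an illustrative sketch under assumed feature dimensions, not the authors' implementation.

```python
# Hedged sketch of global/local feature fusion in the spirit of GLASS.
# The gating design and dimensions are illustrative, not the paper's.
import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                  nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, global_feat, local_feat):
        # The gate decides, per channel, how much to trust each source.
        g = self.gate(torch.cat([global_feat, local_feat], dim=-1))
        return g * global_feat + (1.0 - g) * local_feat

fusion = GlobalLocalFusion()
fused = fusion(torch.randn(8, 256), torch.randn(8, 256))
print(fused.shape)  # torch.Size([8, 256])
```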

Large scale real-world multi-person tracking
Bing Shuai, Alessandro Bergamo, Uta Buechler, Andrew Berneshawi, Alyssa Boden, Joseph Tighe

This paper presents a new multi-person tracking dataset — PersonPath22 — which is more than an order of magnitude larger than existing high-quality multi-object tracking datasets. The PersonPath22 dataset is specifically sourced to provide a wide variety of conditions, and its annotations include rich metadata that allows the performance of a tracker to be evaluated along these different dimensions. Its large-scale real-world training and test data enable the community to better understand the performance of multi-person tracking systems in a range of scenarios and conditions.

MaCLR: Motion-aware contrastive learning of representations for videos
Fanyi Xiao, Joseph Tighe, Davide Modolo

Attempts to use self-supervised learning for video have had some success, but existing approaches don’t make explicit use of motion information derived from the temporal sequence, which is important for supervised action recognition tasks. The researchers propose a self-supervised video representation-learning method that explicitly models motion cues during training. The method, MaCLR, consists of two pathways, visual and motion, connected by a novel cross-modal contrastive objective that enables the motion pathway to guide the visual pathway toward relevant motion cues.

A frame of video (top left) and three different methods of capturing motion. From "MaCLR: Motion-aware contrastive learning of representations for videos".
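
The cross-modal objective can be illustrated with a standard InfoNCE-style loss that pulls together the visual and motion embeddings of the same clip and pushes apart mismatched pairs. This sketch is not the MaCLR code; the encoders, temperature, and embedding sizes are placeholders.

```python
# Hedged sketch of a cross-modal contrastive (InfoNCE-style) objective between
# a visual pathway and a motion pathway. Temperature and dimensions are placeholders.
import torch
import torch.nn.functional as F

def cross_modal_nce(visual_emb, motion_emb, temperature=0.07):
    v = F.normalize(visual_emb, dim=1)
    m = F.normalize(motion_emb, dim=1)
    logits = v @ m.t() / temperature      # similarity of every (visual, motion) pair
    targets = torch.arange(v.size(0))     # matching pairs lie on the diagonal
    # Symmetric loss: visual-to-motion and motion-to-visual.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

loss = cross_modal_nce(torch.randn(16, 128), torch.randn(16, 128))
print(loss.item())
```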

PSS: Progressive sample selection for open-world visual representation learning
Tianyue Cao, Yongxin Wang, Yifan Xing, Tianjun Xiao, Tong He, Zheng Zhang, Hao Zhou, Joseph Tighe

In computer vision, open-world representation learning is the challenge of learning representations for categories of images not seen during training. Existing approaches make unrealistic assumptions, such as foreknowledge of the number of categories the unseen images fall into, or the ability to determine in advance which unlabeled training examples fall into unseen categories. The researchers’ novel progressive approach avoids such assumptions, selecting at each iteration unlabeled samples that are highly homogeneous but belong to classes distant from the current set of known classes. High-quality pseudo-labels, generated by clustering these selected samples, then iteratively improve the generalization of the learned features.
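
One round of such progressive selection might look like the sketch below, which clusters unlabeled embeddings and keeps only clusters that are compact (homogeneous) yet far from the prototypes of currently known classes, assigning the cluster index as a pseudo-label. The clustering algorithm and thresholds are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch of one progressive-selection round: keep clusters that are
# compact but far from known-class prototypes, and pseudo-label their members.
# The clustering method and thresholds are illustrative, data-dependent choices.
import numpy as np
from sklearn.cluster import KMeans

def select_round(unlabeled, known_prototypes, n_clusters=20,
                 max_radius=8.5, min_dist_to_known=7.0):
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(unlabeled)
    selected_idx, pseudo_labels = [], []
    for c in range(n_clusters):
        members = np.where(km.labels_ == c)[0]
        center = km.cluster_centers_[c]
        radius = np.linalg.norm(unlabeled[members] - center, axis=1).mean()
        dist_to_known = np.linalg.norm(known_prototypes - center, axis=1).min()
        if radius < max_radius and dist_to_known > min_dist_to_known:
            selected_idx.extend(members.tolist())
            pseudo_labels.extend([c] * len(members))
    return np.array(selected_idx), np.array(pseudo_labels)

idx, labels = select_round(np.random.randn(500, 64), np.random.randn(10, 64))
print(len(idx), "samples pseudo-labeled this round")
```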

Rayleigh EigenDirections (REDs): Nonlinear GAN latent space traversals for multidimensional features
Guha Balakrishnan, Raghudeep Gadde, Aleix Martinez, Pietro Perona

Generative adversarial networks (GANs) can map points in a latent space to images, producing extremely realistic synthetic data. Past attempts to control GANs’ outputs have looked for linear trajectories through the space that correspond, approximately, to continuous variation of a particular image feature. The researchers propose a new method for finding nonlinear trajectories through the space, providing unprecedented control over GANs’ outputs, including the ability to hold specified image features fixed while varying others.
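
The Rayleigh-quotient idea behind the method can be illustrated as a generalized eigenproblem: find latent directions that maximize variation in the features one wants to change, relative to variation in the features one wants to hold fixed. The sketch below is a simplified illustration, not the authors' code; the Jacobians are placeholders standing in for derivatives of image features with respect to the latent code.

```python
# Hedged illustration of the Rayleigh-quotient idea behind REDs: directions
# that maximize change in one set of features while suppressing change in
# another solve a generalized eigenproblem. Inputs are placeholder Jacobians.
import numpy as np
from scipy.linalg import eigh

def rayleigh_directions(J_move, J_fix, ridge=1e-6, k=3):
    A = J_move.T @ J_move                                   # variation to maximize
    B = J_fix.T @ J_fix + ridge * np.eye(J_fix.shape[1])    # variation to suppress
    vals, vecs = eigh(A, B)                                 # generalized eigendecomposition
    order = np.argsort(vals)[::-1]                          # largest Rayleigh quotients first
    return vecs[:, order[:k]]                               # top-k latent directions

directions = rayleigh_directions(np.random.randn(5, 32), np.random.randn(40, 32))
print(directions.shape)  # (32, 3)
```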

Rethinking few-shot object detection on a multi-domain benchmark
Kibok Lee, Hao Yang, Satyaki Chakraborty, Zhaowei Cai, Gurumurthy Swaminathan, Avinash Ravichandran, Onkar Dabeer

Most existing work on few-shot object detection (FSOD) focuses on settings where both the pretraining and few-shot learning datasets are from similar domains. The researchers propose a Multi-dOmain Few-Shot Object Detection (MoFSOD) benchmark consisting of 10 datasets from a wide range of domains to evaluate FSOD algorithms across a greater variety of applications. They comprehensively analyze the effects of freezing layers, different architectures, and different pretraining datasets on FSOD performance, drawing several surprising conclusions. One of these is that, contrary to prior belief, on a multidomain benchmark, fine-tuning (FT) is a strong baseline for FSOD.

SPot-the-Difference: Self-supervised pre-training for anomaly detection and segmentation
Yang Zou, Jongheon Jeong, Latha Pemula, Dongqing Zhang, Onkar Dabeer

Visual anomaly detection is commonly used in industrial quality inspection. This paper presents a new dataset and a new self-supervised learning method for ImageNet pretraining to improve anomaly detection and segmentation in one-class and two-class settings with 5-shot, 10-shot, and high-shot training. The Visual Anomaly (VisA) Dataset consists of 10,821 high-resolution color images (9,621 normal and 1,200 anomalous samples) covering 12 objects in three domains, making it one of the largest industrial anomaly detection datasets to date. The paper also proposes a new self-supervised framework — SPot-the-Difference (SPD) — that can regularize both contrastive self-supervised pretraining and supervised pretraining to better handle anomaly detection tasks.

Conventional contrastive learning (left) and the contrastive-learning scheme used in SPD (spot-the-difference) training. From "SPot-the-Difference: Self-supervised pre-training for anomaly detection and segmentation".
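
One plausible reading of the spot-the-difference idea is sketched below: alongside a standard augmented positive view, a locally edited view of the same image is pushed away, encouraging the representation to be sensitive to small local changes of the kind that matter for defect detection. The augmentation and loss form here are illustrative assumptions, not the paper's exact objective.

```python
# Hedged sketch: treat a globally augmented view as a positive and a locally
# edited ("spot-the-difference") view as a negative. Loss form is illustrative.
import torch
import torch.nn.functional as F

def spd_style_loss(anchor, positive, local_edit, temperature=0.1):
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    n = F.normalize(local_edit, dim=1)
    pos = (a * p).sum(dim=1, keepdim=True) / temperature   # pull together
    neg = (a * n).sum(dim=1, keepdim=True) / temperature   # push apart
    logits = torch.cat([pos, neg], dim=1)
    targets = torch.zeros(a.size(0), dtype=torch.long)     # the positive is class 0
    return F.cross_entropy(logits, targets)

loss = spd_style_loss(torch.randn(8, 128), torch.randn(8, 128), torch.randn(8, 128))
print(loss.item())
```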

TD-Road: Top-down road network extraction with holistic graph construction
Yang He, Ravi Garg, Amber Roy Chowdhury

Road network extraction from satellite imagery is essential for constructing rich maps and enabling numerous applications in route planning and navigation. Previous graph-based methods used a bottom-up approach, estimating local information and extending a graph iteratively. This paper, by contrast, proposes a top-down approach that decomposes the problem into two subtasks: key point prediction and connectedness prediction. Unlike previous approaches, the proposed method uses graph structures (i.e., locations of nodes and connections between them) as training supervision for deep neural networks and directly generates road graphs at inference time.

A satellite image (left) and three methods for extracting road networks from it: segmentation, bottom-up graph-based methods, and a new top-down graph-based method (far right). From "TD-Road: Top-down road network extraction with holistic graph construction".
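
The two subtasks can be sketched as a key-point head that predicts a node heatmap from image features, plus a pairwise head that scores connectedness between candidate node embeddings. This is an illustration under assumed feature sizes, not the TD-Road implementation.

```python
# Hedged sketch of the two-head decomposition: key-point prediction plus
# pairwise connectedness prediction. Feature sizes are illustrative.
import torch
import torch.nn as nn

class RoadGraphHeads(nn.Module):
    def __init__(self, channels=64, node_dim=64):
        super().__init__()
        self.keypoint_head = nn.Conv2d(channels, 1, kernel_size=1)  # per-pixel node logit
        self.connect_head = nn.Sequential(
            nn.Linear(2 * node_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, feat_map, node_feats):
        heatmap = torch.sigmoid(self.keypoint_head(feat_map))       # where road nodes are
        n = node_feats.size(0)
        pairs = torch.cat([node_feats.unsqueeze(1).expand(n, n, -1),
                           node_feats.unsqueeze(0).expand(n, n, -1)], dim=-1)
        connect = torch.sigmoid(self.connect_head(pairs)).squeeze(-1)  # edge probabilities
        return heatmap, connect

heads = RoadGraphHeads()
heatmap, adjacency = heads(torch.randn(1, 64, 32, 32), torch.randn(10, 64))
print(heatmap.shape, adjacency.shape)
```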

Towards regression-free neural networks for diverse compute platforms
Rahul Duggal, Hao Zhou, Shuo Yang, Jun Fang, Yuanjun Xiong, Wei Xia

Commercial machine learning models are constantly being updated, and while an updated model may improve performance on average, it can still regress — i.e., suffer “negative flips” — on particular inputs it used to handle correctly. This paper introduces regression-constrained neural-architecture search (REG-NAS), which consists of two components: (1) a novel architecture constraint that enables a larger model to contain all the weights of a smaller one, thus maximizing weight sharing, and (2) a novel search reward that incorporates both top-1 accuracy and negative flips in the architecture search metric. Relative to the existing state-of-the-art approach, REG-NAS reduces negative flips by 33% to 48%.
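
The flavor of the search reward can be shown in a few lines that measure the negative-flip rate against a reference model and trade it off against top-1 accuracy. The weighting below is an illustrative choice, not the paper's exact reward.

```python
# Hedged sketch of a regression-aware search reward: top-1 accuracy minus a
# penalty on negative flips (inputs the old model got right but the new one gets wrong).
import numpy as np

def negative_flip_rate(old_preds, new_preds, labels):
    old_correct = old_preds == labels
    new_wrong = new_preds != labels
    return np.mean(old_correct & new_wrong)

def search_reward(old_preds, new_preds, labels, alpha=1.0):
    top1 = np.mean(new_preds == labels)
    return top1 - alpha * negative_flip_rate(old_preds, new_preds, labels)

labels = np.array([0, 1, 2, 1, 0])
old = np.array([0, 1, 2, 0, 0])   # reference model: 4/5 correct
new = np.array([0, 1, 1, 1, 0])   # candidate: 4/5 correct, but flips sample 2
print(search_reward(old, new, labels))  # 0.8 - 1.0 * 0.2 = 0.6
```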

Unsupervised and semi-supervised bias benchmarking in face recognition
Alexandra Chouldechova, Siqi Deng, Yongxin Wang, Wei Xia, Pietro Perona

This paper introduces semi-supervised performance evaluation for face recognition (SPE-FR), a statistical method for evaluating the performance and algorithmic bias of face verification systems when identity labels are unavailable or incomplete. The method is based on parametric Bayesian modeling of face embedding similarity scores, and it produces point estimates, performance curves, and confidence bands that reflect uncertainty in the estimation procedure. Experiments show that SPE-FR can accurately assess performance on data with no identity labels and confidently reveal demographic biases in system performance.
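
As a simplified, non-Bayesian stand-in for the idea, one can fit a two-component mixture to unlabeled similarity scores and read off an estimated false-match rate at a given threshold. SPE-FR itself is a parametric Bayesian method that also produces confidence bands; the sketch below only illustrates label-free performance estimation.

```python
# Simplified, non-Bayesian stand-in: fit a two-component Gaussian mixture to
# unlabeled similarity scores and estimate the false-match rate at a threshold.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
scores = np.concatenate([rng.normal(0.2, 0.1, 9000),    # impostor-like scores
                         rng.normal(0.7, 0.1, 1000)])   # genuine-like scores

gmm = GaussianMixture(n_components=2, random_state=0).fit(scores.reshape(-1, 1))
means = gmm.means_.ravel()
stds = np.sqrt(gmm.covariances_).ravel()
impostor = np.argmin(means)                              # lower-mean component

threshold = 0.5
est_fmr = norm.sf(threshold, loc=means[impostor], scale=stds[impostor])
print(f"Estimated false-match rate at threshold {threshold}: {est_fmr:.4f}")
```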

X-DETR: A versatile architecture for instance-wise vision-language tasks
Zhaowei Cai, Gukyeong Kwon, Avinash Ravichandran, Erhan Bas, Zhuowen Tu, Rahul Bhotika, Stefano Soatto

This paper addresses the challenge of instance-wise vision-language tasks, which require free-form language to align with objects inside an image, rather than with the image as a whole. The paper presents the X-DETR model, whose architecture has three major components: an object detector, a language encoder, and a vision-language alignment module. The vision and language streams are independent until the end, where they are aligned with an efficient dot-product operation. This simple architecture achieves good accuracy at high speed on multiple instance-wise vision-language tasks, such as open-vocabulary object detection.

X-DETR addresses the challenge of instance-wise vision-language tasks, which require free-form language to align with objects inside an image, rather than the image itself. From "X-DETR: A versatile architecture for instance-wise vision-language tasks".
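
The late-alignment step is simple enough to sketch directly: normalized object embeddings from the detector and phrase embeddings from the language encoder meet only in a final dot product. The dimensions and temperature below are placeholders, and this is not the X-DETR code.

```python
# Hedged sketch of late vision-language alignment via a dot product.
# Embedding sizes and temperature are illustrative placeholders.
import torch
import torch.nn.functional as F

def align(object_embs, text_embs, temperature=0.07):
    o = F.normalize(object_embs, dim=-1)   # (num_objects, dim) from the detector
    t = F.normalize(text_embs, dim=-1)     # (num_phrases, dim) from the language encoder
    return o @ t.t() / temperature         # (num_objects, num_phrases) alignment logits

logits = align(torch.randn(100, 256), torch.randn(5, 256))
print(logits.shape, logits.argmax(dim=0))  # most aligned object for each phrase
```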
