Josh Miele, an Amazon principal accessibility researcher, was named a 2021 MacArthur Fellow. He has spent his career developing tools to make the world more accessible for people who are blind and visually impaired.
Meg Coyle / Amazon

Josh Miele: Amazon’s resident MacArthur Fellow

Miele has merged a lifelong passion for science with a mission to make the world more accessible for people with disabilities.

In September 2021, when Josh Miele, an Amazon principal accessibility researcher, got a text from someone at the MacArthur Foundation requesting a phone call, his heart leapt. For anyone in the arts and sciences, a MacArthur Fellowship, known as the “genius” grant, is akin to winning the lottery. You can’t apply for the $625,000 fellowship; it just arrives, mysteriously, out of the ether with a phone call from the foundation.

For Miele, who is blind and has spent his career developing tools to make the world more accessible for people who are blind and visually impaired, a MacArthur grant had long been a fantastical dream.

Video: Joshua Miele, Adaptive Technology Designer | 2021 MacArthur Fellow

“Everybody has things that they imagine might happen to them,” Miele said. “And some things are more realistic than others. You think, ‘Wouldn’t it be nice to get married, have kids, get a great job at Amazon, and, yes, wouldn’t it be nice to get a MacArthur grant?’ Some dreams you can work on and make happen yourself, and some you have to wait and hope for. I won’t deny that one of my long-time fantasies was that I would get a MacArthur Fellowship.”

Assuming the call was to ask his opinion about a possible recipient, he was ecstatic to learn he was among the 25 fellows the foundation selected in 2021. The five-year grant provides money that recipients can use however they want. For someone like Miele, who spent years at a nonprofit accessibility think tank in an endless quest for grant money, the MacArthur news left him with sweaty palms, ringing ears, and pure joy.

“It was extraordinary,” he declared.

A devotion to accessibility

Yet given his life’s work, the grant was a surprise to no one who knows Miele. After graduating from the University of California, Berkeley with a PhD in psychoacoustics in 2003, Miele worked for 16 years at the non-profit Smith-Kettlewell Eye Research Institute in San Francisco as a principal scientist and researcher. He devoted his life to fostering accessibility for the blind and visually impaired.

Josh Miele using a refreshable braille display in his home office in Berkeley, California.
Stephen Lam/Amazon

He began working with Amazon Lab126, which designs and engineers Amazon devices and services, in 2019. There, Miele joined the group of designers and developers who built the Echo family of devices, Kindle, Fire tablets, Fire TV, the Amazon Basics Microwave, and a growing list of innovative products.

He works with the accessibility team, which seeks to make Amazon products intuitively useful for individuals with disabilities.


His aim is to ensure that the designers, product managers, and team members have as clear an understanding as possible of the user experience for customers with disabilities.

“What I’m doing is making sure that the folks who are doing the design understand the customers they are designing for,” Miele explained. “That’s a special and important part of the puzzle because some people doing the work don’t have disabilities themselves and don’t innately have the deep understanding of what the customer requirements are.”

Improving Show and Tell

For example, when Miele joined Lab126, the group was working on Show and Tell, an Alexa feature for Echo Show devices that uses the camera and voice interface to help people who are blind identify products. Employing advanced computer vision and machine learning models for object recognition, Show and Tell can be a vital tool in the kitchen of a customer who is blind or has low vision. A person holds up an object and asks, “Alexa, what am I holding?” and gets an immediate answer.

Video: New Amazon Echo Feature Increases Accessibility (With Audio Description) | Amazon News

The project was stymied, however, by the developers’ struggle to match each product perfectly. If they didn’t get a 100-percent match, the team felt they had failed.

Miele helped the team understand that they needed only to provide useful context, even just a word or two, for a person who is blind or visually impaired to identify the product. The team focused on kitchen and pantry items — things that come in cans, boxes, bottles, and tubes. The goal: Recognize items in Amazon’s vast product catalogue, or if that wasn’t possible, recognize brands and logos that could give the customer enough information to know what they held in their hand.

“If I touch a can of something, I know it’s a can,” Miele explained, “but I don’t know if it’s a can of black beans or pineapple. So, if I’m making chili, and I open a can of pineapple, I’m going to be pretty irritated.”


“I helped design an interaction model that would look for an exact match,” he added, “but if Alexa didn’t get that, it would look for brand recognition. Alexa would look for logos or text that would offer at least some clue as to what was in the can.”

The team created what he called “a gentle letdown curve.” If Alexa can’t find the exact information, Alexa politely apologizes but doesn’t give up. “I don’t know what it is, but I saw the words ‘whole black beans’ on the label,” Alexa says. It isn’t an exact match, Miele noted, but at least you know what is in the can, which is still incredibly helpful.
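To make that tiered fallback concrete, here is a minimal, hypothetical Python sketch of the kind of “gentle letdown” logic Miele describes. The function name, parameters, and response wording are illustrative assumptions, not Amazon’s actual Show and Tell implementation; in practice the exact-match, brand, and label-text results would come from upstream computer vision and object-recognition models.

from typing import Optional

def gentle_letdown_response(exact_match: Optional[str],
                            brand: Optional[str],
                            label_text: Optional[str]) -> str:
    """Compose a spoken reply, preferring the most specific information available."""
    if exact_match:
        # Best case: the item matched a product in the catalogue.
        return f"That looks like {exact_match}."
    clue = brand or label_text
    if clue:
        # Not an exact match, but a brand, logo, or a few words of label text
        # still tell the customer whether the can holds black beans or pineapple.
        return f"I'm not sure exactly what it is, but I saw '{clue}' on the label."
    # Last step of the letdown curve: apologize without giving up.
    return "Sorry, I couldn't tell what that is. Try holding it a little closer to the camera."

# Example: only the label text was recognized.
print(gentle_letdown_response(None, None, "whole black beans"))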

Additionally, Miele worked with the team to create audio feedback where Alexa guides the customer to hold the object in the camera’s field of view. Without that, Alexa might be unable to identify the product and frustration could set in.

“Doing things the right way”

Miele has also provided important input on the team’s braille display support and Braille Screen Input technology.

“The most important thing I do is work with my colleagues to test these experiences with real blind and visually impaired customers,” Miele said. “Usability testing is a fairly well known art, but when you’re testing with specialized populations, like people who are blind or deaf, or people using wheelchairs, there are certain adjustments that you want to make to your research protocol so that you are doing things the right way.”


For example, Miele helped the design team understand that it was important that things like consent forms be made accessible.

“Designing research protocols so that people with disabilities are comfortable and properly accommodated is a really important part of research,” Miele said. “The very best way to ensure that the thing you’re designing works for the people you are designing it for is to have people on the team who are going to be customers for that experience.”

A remarkable journey

Miele’s story is a remarkable one, but he discourages others from focusing too much on what happened to him as a child, asking them instead to consider who he is and what he has accomplished. At age 4, an assailant in his Brooklyn, New York, neighborhood poured sulfuric acid over his head, blinding and disfiguring him.

Josh Miele plays bass guitar and has developed a new braille code for writing jazz chord charts.
Barbara Butkus

With the support of incredible parents, teachers and colleagues, he has never thought of himself as being less capable.

He has a full and accomplished life. Miele is married and has two teenage children. He plays the bass guitar and has developed a new braille code for writing jazz chord charts. He is a serious cook and woodworker and loves to hike. He is a proud member of the disability community.

He doesn’t focus on what happened to him because for him, it was simply the challenge life handed him.

“It wasn’t a choice I made,” he said about his positive outlook. “It never even occurred to me to feel sorry for myself. I just wanted to go through life and do the things I was interested in.”

An important epiphany

Miele’s love of science emerged early. He wanted to build rockets, become a space scientist, and explore outer space. Shortly after beginning an undergraduate physics degree at UC Berkeley, he interned with NASA, where he worked in planetary science. It wasn’t until later, when he took a job at a software startup named Berkeley Systems and worked on some of the first graphical user interface (GUI) screen readers for people who are blind and visually impaired, that he had an epiphany.

“I realized that the work I was doing in accessibility was both rewarding to me and something that not many people could do at the level I was able to do it,” he recalled. “I thought, ‘There are plenty of people who could be great planetary scientists but there were not a lot of people who could design cool stuff for blind people and meet the needs of the people who were going to use it.’”

Josh Miele, seen here leading a soldering workshop, says, “I am blind and that is my superpower in this. I’ve been working in accessibility for a really long time and not just for people who are blind and visually impaired, but for people with all kinds of disabilities."
Jean Miele

His colleague at Berkeley Systems, Peter Korn, was recruited to join the Amazon Lab126 accessibility team in 2013. One of his first moves was to create an external advisory council of disability experts, and he asked Miele to join the council. Korn offered council members a peek behind the curtain at some of the lab’s projects. After one of those sessions, Miele took Korn aside and said, “You know, I’d really love to play a bigger role in helping bring some of those technologies you’re talking about to life.”

Korn responded, “Well, I would love to have you.”

Korn, a colleague and friend of Miele’s for 30 years, was among those who were not surprised by the MacArthur grant.

“I’ve been incredibly impressed by his creativity, his design sense, his energy and passion, and his inventiveness,” Korn said. “He has a really good sense of what somebody who doesn’t understand technology faces.”

“My superpower”

“I am blind and that is my superpower in this,” Miele said. “I’ve been working in accessibility for a really long time and not just for people who are blind and visually impaired, but for people with all kinds of disabilities. I not only have a fairly good understanding of what some of the basic requirements are for a wide range of disabilities, but I also know how to connect with those communities and bring their voices into the conversation with the designers, developers, and product managers.”


After many years in the non-profit sector, Miele is happy with his move to the technology industry.

“I love what I do,” he declared. “I love accessibility. There’s a social justice aspect to it. You’re working on inclusion and accessibility of information. You’re empowering people to do the things they want to do, which is extremely exciting for me. I’m strongly motivated to build cool things for blind people. I want blind people’s lives to be better. I also really like challenges, finding new, fun, exciting things to work on, and at Amazon there is absolutely no shortage of cool, interesting, thought-provoking design challenges for accessibility.”

To learn more about Amazon Accessibility, please visit amazon.com/accessibility.
