
Cognixion gives voice to a user’s thoughts

Alexa Fund company’s assisted reality tech could unlock speech for hundreds of millions of people who struggle to communicate.

(Editor’s note: This article is the latest installment in a series by Amazon Science delving into the science behind products and services of companies in which Amazon has invested. The Alexa Fund participated in Cognixion’s $12M seed round in November 2021.)

In 2012, Andreas Forsland, founder and CEO of Alexa Fund company Cognixion, became the primary caregiver and communicator for his mother. She was hospitalized with complications from pneumonia and unable to speak for herself.

“That experience opened my eyes to how precious speech really is,” Forsland says. According to a Cognixion analysis of over 1,200 relevant research papers, more than half a billion people worldwide struggle to speak clearly or at conversational speeds, which can hamper their interactions with others and full participation in society.

Forsland wondered whether a technology solution would be feasible and started Cognixion in 2014 to explore that possibility. “We had the gumption to think, ‘Wouldn’t it be neat to have a thought-to-speech interface that just reads your mind?’ We were naïve and curious at the same time.”

Brain–computer interfaces (BCIs) have been around since the 1970s, with demonstrated applications in enabling communication. But their real-world use has so far been limited by the amount of training required, the difficulty of operating them, and performance issues stemming from recording technology, sensors, signal processing, and the brain–BCI interaction itself.

Cognixion’s assisted reality architecture aims to overcome these barriers by integrating a BCI with machine learning algorithms, assistive technology, and augmented reality (AR) applications in a wearable format.

Introducing Cognixion: The world's first "assisted reality" device

The current embodiment of the company’s technology is a non-invasive device called Cognixion ONE. Brainwave patterns associated with visual fixation on interactive objects presented through the headset are detected and decoded. The signals enable hands-free, voice-free control of AR/XR applications to generate speech or send instructions to smart-home components or AI assistants.

“For some people, we make things easy, and for other people, we make things possible. That’s the way we look at it: technology in service of enhancing a human’s ability to do things,” says Forsland.

In an interview with Amazon Science, Forsland described the ins and outs of Cognixion ONE, the next steps in its development, and the longer-term future of assisted reality tech.

  1. Q. 

    Given the wide range of abilities or disabilities that someone might have, how did you go about designing technology that anyone can use?

    A. 

    It all starts with the problem. One of the key constraints in this problem domain is that you can’t make any assumptions about someone’s ability to use their hands or arms or mouth in a meaningful way. So how can you actually drive an interaction with a computer using the limited degrees of freedom that the user has?

    In the extreme case, the user actually has no physical degrees of freedom. The only remaining degree of freedom is attention. So can you use attention as a mechanism to drive interaction with a computer, fully bypassing the rest of the body?

    It turns out that you can, thanks to neuroscience work in this area. You can project certain types of visual stimuli onto a user’s retina and look for their attentional reaction to those stimuli.


    If I give you two images with different movement characteristics, I can tell by the pattern of your brain waves that you’re seeing those two things, and the fact that you're paying attention to one of them actually changes that pattern.

    It takes a tiny bit of flow-state thinking. It’s kind of like when you look at an optical illusion, and you can see the two states. If you can do that, then you can decide between two choices, and as soon as you can do that, I can build an entire interface on top of that. I can ask, ‘Do you want A or do you want B?,’ like playing ‘20 Questions.’ It’s sort of the most basic way to differentiate a user’s intent.
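The "20 Questions" idea scales naturally: with a reliable binary signal, any one of N options can be reached in about log2(N) decisions. A minimal sketch of that pattern (the phrase list and `decide` callback are invented for illustration, not Cognixion's actual interface):

```python
import math

def select_item(items, decide):
    """Narrow a list of items to one using repeated binary choices.

    `decide(left, right)` stands in for the user's attention-driven
    answer: True picks the left half, False picks the right half.
    """
    while len(items) > 1:
        mid = len(items) // 2
        left, right = items[:mid], items[mid:]
        items = left if decide(left, right) else right
    return items[0]

# Selecting one of 8 phrases takes at most 3 binary decisions.
phrases = ["yes", "no", "help", "water", "tired", "pain", "thanks", "stop"]

# Simulated user who always attends to the left-hand option:
result = select_item(phrases, lambda left, right: True)
print(result)                               # -> "yes"
print(math.ceil(math.log2(len(phrases))))   # -> 3
```

The point of the sketch is the scaling: doubling the vocabulary adds only one more binary decision, which is why even a two-state attention signal can drive a full interface.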

    Basically, we considered the hardest possible situation first: a person with no physical capabilities whatsoever. Let’s solve that problem. Then we can start layering stuff on, like gaze tracking, gestures, or keyboards, to further enhance the interaction and make it even more efficient for people with the relevant physical capabilities. But it may turn out that an adaptive keyboard is actually overkill for a lot of interactions. Maybe you can get by with much less.


    Now, if you marry that input with the massive advancements in the last five or ten years in machine learning, you can become much more aggressive about what you think the person is trying to do, or what is appropriate in that situation. You can use that information to minimize the number of interactions required. Ideally, you get to a place where you have a very efficient interface, because the user only has to decide between the things that are most relevant.

    And you can make it much more elaborate by integrating knowledge about the user’s environment, previous utterances, time of day, etc. That’s really the magic of this architecture: It leverages minimum inputs with really aggressive prediction capability to help people communicate smoothly and efficiently.
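The "aggressive prediction" Forsland describes can be pictured as ranking candidate utterances against context signals so the user only chooses among a few relevant options. A toy sketch, with an invented scoring scheme and vocabulary (nothing here reflects Cognixion's actual models):

```python
def rank_candidates(candidates, context):
    """Score candidate utterances against simple context signals and
    return the top three, shrinking the choice the user must make."""
    def score(utterance):
        s = 0.0
        # Boost utterances tagged as relevant to the time of day.
        if context.get("time_of_day") in utterance["tags"]:
            s += 2.0
        # Boost utterances that continue the previous topic.
        if context.get("last_topic") == utterance["topic"]:
            s += 1.0
        return s
    return sorted(candidates, key=score, reverse=True)[:3]

candidates = [
    {"text": "Good morning", "tags": {"morning"}, "topic": "greeting"},
    {"text": "I'm hungry", "tags": {"morning", "evening"}, "topic": "food"},
    {"text": "Turn off the lights", "tags": {"evening"}, "topic": "home"},
    {"text": "What's for breakfast?", "tags": {"morning"}, "topic": "food"},
]
context = {"time_of_day": "morning", "last_topic": "food"}
for c in rank_candidates(candidates, context):
    print(c["text"])
```

In a real system the scoring would come from a learned language model rather than hand-written rules, but the effect is the same: minimum input, maximum narrowing of the choice set.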

  2. Q. 

    What types of communication does this technology enable?

    A. 

    First and foremost is speech. And an easy way to understand the impact of this technology is to look at conversational rate. Right now, this conversation is probably on the order of 60 to 150 words per minute, depending on how much coffee we had and so on.

    For a lot of users of our technology, it’s like a pipe dream to even get to 20 or 30. It can take a long time to produce even very basic utterances, along the lines of ‘I am tired.’

    Now imagine breaking through to say, ‘Let’s talk about our day,’ and carrying on a conversation that provides meaning, interest, and value. That is the breakthrough capability that we’re really trying to enable.

We have this amazing group — our Brainiac Council — of people with speech disabilities, scientists, and technologists. We have more than 200 Brainiacs now, and we want to grow the council to 300.

    Cognixion ONE demo

    One of our Brainiacs uses the headset to help him communicate words that are difficult for him to pronounce, like ‘chocolate.’ He owns and operates a business where he performs for other people. During a performance, he can plug the headset directly into his sound system instead of having to talk into a microphone.

    Think of how many other people have something to say but might be overlooked. We want to help them get their point across.

One possibility we’re exploring for future enhancement of speech generation is providing each user with their own voice, through voice banking and text-to-speech services like Amazon Polly on AWS. Personalization to such a degree could make the experience much richer and more meaningful for users.

    But speech generation is only one function of a broad ‘neuroprosthetic.’ People also interact with places, things, and media — and these interactions don’t necessarily require speech. We’re building an Alexa integration to enable home automation control and other enriched experiences. Through the headset, users can interact with their environment, control smart devices, or access news, music, whatever is available.

    In time, a device could allow users to control mobility devices for assisted navigation, robots for household tasks, settings for ambient lighting and temperature. It’s enabling a future where more people can live their daily lives more actively and independently.

  3. Q. 

    What are the next steps toward creating that future?

    A. 

    There are some key technical problems to solve. BCIs historically have been viewed somewhat skeptically, particularly the use of electroencephalography. So our challenge is to come up with a paradigm for stimulus response that enables sufficient expressive capability within the user interface. In other words, can I show you enough different kinds of stimuli to give you meaningful choices so you can efficiently use the system without becoming unnecessarily tired?

    Then it’s like whack-a-mole, or the digital equivalent. When we see a specific frequency come through, and a certain power threshold on it, we act on it. How many different unique frequencies can we disambiguate from one another at any given time?
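The frequency-and-threshold detection described above can be sketched with a Fourier transform over a window of signal: estimate the power at each candidate stimulus frequency and act when one crosses a threshold. The synthetic "EEG", sampling rate, and threshold below are all illustrative assumptions, not Cognixion's actual pipeline:

```python
import numpy as np

def power_at(signal, fs, freq):
    """Power of `signal` (sampled at `fs` Hz) near a target frequency,
    estimated from the magnitude of the discrete Fourier transform."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    idx = np.argmin(np.abs(freqs - freq))  # nearest frequency bin
    return spectrum[idx] ** 2

def detect_choice(signal, fs, stim_freqs, threshold):
    """Return the index of the stimulus frequency whose power crosses
    the threshold, or None if no frequency is strong enough."""
    powers = [power_at(signal, fs, f) for f in stim_freqs]
    best = int(np.argmax(powers))
    return best if powers[best] > threshold else None

# Synthetic signal: a 12 Hz oscillation (the attended stimulus) plus noise.
fs = 256
t = np.arange(fs * 2) / fs  # two seconds of samples
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 12 * t) + 0.3 * rng.standard_normal(t.size)

choice = detect_choice(signal, fs, stim_freqs=[10, 12], threshold=1000.0)
print(choice)  # -> 1 (the 12 Hz stimulus wins)
```

The "how many frequencies can we disambiguate" question then becomes a question of frequency resolution and noise: longer windows separate nearby frequencies better, but at the cost of slower responses.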

    A simulated view of the interface in a Cognixion device

Another challenge is that a commercial device should have a nearly zero learning curve. Once you pop it on, you need to be able to use it within minutes, not hours.

    So we might couple the stimulus-response technology with a display, or speakers, or haptics that can give biofeedback to help train your brain: ‘I’m doing this right’ or ‘I’m doing it wrong.’ This would give people the positives and negatives as they interact with it. If you can close those iterations quickly, people learn to use it faster.

    Our goal is to really harden and fortify the reliability and accuracy of what we’re doing, algorithmically. We then have a very robust IP portfolio that could go into mainstream applications, likely in the form of much deeper partnerships.


    In terms of applications, we are pursuing a medical channel and a research channel. Making a medical device is much more challenging than making a consumer device, for a variety of technical reasons: validation, documentation, regulatory considerations. So it takes some time. But the initial indications for use will be speech generation and environmental control.

    Eventually, we could look to expand our indications within the control ‘bubble’ to cover additional interactions with people, places, things, and content. Plus, the system could find applications within three other healthcare bubbles. One is diagnostics in areas like ophthalmology and neurology, thanks to the sensors and closed-loop nature of the device. A second is therapeutics for conditions involving attention, focus, and memory. And the third is remote monitoring in telehealth-type situations, because of the network capabilities.

The research side uses the same medical-grade hardware, but loaded with different software to enable biometric analysis and development of experimental AR applications. We’re preparing for production and delivery to meet initial demand early next year, and we’re actively seeking research partners who would get early access to the device.

In addition to collaborators in neuroscience, neuroengineering, bionics, human-computer interaction, and clinical and translational research, we’re also soliciting input from user experience researchers to arrive at a final set of specific technical and use-case requirements.

    We think there’s tremendous opportunity here. And we’re constantly being asked, ‘When can this become mainstream?’ We have some thoughts and ideas about that, of course, but we also want to hear from the research community about the use cases they can dream up.

Research areas

Related content

US, VA, Arlington
Do you want a role with deep meaning and the ability to have a global impact? Hiring top talent is not only critical to Amazon’s success – it can literally change the world. It took a lot of great hires to deliver innovations like AWS, Prime, and Alexa, which make life better for millions of customers around the world. As part of the Intelligent Talent Acquisition (ITA) team, you'll have the opportunity to reinvent Amazon’s hiring process with unprecedented scale, sophistication, and accuracy. ITA is an industry-leading people science and technology organization made up of scientists, engineers, analysts, product professionals, and more. Our shared goal is to fairly and precisely connect the right people to the right jobs. Last year, we delivered over 6 million online candidate assessments, driving a merit-based hiring approach that gives candidates the opportunity to showcase their true skills. Each year we also help Amazon deliver billions of packages around the world by making it possible to hire hundreds of thousands of associates in the right quantity, at the right location, at exactly the right time. You’ll work on state-of-the-art research with advanced software tools, new AI systems, and machine learning algorithms to solve complex hiring challenges. Join ITA in using cutting-edge technologies to transform the hiring landscape and make a meaningful difference in people's lives. Together, we can solve the world's toughest hiring problems. Within ITA, the Global Hiring Science (GHS) team designs and implements innovative hiring solutions at scale. We work in a fast-paced, global environment where we use research to solve complex problems and build scalable hiring products that deliver measurable impact to our customers. We are seeking selection researchers with a strong foundation in hiring assessment development, legally-defensible validation approaches, research and experimental design, and data analysis. 
Preferred candidates will have experience across the full hiring assessment lifecycle, from solution design to content development and validation to impact analysis. We are looking for equal parts researcher and consultant, who is able to influence customers with insights derived from science and data. You will work closely with cross-functional teams to design new hiring solutions and experiment with measurement methods intended to precisely define exactly what job success looks like and how best to predict it. Key job responsibilities What you’ll do as a GHS Research Scientist: • Design large-scale personnel selection research that shapes Amazon’s global talent assessment practices across a variety of topics (e.g., assessment validation, measuring post-hire impact) • Partner with key stakeholders to create innovative solutions that blend scientific rigor with real-world business impact while navigating complex legal and professional standards • Apply advanced statistical techniques to analyze massive, diverse datasets to uncover insights that optimize our candidate evaluation processes and drive hiring excellence • Explore emerging technologies and innovative methodologies to enhance talent measurement while maintaining Amazon's commitment to scientific integrity • Translate complex research findings into compelling, actionable strategies that influence senior leader/business decisions and shape Amazon's talent acquisition roadmap • Write impactful documents that distill intricate scientific concepts into clear, persuasive communications for diverse audiences, from data scientists to business leaders • Ensure effective teamwork, communication, collaboration, and commitment across multiple teams with competing priorities A day in the life Imagine diving into challenges that impact millions of employees across Amazon's global operations. As a GHS Research Scientist, you'll tackle questions about hiring and organizational effectiveness on a global scale. 
Your day might begin with analyzing datasets to inform how we attract and select world-class talent. Throughout the day, you'll collaborate with peers in our research community, discussing different research methodologies and sharing innovative approaches to solving unique personnel challenges. This role offers a blend of focused analytical time and interacting with stakeholders across the globe.
US, WA, Seattle
We are looking for a researcher in state-of-the-art LLM technologies for applications across Alexa, AWS, and other Amazon businesses. In this role, you will innovate in the fastest-moving fields of current AI research, in particular in how to integrate a broad range of structured and unstructured information into AI systems (e.g. with RAG techniques), and get to immediately apply your results in highly visible Amazon products. If you are deeply familiar with LLMs, natural language processing, computer vision, and machine learning and thrive in a fast-paced environment, this may be the right opportunity for you. Our fast-paced environment requires a high degree of autonomy to deliver ambitious science innovations all the way to production. You will work with other science and engineering teams as well as business stakeholders to maximize velocity and impact of your deliverables. It's an exciting time to be a leader in AI research. In Amazon's AGI Information team, you can make your mark by improving information-driven experience of Amazon customers worldwide!
US, NY, New York
The Sponsored Products and Brands team at Amazon Ads is re-imagining the advertising landscape through cutting-edge generative AI technologies, revolutionizing how millions of customers discover products and engage with brands across Amazon.com and beyond. We are at the forefront of re-inventing advertising experiences, bridging human creativity with artificial intelligence to transform every aspect of the advertising lifecycle from ad creation and optimization to performance analysis and customer insights. We are a passionate group of innovators dedicated to developing responsible and intelligent AI technologies that balance the needs of advertisers, enhance the shopping experience, and strengthen the marketplace. If you're energized by solving complex challenges and pushing the boundaries of what's possible with AI, join us in shaping the future of advertising. Key job responsibilities Participate in the Science hiring process as well as mentor other scientists - improving their skills, their knowledge of your solutions, and their ability to get things done. Identify and devise new video related solutions following a customer-obsessed scientific approach to address customer or business problems when the problem is ill-defined, needs to be framed, and new methodologies or paradigms need to be invented at the product level. Articulate potential scientific challenges of ongoing or future customers’ needs or business problems, and present interventions to address them. Independently assess alternative video related technologies, driving evaluation and adoption of those that fit best A day in the life As an Applied Scientist on the Sponsored Products Video team, you will work with a team of talented and experienced engineers, scientists, and designers to help bring new products to market and ensure that our customers are delighted by what we create. 
The Sponsored Products Video team is responsible for the design, development, and implementation of Sponsored Products Video experiences worldwide. About the team The Sponsored Products Video team within Sponsored Products and Brands creates relevant and engaging video experiences, connecting advertisers and shoppers. We are on a mission to make Amazon the best in class destination for shoppers to discover, engage and build affinity with brands, making shopping delightful, & personal.
IN, TS, Hyderabad
We're seeking an Applied Scientist to lead and innovate in applying advanced AI technologies that will reshape how businesses sell on Amazon. Our team is passionate about leveraging Machine Learning, GenAI, and Agentic AI to help B2B sellers optimize their operations and drive growth. Join Amazon Business 3P (Third Party - Sellers) - a rapidly growing global organization where we innovate at the intersection of AI technology and B2B commerce. We're reimagining how sellers reach and serve business customers, creating intelligent solutions that help them grow their B2B business on Amazon. From AI-powered Seller Central tools to smart business certifications, dynamic pricing capabilities, and advanced analytics, we're transforming how B2B selling happens. As an Applied Scientist II on our AB 3P Tech team, you'll drive the development and implementation of state-of-the-art algorithms and models for supervised fine-tuning and reinforcement learning. You'll work with highly technical, entrepreneurial teams to: - Design and implement AI models that power the B2B selling experience - Lead the development of GenAI products that can handle Amazon-scale use cases - Drive research and implementation of advanced algorithms for human feedback and complex reasoning - Make strategic AI technology decisions and mentor technical talent - Own critical AI systems spanning from Seller Central to Amazon Business detail pages Join us in shaping the future of B2B selling - we're building applied AI solutions that businesses love and trust for their day-to-day success. If you are scrappy and bias for action is your favorite Leadership Principle, you'll fit right in as we innovate across the seller experience to create significant impact in this fast-growing business. 
Key job responsibilities Key job responsibilities: - Collaborate with cross-functional teams of engineers, product managers, and scientists to identify and solve complex problems in Gen AI - Design and execute experiments to evaluate the performance of different algorithms and models, and iterate quickly to improve results - Think big about the arc of development of Gen AI over a multi-year horizon, and identify new opportunities to apply these technologies to solve real-world problems - Communicate results and insights to both technical and non-technical audiences About the team At Amazon Business Third Party (AB3P) Tech, we're revolutionizing B2B e-commerce by empowering sellers in the business marketplace. Our scope spans the complete B2B selling journey, from Seller Central to Amazon Business detail pages, cart, and checkout for merchant-fulfilled offers. Our entrepreneurial culture and global reach define us. We develop features across seller experience, delivery, certifications, fees, registration, and analytics, collaborating with worldwide teams and leveraging advanced AI technologies to continuously innovate. Working in true Day 1 spirit, we build next-generation solutions that shape the future of B2B commerce. Join us in building next-generation solutions that shape the future of B2B commerce.
GB, London
Come build the future of entertainment with us. Are you interested in shaping the future of movies and television? Prime Video is a premium streaming service that offers customers a vast collection of TV shows and movies - all with the ease of finding what they love to watch in one place. We offer customers thousands of popular movies and TV shows including Amazon Originals and exclusive licensed content to exciting live sports events. Prime Video is a fast-paced, growth business - available in over 200 countries and territories worldwide. The Video Content Research team works in a dynamic environment where innovating on behalf of our customers is at the heart of everything we do. We are seeking a Data Scientist to develop scalable models that uncover key insights into how, why and when customers engage with Prime Video marketing. Key job responsibilities In this role you will work closely with business stakeholders and technical peers (data scientists, economists and engineers) to develop causal marketing measurement models, analyze experiments and investigate customer, marketing and content related factors that drive engagement with Prime Video. You will create mechanisms and infrastructure to deploy complex models and generate insights at scale. You will have the opportunity to work with large datasets, work with AWS to build and deploy machine learning models that impact Prime Video's marketing decisions. About the team The Video Content Research team uses machine learning, econometrics, and data science to optimize Amazon's marketing and content investments. We generate insights for Amazon's digital video strategy, partnering with finance, marketing, and content teams. We analyze customer behavior on Prime Video (marketing impressions, clicks on owned channels) to identify optimization opportunities.
US, MA, Boston
AI is the most transformational technology of our time, capable of tackling some of humanity’s most challenging problems. That is why Amazon is investing in generative AI (GenAI) and the responsible development and deployment of large language models (LLMs) across all of our businesses. Come build the future of human-technology interaction with us. We are looking for a Research Scientist with strong technical skills which includes coding and natural language processing experience in dataset construction, training and evaluating models, and automatic processing of large datasets. You will play a critical role in driving innovation and advancing the state-of-the-art in natural language processing and machine learning. You will work closely with cross-functional teams, including product managers, language engineers, and other scientists. Key job responsibilities Specifically, the Research Scientist will: • Ensure quality of speech/language/other data throughout all stages of acquisition and processing, including data sourcing/collection, ground truth generation, normalization, transformation, cross-lingual alignment/mapping, etc. • Clean, analyze and select speech/language/other data to achieve goals • Build and test models that elevate the customer experience • Collaborate with colleagues from science, engineering and business backgrounds • Present proposals and results in a clear manner backed by data and coupled with actionable conclusions • Work with engineers to develop efficient data querying infrastructure for both offline and online use cases
US, VA, Arlington
The People eXperience and Technology Central Science (PXTCS) team uses economics, behavioral science, statistics, and machine learning to proactively identify mechanisms and process improvements which simultaneously improve Amazon and the lives, wellbeing, and the value of work to Amazonians. PXTCS is an interdisciplinary team that combines the talents of science and engineering to develop and deliver solutions that measurably achieve this goal. PXTCS is looking for an economist who can apply economic methods to address business problems. The ideal candidate will work with engineers and computer scientists to estimate models and algorithms on large scale data, design pilots and measure impact, and transform successful prototypes into improved policies and programs at scale. PXTCS is looking for creative thinkers who can combine a strong technical economic toolbox with a desire to learn from other disciplines, and who know how to execute and deliver on big ideas as part of an interdisciplinary technical team. Ideal candidates will work in a team setting with individuals from diverse disciplines and backgrounds. They will work with teammates to develop scientific models and conduct the data analysis, modeling, and experimentation that is necessary for estimating and validating models. They will work closely with engineering teams to develop scalable data resources to support rapid insights, and take successful models and findings into production as new products and services. They will be customer-centric and will communicate scientific approaches and findings to business leaders, listening to and incorporate their feedback, and delivering successful scientific solutions. A day in the life The Economist will work with teammates to apply economic methods to business problems. 
This might include identifying the appropriate research questions, writing code to implement a DID analysis or estimate a structural model, or writing and presenting a document with findings to business leaders. Our economists also collaborate with partner teams throughout the process, from understanding their challenges, to developing a research agenda that will address those challenges, to help them implement solutions. About the team PXTCS is a multidisciplinary science team that develops innovative solutions to make Amazon Earth's Best Employer
US, MA, N.reading
Amazon Industrial Robotics is seeking exceptional talent to help develop the next generation of advanced robotics systems that will transform automation at Amazon's scale. We're building revolutionary robotic systems that combine cutting-edge AI, sophisticated control systems, and advanced mechanical design to create adaptable automation solutions capable of working safely alongside humans in dynamic environments. This is a unique opportunity to shape the future of robotics and automation at unprecedented scale, working with world-class teams pushing the boundaries of what's possible in robotic manipulation, locomotion, and human-robot interaction. This role presents an opportunity to shape the future of robotics through innovative applications of deep learning and large language models. At Amazon Industrial Robotics we leverage advanced robotics, machine learning, and artificial intelligence to solve complex operational challenges at unprecedented scale. Our fleet of robots operates across hundreds of facilities worldwide, working in sophisticated coordination to fulfill our mission of customer excellence. We are pioneering the development of robotics foundation models that: - Enable unprecedented generalization across diverse tasks - Enable unprecedented robustness and reliability, industry-ready - Integrate multi-modal learning capabilities (visual, tactile, linguistic) - Accelerate skill acquisition through demonstration learning - Enhance robotic perception and environmental understanding - Streamline development processes through reusable capabilities The ideal candidate will contribute to research that bridges the gap between theoretical advancement and practical implementation in robotics. You will be part of a team that's revolutionizing how robots learn, adapt, and interact with their environment. Join us in building the next generation of intelligent robotics systems that will transform the future of automation and human-robot collaboration. 
Key job responsibilities As an Applied Science Manager in the Foundations Model team, you will: - Build and lead a team of scientists and developers responsible for foundation model development - Define the right ‘FM recipe’ to reach industry ready solutions - Define the right strategy to ensure fast and efficient development, combining state of the art methods, research and engineering. - Lead Model Development and Training: Designing and implementing the model architectures, training and fine tuning the foundation models using various datasets, and optimize the model performance through iterative experiments - Lead Data Management: Process and prepare training data, including data governance, provenance tracking, data quality checks and creating reusable data pipelines. - Lead Experimentation and Validation: Design and execute experiments to test model capabilities on the simulator and on the embodiment, validate performance across different scenarios, create a baseline and iteratively improve model performance. - Lead Code Development: Write clean, maintainable, well commented and documented code, contribute to training infrastructure, create tools for model evaluation and testing, and implement necessary APIs - Research: Stay current with latest developments in foundation models and robotics, assist in literature reviews and research documentation, prepare technical reports and presentations, and contribute to research discussions and brainstorming sessions. - Collaboration: Work closely with senior scientists, engineers, and leaders across multiple teams, participate in knowledge sharing, support integration efforts with robotics hardware teams, and help document best practices and methodologies.
CA, QC, Montreal
Join the next revolution in robotics at Amazon's Frontier AI & Robotics team, where you'll work alongside world-renowned AI pioneers to push the boundaries of what's possible in robotic intelligence. As an Applied Scientist, you'll be at the forefront of developing breakthrough foundation models that enable robots to perceive, understand, and interact with the world in unprecedented ways. You'll drive independent research initiatives in areas such as perception, manipulation, scene understanding, sim2real transfer, multi-modal foundation models, and multi-task learning, designing novel algorithms that bridge the gap between state-of-the-art research and real-world deployment at Amazon scale. In this role, you'll balance innovative technical exploration with practical implementation, collaborating with platform teams to ensure your models and algorithms perform robustly in dynamic real-world environments. You'll have access to Amazon's vast computational resources, enabling you to tackle ambitious problems in areas like very large multi-modal robotic foundation models and efficient, promptable model architectures that can scale across diverse robotic applications. 
Key job responsibilities
- Design and implement novel deep learning architectures that push the boundaries of what robots can understand and accomplish
- Drive independent research initiatives in robotics foundation models, focusing on breakthrough approaches in perception and manipulation, for example open-vocabulary panoptic scene understanding, scaling up multi-modal LLMs, sim2real/real2sim techniques, end-to-end vision-language-action models, efficient model inference, and video tokenization
- Lead technical projects from conceptualization through deployment, ensuring robust performance in production environments
- Collaborate with platform teams to optimize and scale models for real-world applications
- Contribute to the team's technical strategy and help shape our approach to next-generation robotics challenges

A day in the life
- Design and implement novel foundation model architectures, leveraging our extensive compute infrastructure to train and evaluate at scale
- Collaborate with our world-class research team to solve complex technical challenges
- Lead technical initiatives from conception to deployment, working closely with robotics engineers to integrate your solutions into production systems
- Participate in technical discussions and brainstorming sessions with team leaders and fellow scientists
- Leverage our massive compute cluster and extensive robotics infrastructure to rapidly prototype and validate new ideas
- Transform theoretical insights into practical solutions that can handle the complexities of real-world robotics applications

About the team
At Frontier AI & Robotics, we're not just advancing robotics – we're reimagining it from the ground up. Our team is building the future of intelligent robotics through groundbreaking foundation models and end-to-end learned systems.
We tackle some of the most challenging problems in AI and robotics, from developing sophisticated perception systems to creating adaptive manipulation strategies that work in complex, real-world scenarios. What sets us apart is our unique combination of ambitious research vision and practical impact. We leverage Amazon's massive computational infrastructure and rich real-world datasets to train and deploy state-of-the-art foundation models. Our work spans the full spectrum of robotics intelligence – from multimodal perception using images, videos, and sensor data, to sophisticated manipulation strategies that can handle diverse real-world scenarios. We're building systems that don't just work in the lab, but scale to meet the demands of Amazon's global operations. Join us if you're excited about pushing the boundaries of what's possible in robotics, working with world-class researchers, and seeing your innovations deployed at unprecedented scale.
US, WA, Bellevue
Amazon is looking for world-class scientists to join its AWS Fundamental Research Team as a Principal Applied Scientist, working within a variety of machine learning disciplines. This group is entrusted with developing core machine learning solutions for AWS services. On the AWS Fundamental Research Team you will invent, implement, and deploy state-of-the-art machine learning algorithms and systems. You will build prototypes and explore concepts for large-scale ML solutions across different domains and computation platforms. You will interact closely with our customers and with the academic community. You will be at the heart of a growing and exciting focus area for AWS, working with acclaimed engineers and world-famous scientists.

About the team
Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let that stop you from applying.

Why AWS?
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.

Inclusive Team Culture
Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.

Mentorship & Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship, and other career-advancing resources here to help you develop into a well-rounded professional.

Work/Life Balance
We value work-life harmony. Achieving success at work should never come at the expense of your life at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.