Cognixion’s assisted reality headset
Cognixion’s assisted reality architecture aims to overcome speech barriers by integrating a brain-computer interface with machine learning algorithms, assistive technology, and augmented reality (AR) applications in a wearable format.

Cognixion gives voice to a user’s thoughts

Alexa Fund company’s assisted reality tech could unlock speech for hundreds of millions of people who struggle to communicate.

(Editor’s note: This article is the latest installment in a series by Amazon Science delving into the science behind products and services of companies in which Amazon has invested. The Alexa Fund participated in Cognixion’s $12M seed round in November 2021.)

In 2012, Andreas Forsland, founder and CEO of Alexa Fund company Cognixion, became the primary caregiver and communicator for his mother. She was hospitalized with complications from pneumonia and unable to speak for herself.

“That experience opened my eyes to how precious speech really is,” Forsland says. According to a Cognixion analysis of over 1,200 relevant research papers, more than half a billion people worldwide struggle to speak clearly or at conversational speeds, which can hamper their interactions with others and full participation in society.

Forsland wondered whether a technology solution would be feasible and started Cognixion in 2014 to explore that possibility. “We had the gumption to think, ‘Wouldn’t it be neat to have a thought-to-speech interface that just reads your mind?’ We were naïve and curious at the same time.”

Brain–computer interfaces (BCIs) have been around since the 1970s, with demonstrated applications in enabling communication. But their real-world use has so far been limited by the amount of training required, the difficulty of operating them, and performance issues in recording technology, sensors, signal processing, and the interaction between the brain and the BCI.

Cognixion’s assisted reality architecture aims to overcome these barriers by integrating a BCI with machine learning algorithms, assistive technology, and augmented reality (AR) applications in a wearable format.

Introducing Cognixion: The world's first "assisted reality" device

The current embodiment of the company’s technology is a non-invasive device called Cognixion ONE. Brainwave patterns associated with visual fixation on interactive objects presented through the headset are detected and decoded. The signals enable hands-free, voice-free control of AR/XR applications to generate speech or send instructions to smart-home components or AI assistants.

“For some people, we make things easy, and for other people, we make things possible. That’s the way we look at it: technology in service of enhancing a human’s ability to do things,” says Forsland.

In an interview with Amazon Science, Forsland described the ins and outs of Cognixion ONE, the next steps in its development, and the longer-term future of assisted reality tech.

  1. Q. 

    Given the wide range of abilities or disabilities that someone might have, how did you go about designing technology that anyone can use?


    It all starts with the problem. One of the key constraints in this problem domain is that you can’t make any assumptions about someone’s ability to use their hands or arms or mouth in a meaningful way. So how can you actually drive an interaction with a computer using the limited degrees of freedom that the user has?

    In the extreme case, the user actually has no physical degrees of freedom. The only remaining degree of freedom is attention. So can you use attention as a mechanism to drive interaction with a computer, fully bypassing the rest of the body?

    It turns out that you can, thanks to neuroscience work in this area. You can project certain types of visual stimuli onto a user’s retina and look for their attentional reaction to those stimuli.

    If I give you two images with different movement characteristics, I can tell by the pattern of your brain waves that you’re seeing those two things, and the fact that you're paying attention to one of them actually changes that pattern.

    It takes a tiny bit of flow-state thinking. It’s kind of like when you look at an optical illusion, and you can see the two states. If you can do that, then you can decide between two choices, and as soon as you can do that, I can build an entire interface on top of that. I can ask, ‘Do you want A, or do you want B?’, like playing ‘20 Questions.’ It’s sort of the most basic way to differentiate a user’s intent.
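    The ‘A or B’ idea above can be sketched as a simple loop: each attention-driven choice halves the remaining options. This is an illustrative toy, not Cognixion’s interface; the `ask` callback stands in for the decoded brain signal, which is simulated here in software.

```python
# Toy sketch of a binary-choice ("20 Questions") interface. The `ask`
# callback stands in for a decoded attention signal from the BCI; here
# it is simulated in software.

def binary_select(options, ask):
    """Narrow `options` to a single item via repeated A/B choices.

    `ask(a, b)` returns whichever half the user attended to.
    """
    while len(options) > 1:
        mid = len(options) // 2
        options = ask(options[:mid], options[mid:])
    return options[0]

# Simulate a user whose intended phrase is "water".
phrases = ["help", "water", "tired", "yes", "no", "thank you"]
choice = binary_select(phrases, lambda a, b: a if "water" in a else b)
print(choice)  # water
```

    With n options, roughly log2(n) binary choices suffice, which is why even a single reliable attention signal can drive a complete interface.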

    Basically, we considered the hardest possible situation first: a person with no physical capabilities whatsoever. Let’s solve that problem. Then we can start layering stuff on, like gaze tracking, gestures, or keyboards, to further enhance the interaction and make it even more efficient for people with the relevant physical capabilities. But it may turn out that an adaptive keyboard is actually overkill for a lot of interactions. Maybe you can get by with much less.

    Now, if you marry that input with the massive advancements in the last five or ten years in machine learning, you can become much more aggressive about what you think the person is trying to do, or what is appropriate in that situation. You can use that information to minimize the number of interactions required. Ideally, you get to a place where you have a very efficient interface, because the user only has to decide between the things that are most relevant.

    And you can make it much more elaborate by integrating knowledge about the user’s environment, previous utterances, time of day, etc. That’s really the magic of this architecture: It leverages minimum inputs with really aggressive prediction capability to help people communicate smoothly and efficiently.
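    The context-driven ranking described above can be illustrated with a toy scorer. The signals (time of day, previous utterance) come from the interview, but the weights, tags, and data structures here are invented for the sketch and are not Cognixion’s actual model.

```python
# Illustrative sketch: rank candidate utterances by simple context signals
# so the user only chooses among the few most relevant. Features and
# weights are hypothetical.

def rank_candidates(candidates, context, top_k=3):
    def score(utterance):
        s = 0.0
        # Boost phrases tagged for the current time of day.
        if context.get("time_of_day") in utterance.get("tags", []):
            s += 1.0
        # Boost phrases that commonly follow the previous utterance.
        followers = context.get("follows", {}).get(context.get("previous"), [])
        if utterance["text"] in followers:
            s += 2.0
        return s

    ranked = sorted(candidates, key=score, reverse=True)
    return [u["text"] for u in ranked[:top_k]]

candidates = [
    {"text": "Good morning", "tags": ["morning"]},
    {"text": "Good night", "tags": ["evening"]},
    {"text": "Yes, please", "tags": []},
    {"text": "I'm thirsty", "tags": []},
]
context = {
    "time_of_day": "morning",
    "previous": "Would you like a drink?",
    "follows": {"Would you like a drink?": ["Yes, please", "I'm thirsty"]},
}
print(rank_candidates(candidates, context))
# ['Yes, please', "I'm thirsty", 'Good morning']
```

    The point of the sketch is the shape of the system, not the scoring details: the fewer candidates the interface has to present, the fewer binary decisions the user has to make.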

  2. Q. 

    What types of communication does this technology enable?


    First and foremost is speech. And an easy way to understand the impact of this technology is to look at conversational rate. Right now, this conversation is probably on the order of 60 to 150 words per minute, depending on how much coffee we had and so on.

    For a lot of users of our technology, it’s like a pipe dream to even get to 20 or 30. It can take a long time to produce even very basic utterances, along the lines of ‘I am tired.’

    Now imagine breaking through to say, ‘Let’s talk about our day,’ and carrying on a conversation that provides meaning, interest, and value. That is the breakthrough capability that we’re really trying to enable.

    We have this amazing group — our Brainiac Council — of people with speech disabilities, scientists, technologists. We have more than 200 Brainiacs now, and we want to grow the council to 300.

    Cognixion ONE demo

    One of our Brainiacs uses the headset to help him communicate words that are difficult for him to pronounce, like ‘chocolate.’ He owns and operates a business where he performs for other people. During a performance, he can plug the headset directly into his sound system instead of having to talk into a microphone.

    Think of how many other people have something to say but might be overlooked. We want to help them get their point across.

    One possibility we’re exploring for future enhancement of speech generation is providing each user with their own voice, through technologies like voice banking and text-to-speech services like Amazon Polly. Personalization to such a degree could make the experience much richer and more meaningful for users.

    But speech generation is only one function of a broad ‘neuroprosthetic.’ People also interact with places, things, and media — and these interactions don’t necessarily require speech. We’re building an Alexa integration to enable home automation control and other enriched experiences. Through the headset, users can interact with their environment, control smart devices, or access news, music, whatever is available.

    In time, a device could allow users to control mobility devices for assisted navigation, robots for household tasks, settings for ambient lighting and temperature. It’s enabling a future where more people can live their daily lives more actively and independently.

  3. Q. 

    What are the next steps toward creating that future?


    There are some key technical problems to solve. BCIs historically have been viewed somewhat skeptically, particularly the use of electroencephalography. So our challenge is to come up with a paradigm for stimulus response that enables sufficient expressive capability within the user interface. In other words, can I show you enough different kinds of stimuli to give you meaningful choices so you can efficiently use the system without becoming unnecessarily tired?

    Then it’s like whack-a-mole, or the digital equivalent. When we see a specific frequency come through, and a certain power threshold on it, we act on it. How many different unique frequencies can we disambiguate from one another at any given time?
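    The detection idea above can be sketched with a synthetic EEG trace: compute the power spectrum and act when the power at one target’s flicker frequency stands out from the noise floor. The sampling rate, target frequencies, and threshold below are illustrative assumptions, and real SSVEP-style decoding is far more involved.

```python
# Sketch of frequency-threshold detection: each on-screen target flickers
# at its own frequency; we act when EEG power at one of those frequencies
# crosses a threshold relative to a crude noise floor. Synthetic data only.
import numpy as np

FS = 250  # assumed sampling rate in Hz
TARGETS = {"A": 8.0, "B": 11.0, "C": 14.0}  # assumed flicker frequencies

def detect_target(signal, fs=FS, threshold=3.0):
    """Return the target whose frequency shows above-threshold power, or None."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    baseline = np.median(spectrum[1:])  # crude noise floor, skipping DC
    best, best_ratio = None, threshold
    for name, f in TARGETS.items():
        idx = np.argmin(np.abs(freqs - f))
        ratio = spectrum[idx] / baseline
        if ratio > best_ratio:
            best, best_ratio = name, ratio
    return best

# Simulate 2 s of noisy EEG with an 11 Hz attentional response.
rng = np.random.default_rng(0)
t = np.arange(0, 2, 1.0 / FS)
eeg = np.sin(2 * np.pi * 11.0 * t) + 0.5 * rng.standard_normal(t.size)
print(detect_target(eeg))  # B
```

    The disambiguation question in the interview maps directly onto this sketch: the more flicker frequencies you can reliably separate at once, the more simultaneous choices the interface can present.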

    A simulated view of the interface in a Cognixion device
    “For some people, we make things easy, and for other people, we make things possible. That’s the way we look at it: technology in service of enhancing a human’s ability to do things,” says Andreas Forsland, founder and CEO of Cognixion.

    Another challenge is that a commercial device should have a nearly zero learning curve. Once you pop it on, you need to be able to use it within minutes, not hours.

    So we might couple the stimulus-response technology with a display, or speakers, or haptics that can give biofeedback to help train your brain: ‘I’m doing this right’ or ‘I’m doing it wrong.’ This would give people the positives and negatives as they interact with it. If you can close those iterations quickly, people learn to use it faster.

    Our goal is to really harden and fortify the reliability and accuracy of what we’re doing, algorithmically. We then have a very robust IP portfolio that could go into mainstream applications, likely in the form of much deeper partnerships.

    In terms of applications, we are pursuing a medical channel and a research channel. Making a medical device is much more challenging than making a consumer device, for a variety of technical reasons: validation, documentation, regulatory considerations. So it takes some time. But the initial indications for use will be speech generation and environmental control.

    Eventually, we could look to expand our indications within the control ‘bubble’ to cover additional interactions with people, places, things, and content. Plus, the system could find applications within three other healthcare bubbles. One is diagnostics in areas like ophthalmology and neurology, thanks to the sensors and closed-loop nature of the device. A second is therapeutics for conditions involving attention, focus, and memory. And the third is remote monitoring in telehealth-type situations, because of the network capabilities.

    The research side uses the same medical-grade hardware, but loaded with different software to enable biometric analysis and development of experimental AR applications. We’re preparing for production and delivery to meet initial demand early next year, and we’re actively seeking research partners who would get early access to the device.

    In addition to collaborators in neuroscience, neuroengineering, bionics, human-computer interaction, and clinical and translational research, we’re also soliciting input from user experience research to arrive at a final set of specific technical requirements and use-case requirements.

    We think there’s tremendous opportunity here. And we’re constantly being asked, ‘When can this become mainstream?’ We have some thoughts and ideas about that, of course, but we also want to hear from the research community about the use cases they can dream up.
