
Cognixion gives voice to a user’s thoughts

Alexa Fund company’s assisted reality tech could unlock speech for hundreds of millions of people who struggle to communicate.

(Editor’s note: This article is the latest installment in a series by Amazon Science delving into the science behind products and services of companies in which Amazon has invested. The Alexa Fund participated in Cognixion’s $12M seed round in November 2021.)

In 2012, Andreas Forsland, founder and CEO of Alexa Fund company Cognixion, became the primary caregiver and communicator for his mother. She was hospitalized with complications from pneumonia and unable to speak for herself.

“That experience opened my eyes to how precious speech really is,” Forsland says. According to a Cognixion analysis of over 1,200 relevant research papers, more than half a billion people worldwide struggle to speak clearly or at conversational speeds, which can hamper their interactions with others and full participation in society.

Forsland wondered whether a technology solution would be feasible and started Cognixion in 2014 to explore that possibility. “We had the gumption to think, ‘Wouldn’t it be neat to have a thought-to-speech interface that just reads your mind?’ We were naïve and curious at the same time.”

Brain–computer interfaces (BCIs) have been around since the 1970s, with demonstrated applications in enabling communication. But their real-world use has so far been limited by the amount of training required, the difficulty of operating them, and performance issues stemming from recording technology, sensors, signal processing, and the interaction between the brain and the BCI itself.

Cognixion’s assisted reality architecture aims to overcome these barriers by integrating a BCI with machine learning algorithms, assistive technology, and augmented reality (AR) applications in a wearable format.

Introducing Cognixion: The world's first "assisted reality" device

The current embodiment of the company’s technology is a non-invasive device called Cognixion ONE. Brainwave patterns associated with visual fixation on interactive objects presented through the headset are detected and decoded. The signals enable hands-free, voice-free control of AR/XR applications to generate speech or send instructions to smart-home components or AI assistants.

“For some people, we make things easy, and for other people, we make things possible. That’s the way we look at it: technology in service of enhancing a human’s ability to do things,” says Forsland.

In an interview with Amazon Science, Forsland described the ins and outs of Cognixion ONE, the next steps in its development, and the longer-term future of assisted reality tech.

  1. Q. 

    Given the wide range of abilities or disabilities that someone might have, how did you go about designing technology that anyone can use?

    A. 

    It all starts with the problem. One of the key constraints in this problem domain is that you can’t make any assumptions about someone’s ability to use their hands or arms or mouth in a meaningful way. So how can you actually drive an interaction with a computer using the limited degrees of freedom that the user has?

    In the extreme case, the user actually has no physical degrees of freedom. The only remaining degree of freedom is attention. So can you use attention as a mechanism to drive interaction with a computer, fully bypassing the rest of the body?

    It turns out that you can, thanks to neuroscience work in this area. You can project certain types of visual stimuli onto a user’s retina and look for their attentional reaction to those stimuli.

    If I give you two images with different movement characteristics, I can tell by the pattern of your brain waves that you’re seeing those two things, and the fact that you're paying attention to one of them actually changes that pattern.

    It takes a tiny bit of flow-state thinking. It’s kind of like when you look at an optical illusion, and you can see the two states. If you can do that, then you can decide between two choices, and as soon as you can do that, I can build an entire interface on top of that. I can ask, ‘Do you want A or do you want B?’, like playing ‘20 Questions.’ It’s sort of the most basic way to differentiate a user’s intent.
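As a concrete illustration of that binary-choice idea, the sketch below selects one of N candidate phrases with repeated two-way decisions, so roughly log2(N) choices suffice. The phrase list and the `choose_left` callback standing in for the BCI read-out are invented for this example.

```python
# Hypothetical sketch: narrowing a user's intent with binary attention
# choices, as in a "20 Questions" interface. Each step halves the set
# of remaining options, so log2(N) decisions select among N phrases.

import math

def select_by_binary_choices(options, choose_left):
    """Narrow `options` to one item via repeated binary decisions.

    `choose_left(left_half, right_half)` stands in for the BCI read-out:
    it returns True if the user attended to the left-hand stimulus.
    """
    remaining = list(options)
    while len(remaining) > 1:
        mid = len(remaining) // 2
        left, right = remaining[:mid], remaining[mid:]
        remaining = left if choose_left(left, right) else right
    return remaining[0]

phrases = ["I am tired", "I am hungry", "Yes", "No",
           "Thank you", "Help me", "Hello", "Goodbye"]

# Simulate a user whose intended phrase is "Help me".
target = "Help me"
picked = select_by_binary_choices(phrases, lambda l, r: target in l)
print(picked)                               # → Help me
print(math.ceil(math.log2(len(phrases))))   # → 3 (choices needed for 8 options)
```

With eight options, three attention decisions are enough; richer stimuli that distinguish more than two targets at once shorten this further.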

    Basically, we considered the hardest possible situation first: a person with no physical capabilities whatsoever. Let’s solve that problem. Then we can start layering stuff on, like gaze tracking, gestures, or keyboards, to further enhance the interaction and make it even more efficient for people with the relevant physical capabilities. But it may turn out that an adaptive keyboard is actually overkill for a lot of interactions. Maybe you can get by with much less.

    Now, if you marry that input with the massive advancements in the last five or ten years in machine learning, you can become much more aggressive about what you think the person is trying to do, or what is appropriate in that situation. You can use that information to minimize the number of interactions required. Ideally, you get to a place where you have a very efficient interface, because the user only has to decide between the things that are most relevant.

    And you can make it much more elaborate by integrating knowledge about the user’s environment, previous utterances, time of day, etc. That’s really the magic of this architecture: It leverages minimum inputs with really aggressive prediction capability to help people communicate smoothly and efficiently.

  2. Q. 

    What types of communication does this technology enable?

    A. 

    First and foremost is speech. And an easy way to understand the impact of this technology is to look at conversational rate. Right now, this conversation is probably on the order of 60 to 150 words per minute, depending on how much coffee we had and so on.

    For a lot of users of our technology, it’s like a pipe dream to even get to 20 or 30. It can take a long time to produce even very basic utterances, along the lines of ‘I am tired.’

    Now imagine breaking through to say, ‘Let’s talk about our day,’ and carrying on a conversation that provides meaning, interest, and value. That is the breakthrough capability that we’re really trying to enable.

    We have this amazing group — our Brainiac Council — of people with speech disabilities, scientists, technologists. We have more than 200 Brainiacs now, and we want to grow the council to 300.

    Cognixion ONE demo

    One of our Brainiacs uses the headset to help him communicate words that are difficult for him to pronounce, like ‘chocolate.’ He owns and operates a business where he performs for other people. During a performance, he can plug the headset directly into his sound system instead of having to talk into a microphone.

    Think of how many other people have something to say but might be overlooked. We want to help them get their point across.

    One possibility we’re exploring for future enhancement of speech generation is providing each user with their own voice, through technologies like voice banking and text-to-speech services such as Amazon Polly on AWS. Personalization to such a degree could make the experience much richer and more meaningful for users.

    But speech generation is only one function of a broad ‘neuroprosthetic.’ People also interact with places, things, and media — and these interactions don’t necessarily require speech. We’re building an Alexa integration to enable home automation control and other enriched experiences. Through the headset, users can interact with their environment, control smart devices, or access news, music, whatever is available.

    In time, a device could allow users to control mobility devices for assisted navigation, robots for household tasks, settings for ambient lighting and temperature. It’s enabling a future where more people can live their daily lives more actively and independently.

  3. Q. 

    What are the next steps toward creating that future?

    A. 

    There are some key technical problems to solve. BCIs historically have been viewed somewhat skeptically, particularly the use of electroencephalography. So our challenge is to come up with a paradigm for stimulus response that enables sufficient expressive capability within the user interface. In other words, can I show you enough different kinds of stimuli to give you meaningful choices so you can efficiently use the system without becoming unnecessarily tired?

    Then it’s like whack-a-mole, or the digital equivalent. When we see a specific frequency come through, and a certain power threshold on it, we act on it. How many different unique frequencies can we disambiguate from one another at any given time?
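The frequency-and-threshold step described here can be sketched as a simple spectral-power check: estimate signal power at each candidate stimulus frequency and act only when one crosses a threshold. Real SSVEP-style decoders use more robust methods (e.g., canonical correlation analysis), and the sampling rate, frequencies, and threshold below are illustrative assumptions, not Cognixion's actual parameters.

```python
# Hypothetical sketch of frequency detection for an attention-driven BCI:
# look for a specific stimulus frequency in the signal and act on it only
# when its spectral power exceeds a threshold.

import numpy as np

def detect_target(signal, fs, candidate_freqs, power_threshold):
    """Return the candidate frequency with the highest spectral power,
    or None if no candidate exceeds `power_threshold`."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    best, best_power = None, power_threshold
    for f in candidate_freqs:
        idx = np.argmin(np.abs(freqs - f))   # nearest FFT bin
        if spectrum[idx] > best_power:
            best, best_power = f, spectrum[idx]
    return best

# Simulate 2 s of a 250 Hz EEG-like signal dominated by a 12 Hz response.
fs = 250
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)
sig = 2.0 * np.sin(2 * np.pi * 12 * t) + rng.normal(0, 0.5, t.size)

print(detect_target(sig, fs, [10, 12, 15], power_threshold=5.0))  # → 12
```

The number of frequencies that can be disambiguated this way is bounded by the frequency resolution of the recording window and by how cleanly each stimulus response separates from background noise, which is exactly the trade-off described above.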

    A simulated view of the interface in a Cognixion device

    Another challenge is that a commercial device should require a nearly zero learning curve. Once you pop it on, you need to be able to use it within minutes, not hours.

    So we might couple the stimulus-response technology with a display, or speakers, or haptics that can give biofeedback to help train your brain: ‘I’m doing this right’ or ‘I’m doing it wrong.’ This would give people the positives and negatives as they interact with it. If you can close those iterations quickly, people learn to use it faster.

    Our goal is to really harden and fortify the reliability and accuracy of what we’re doing, algorithmically. We then have a very robust IP portfolio that could go into mainstream applications, likely in the form of much deeper partnerships.

    In terms of applications, we are pursuing a medical channel and a research channel. Making a medical device is much more challenging than making a consumer device, for a variety of technical reasons: validation, documentation, regulatory considerations. So it takes some time. But the initial indications for use will be speech generation and environmental control.

    Eventually, we could look to expand our indications within the control ‘bubble’ to cover additional interactions with people, places, things, and content. Plus, the system could find applications within three other healthcare bubbles. One is diagnostics in areas like ophthalmology and neurology, thanks to the sensors and closed-loop nature of the device. A second is therapeutics for conditions involving attention, focus, and memory. And the third is remote monitoring in telehealth-type situations, because of the network capabilities.

    The research side uses the same medical-grade hardware, but loaded with different software to enable biometric analysis and development of experimental AR applications. We’re preparing for production and delivery to meet initial demand early next year, and we’re actively seeking research partners who would get early access to the device.

    In addition to collaborators in neuroscience, neuroengineering, bionics, human-computer interaction, and clinical and translational research, we’re also soliciting input from user experience researchers to arrive at a final set of specific technical and use-case requirements.

    We think there’s tremendous opportunity here. And we’re constantly being asked, ‘When can this become mainstream?’ We have some thoughts and ideas about that, of course, but we also want to hear from the research community about the use cases they can dream up.
