Cognixion’s assisted reality headset

Cognixion gives voice to a user’s thoughts

Alexa Fund company’s assisted reality tech could unlock speech for hundreds of millions of people who struggle to communicate.

(Editor’s note: This article is the latest installment in a series by Amazon Science delving into the science behind products and services of companies in which Amazon has invested. The Alexa Fund participated in Cognixion’s $12M seed round in November 2021.)

In 2012, Andreas Forsland, founder and CEO of Alexa Fund company Cognixion, became the primary caregiver and communicator for his mother. She was hospitalized with complications from pneumonia and unable to speak for herself.

“That experience opened my eyes to how precious speech really is,” Forsland says. According to a Cognixion analysis of over 1,200 relevant research papers, more than half a billion people worldwide struggle to speak clearly or at conversational speeds, which can hamper their interactions with others and full participation in society.

Forsland wondered whether a technology solution would be feasible and started Cognixion in 2014 to explore that possibility. “We had the gumption to think, ‘Wouldn’t it be neat to have a thought-to-speech interface that just reads your mind?’ We were naïve and curious at the same time.”

Brain–computer interfaces (BCIs) have been around since the 1970s, with demonstrated applications in enabling communication. But their real-world use has so far been limited by the amount of training they require, the difficulty of operating them, and performance issues in recording technology, sensors, signal processing, and the interaction between the brain and the BCI.

Cognixion’s assisted reality architecture aims to overcome these barriers by integrating a BCI with machine learning algorithms, assistive technology, and augmented reality (AR) applications in a wearable format.

Introducing Cognixion: The world's first "assisted reality" device

The current embodiment of the company’s technology is a non-invasive device called Cognixion ONE. The headset detects and decodes brainwave patterns associated with visual fixation on interactive objects presented in its display. These signals enable hands-free, voice-free control of AR/XR applications to generate speech or send instructions to smart-home components or AI assistants.

“For some people, we make things easy, and for other people, we make things possible. That’s the way we look at it: technology in service of enhancing a human’s ability to do things,” says Forsland.

In an interview with Amazon Science, Forsland described the ins and outs of Cognixion ONE, the next steps in its development, and the longer-term future of assisted reality tech.

  1. Q. 

    Given the wide range of abilities or disabilities that someone might have, how did you go about designing technology that anyone can use?

    A. 

    It all starts with the problem. One of the key constraints in this problem domain is that you can’t make any assumptions about someone’s ability to use their hands or arms or mouth in a meaningful way. So how can you actually drive an interaction with a computer using the limited degrees of freedom that the user has?

    In the extreme case, the user actually has no physical degrees of freedom. The only remaining degree of freedom is attention. So can you use attention as a mechanism to drive interaction with a computer, fully bypassing the rest of the body?

    It turns out that you can, thanks to neuroscience work in this area. You can project certain types of visual stimuli onto a user’s retina and look for their attentional reaction to those stimuli.


    If I give you two images with different movement characteristics, I can tell by the pattern of your brain waves that you’re seeing those two things, and the fact that you're paying attention to one of them actually changes that pattern.

    It takes a tiny bit of flow-state thinking. It’s kind of like when you look at an optical illusion, and you can see the two states. If you can do that, then you can decide between two choices, and as soon as you can do that, I can build an entire interface on top of that. I can ask, ‘Do you want A or do you want B?’, like playing ‘20 Questions.’ It’s sort of the most basic way to differentiate a user’s intent.
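    The ‘20 Questions’ idea above can be sketched in a few lines: a single binary attention signal, applied repeatedly, is enough to navigate a whole menu of utterances. The menu contents and the chooser here are hypothetical illustrations, not Cognixion’s actual interface; in the real system the 0/1 decision would come from decoding which of two stimuli the user attends to.

```python
def select_phrase(menu, choose):
    """Walk a nested (left, right) menu using repeated binary choices.

    `choose(left, right)` returns 0 or 1 -- scripted here, but in practice
    decoded from the user's attentional response to two visual stimuli.
    """
    node = menu
    while isinstance(node, tuple):
        node = node[choose(*node)]
    return node

# A tiny illustrative menu: each binary decision halves the options.
menu = (("I am tired", "I am hungry"), ("Yes, please", "No, thank you"))
```

    With a four-item menu, two decisions reach any phrase; in general, n binary choices address 2^n options, which is why even this minimal input channel scales to a usable vocabulary.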

    Basically, we considered the hardest possible situation first: a person with no physical capabilities whatsoever. Let’s solve that problem. Then we can start layering stuff on, like gaze tracking, gestures, or keyboards, to further enhance the interaction and make it even more efficient for people with the relevant physical capabilities. But it may turn out that an adaptive keyboard is actually overkill for a lot of interactions. Maybe you can get by with much less.


    Now, if you marry that input with the massive advancements in the last five or ten years in machine learning, you can become much more aggressive about what you think the person is trying to do, or what is appropriate in that situation. You can use that information to minimize the number of interactions required. Ideally, you get to a place where you have a very efficient interface, because the user only has to decide between the things that are most relevant.

    And you can make it much more elaborate by integrating knowledge about the user’s environment, previous utterances, time of day, etc. That’s really the magic of this architecture: It leverages minimum inputs with really aggressive prediction capability to help people communicate smoothly and efficiently.
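    The ‘aggressive prediction’ described above can be sketched minimally: score a phrase bank against whatever context is available and present only the top few candidates, so each binary decision carries more weight. The word-overlap scoring below is a toy heuristic standing in for a real predictive model, and all phrases are invented for illustration.

```python
def top_choices(context_words, phrase_bank, k=2):
    """Rank candidate utterances by crude context overlap.

    The user then only has to decide among the k most relevant phrases,
    minimizing the number of interactions needed to communicate.
    """
    def score(phrase):
        return sum(w in phrase.lower() for w in context_words)
    return sorted(phrase_bank, key=score, reverse=True)[:k]
```

    In a real system the score would come from a language model conditioned on environment, previous utterances, and time of day, but the interface contract is the same: fewer, better choices per decision.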

  2. Q. 

    What types of communication does this technology enable?

    A. 

    First and foremost is speech. And an easy way to understand the impact of this technology is to look at conversational rate. Right now, this conversation is probably on the order of 60 to 150 words per minute, depending on how much coffee we had and so on.

    For a lot of users of our technology, it’s like a pipe dream to even get to 20 or 30. It can take a long time to produce even very basic utterances, along the lines of ‘I am tired.’

    Now imagine breaking through to say, ‘Let’s talk about our day,’ and carrying on a conversation that provides meaning, interest, and value. That is the breakthrough capability that we’re really trying to enable.

    We have this amazing group — our Brainiac Council — of people with speech disabilities, scientists, technologists. We have more than 200 Brainiacs now, and we want to grow the council to 300.

    Cognixion ONE demo

    One of our Brainiacs uses the headset to help him communicate words that are difficult for him to pronounce, like ‘chocolate.’ He owns and operates a business where he performs for other people. During a performance, he can plug the headset directly into his sound system instead of having to talk into a microphone.

    Think of how many other people have something to say but might be overlooked. We want to help them get their point across.

    One possibility we’re exploring for future enhancement of speech generation is providing each user with their own voice, through voice banking and text-to-speech technology such as Amazon Polly, the AWS text-to-speech service. Personalization to such a degree could make the experience much richer and more meaningful for users.

    But speech generation is only one function of a broad ‘neuroprosthetic.’ People also interact with places, things, and media — and these interactions don’t necessarily require speech. We’re building an Alexa integration to enable home automation control and other enriched experiences. Through the headset, users can interact with their environment, control smart devices, or access news, music, whatever is available.

    In time, a device could allow users to control mobility devices for assisted navigation, robots for household tasks, settings for ambient lighting and temperature. It’s enabling a future where more people can live their daily lives more actively and independently.

  3. Q. 

    What are the next steps toward creating that future?

    A. 

    There are some key technical problems to solve. BCIs historically have been viewed somewhat skeptically, particularly the use of electroencephalography. So our challenge is to come up with a paradigm for stimulus response that enables sufficient expressive capability within the user interface. In other words, can I show you enough different kinds of stimuli to give you meaningful choices so you can efficiently use the system without becoming unnecessarily tired?

    Then it’s like whack-a-mole, or the digital equivalent. When we see a specific frequency come through, and a certain power threshold on it, we act on it. How many different unique frequencies can we disambiguate from one another at any given time?
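    A hedged sketch of that ‘whack-a-mole’ decoding step: estimate the signal power at each candidate stimulus frequency in a window of samples, and act only when one crosses a power threshold. The Goertzel algorithm computes power at a single target frequency without a full FFT, which suits a small, fixed set of stimulus frequencies. The frequencies, threshold, and function names below are illustrative, not Cognixion’s actual pipeline.

```python
import math

def goertzel_power(samples, target_hz, sample_rate):
    """Signal power near target_hz in the window (Goertzel algorithm)."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)  # nearest DFT bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def detect_stimulus(samples, candidate_hz, sample_rate, threshold):
    """Return the candidate frequency whose power crosses the threshold,
    or None if no single stimulus dominates the window."""
    powers = {f: goertzel_power(samples, f, sample_rate) for f in candidate_hz}
    best = max(powers, key=powers.get)
    return best if powers[best] >= threshold else None
```

    The practical limit the interview alludes to follows directly from this scheme: candidate frequencies must be far enough apart (relative to the window length) that their power estimates stay separable in noisy EEG, which caps how many simultaneous on-screen choices the interface can offer.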

    A simulated view of the interface in a Cognixion device

    Another challenge is that a commercial device needs a nearly zero learning curve. Once you pop it on, you need to be able to use it within minutes, not hours.

    So we might couple the stimulus-response technology with a display, or speakers, or haptics that can give biofeedback to help train your brain: ‘I’m doing this right’ or ‘I’m doing it wrong.’ This would give people the positives and negatives as they interact with it. If you can close those iterations quickly, people learn to use it faster.

    Our goal is to really harden and fortify the reliability and accuracy of what we’re doing, algorithmically. We then have a very robust IP portfolio that could go into mainstream applications, likely in the form of much deeper partnerships.


    In terms of applications, we are pursuing a medical channel and a research channel. Making a medical device is much more challenging than making a consumer device, for a variety of reasons: validation, documentation, regulatory considerations. So it takes some time. But the initial indications for use will be speech generation and environmental control.

    Eventually, we could look to expand our indications within the control ‘bubble’ to cover additional interactions with people, places, things, and content. Plus, the system could find applications within three other healthcare bubbles. One is diagnostics in areas like ophthalmology and neurology, thanks to the sensors and closed-loop nature of the device. A second is therapeutics for conditions involving attention, focus, and memory. And the third is remote monitoring in telehealth-type situations, because of the network capabilities.

    The research side uses the same medical-grade hardware, but loaded with different software to enable biometric analysis and development of experimental AR applications. We’re preparing for production and delivery to meet initial demand early next year, and we’re actively seeking research partners who would get early access to the device.

    In addition to collaborators in neuroscience, neuroengineering, bionics, human-computer interaction, and clinical and translational research, we’re also soliciting input from user experience researchers to finalize a specific set of technical and use-case requirements.

    We think there’s tremendous opportunity here. And we’re constantly being asked, ‘When can this become mainstream?’ We have some thoughts and ideas about that, of course, but we also want to hear from the research community about the use cases they can dream up.
