(Editor’s note: This article is the latest installment in a series by Amazon Science delving into the science behind products and services of companies in which Amazon has invested. The Amazon Alexa Fund first invested in Endel in 2018 and earlier this year participated in their $5 million Series A led by True Ventures.)
Recently, Endel launched an updated and streamlined Endel skill for Alexa that includes the “molecular mechanisms” soundscape with original vocals, music, and voiceovers by Grimes.
The company made major headlines earlier this fall when c (the artist’s new lower-case, italicized name, inspired by the symbol for the speed of light) released “AI Lullaby”, a scientifically engineered sleep soundscape that’s now available on Alexa. c initiated the collaboration with Endel after using the app while searching for sleep aids for her young son.
Endel was founded in 2018 by a team of six. It is now a 30-person operation focused on creating personal artificial intelligence-powered soundscapes that take into account an individual’s immediate conditions. It does this by assessing a person’s current state and generating an appropriate soundscape from components of its sound engine. This process was born out of scientific principles about sound’s effect on the human body and mind.
In time for the release of the updated skill for Alexa, Amazon Science contributor Tyler Hayes spoke with Endel co-founders Oleg Stavitsky (CEO) and Dmitry Evgrafov (sound designer) about how Endel uses a variety of contextual data points to play the right sounds at the right time.
Q. What are some of the contextual signals you use to provide personalized sounds?
Stavitsky: Circadian rhythm is one. Each person’s body has a natural daily rhythm: an internal clock. Even if you can’t explain it exactly, you’ve likely felt the physical or mental changes happening on a daily cycle. The circadian rhythm is a sleep-wake cycle that regulates the secretion of a sleep hormone called melatonin. It repeats every 24 hours and is constantly fine-tuned by natural light levels. Scientists have been observing the circadian rhythm for some time now, and in 2017 the Nobel Prize was awarded to three Americans for their discovery of the molecular mechanisms that control it.
We use these universal rhythms as a baseline for our sound personalization. Everyone’s circadian graph will look different depending on where they live and their sleep habits. We also use signals such as user location and time to estimate natural light levels for further personalization. In addition to the circadian rhythm, we use the ultradian rhythm, a rest-activity cycle that regulates cognitive state, mood, and energy level. It consists of roughly 110-minute energy level loops.
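In rough code, a rhythm-based baseline like the one Stavitsky describes might look something like the sketch below. The cosine shapes, wake time, and blend weights are purely illustrative assumptions for this article, not Endel’s actual model; only the 24-hour circadian cycle and the roughly 110-minute ultradian loop come from the description above.

```python
import math

def circadian_energy(hour, wake_hour=7.0):
    """Rough circadian energy estimate: low at night, peaking mid-day.
    A single cosine centered about eight hours after waking; illustrative only."""
    phase = (hour - wake_hour - 8.0) / 24.0 * 2 * math.pi
    return 0.5 + 0.5 * math.cos(phase)

def ultradian_energy(minutes_awake, cycle_min=110.0):
    """A superimposed ~110-minute rest-activity loop, per the interview."""
    phase = minutes_awake / cycle_min * 2 * math.pi
    return 0.5 + 0.5 * math.cos(phase)

def energy_estimate(hour, wake_hour=7.0):
    """Blend the two rhythms into a single 0..1 energy score."""
    minutes_awake = max(0.0, (hour - wake_hour) * 60.0)
    return (0.7 * circadian_energy(hour, wake_hour)
            + 0.3 * ultradian_energy(minutes_awake))
```

A soundscape engine could then map a low score to mellower material and a high score to more energetic material; how Endel actually weights these signals is not public.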
Evgrafov: Curated playlists full of piano or classical guitar may feel relaxing to some people at certain points in the day, but those ways of relaxing with music can’t adjust to individual factors. To use curated playlists effectively for specific tasks, the onus falls on the listener to know the specifics of their own circadian and ultradian rhythms. Instead, our app or skill creates a personalized circadian rhythm chart for each listener to target the user’s desired mood through sound. Are you in a natural energy slump, but still trying to focus? We adjust accordingly.
Diversify your Endel routines with our freshly updated Alexa skill featuring:
• The AI Lullaby soundscape with voiceovers by Grimes. Don’t miss a chance to chat with Grimes until Jan 10, 21
• A streamlined experience to address your feedback: start a soundscape in one step
— Endel (@EndelSound) November 20, 2020
In the case of Alexa, we use local information such as time of day, weather, and estimated natural light exposure, from which we infer the circadian rhythm phase. Alexa customers must first create an account with us to use the skill, and can learn about our privacy policy. With our iOS app, health data is also a key signal for creating personalized sound. Using a person’s heart rate as a real-time input is one essential tool for soundscape personalization.
We can use real-time heart rate data from people wearing fitness trackers or smartwatches like Apple Watch, if they’ve agreed to allow access. With access to heart rate data, we can recognize prolonged spikes and adapt the BPM to try to bring the heart rate back to a resting level. If possible, in the future, we would be very interested in providing this kind of personalization with the new Amazon Halo.
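A minimal sketch of that kind of feedback loop is shown below. The spike threshold, tempo step, and clamping range are our own illustrative choices, not Endel’s published parameters:

```python
def adapt_tempo(music_bpm, heart_rate, resting_hr=60,
                step=2, min_bpm=40, max_bpm=120):
    """Nudge the soundscape tempo based on the listener's heart rate:
    slow the music when the heart rate spikes well above resting,
    ease it down slightly in between, and hold when already calm.
    Thresholds here are illustrative, not Endel's actual values."""
    if heart_rate > resting_hr * 1.2:   # a pronounced spike
        target = music_bpm - step       # slow the music to de-arouse
    elif heart_rate < resting_hr:
        target = music_bpm              # already at rest; hold tempo
    else:
        target = music_bpm - 1          # mildly elevated; ease down
    return max(min_bpm, min(max_bpm, target))
```

Run each time a new heart-rate sample arrives, this converges the music toward a slower tempo whenever the listener stays elevated, which is the general behavior described above.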
BPM isn’t the only tool we use to influence human physiology. One study by Luciano Bernardi looked at how swelling crescendos and deflating decrescendos affect our physiology. Bernardi found that music with a series of crescendos generally led to increased blood pressure, heart rate, and respiration, while selections with decrescendos typically had the opposite effect.
Another study looking at effects on heart rate variability when exposed to different styles of "relaxing" music found that "new age" music induced a shift in heart rate variability from higher to lower frequencies, independent of a listener’s music preference. These and other studies suggest that music can go beyond evoking emotion to impacting cardiovascular function.
Q: How has music theory informed the types of sound your Alexa skill produces?
Evgrafov: For music composition, we first used the pentatonic scale, a five-note set of pitches, because of its popularity across modern music.
Listeners may also notice that the AI-powered soundscapes are often very simple. Using less complex tones, melodies, and movement helps ease the burden on our minds. We started with simple ratios of two tonal frequencies, like the octave, 2:1, or the perfect fifth, 3:2, because those are pleasing to the brain. A new model suggests music is pleasing when it triggers a rhythmically consistent pattern in certain auditory neurons.
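Those interval ratios are easy to compute directly. The sketch below derives the octave and perfect fifth from a base tone and, for comparison, lists an equal-tempered major pentatonic scale; the choice of A4 (440 Hz) as the root is an arbitrary example, not anything specified by Endel.

```python
def interval_freq(base_hz, ratio):
    """Frequency of a tone a simple just-intonation ratio above a base tone."""
    num, den = ratio
    return base_hz * num / den

A4 = 440.0
octave = interval_freq(A4, (2, 1))          # 2:1 ratio -> 880.0 Hz
perfect_fifth = interval_freq(A4, (3, 2))   # 3:2 ratio -> 660.0 Hz

# A major pentatonic scale as equal-tempered semitone offsets from the root
PENTATONIC_STEPS = [0, 2, 4, 7, 9]
pentatonic_hz = [A4 * 2 ** (s / 12) for s in PENTATONIC_STEPS]
```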
We try to reduce brain fatigue in other ways, too. While complex song structures and unique melodies may sound nice, they force our brains to work a little harder to make sense of them. This auditory experience creates alertness in listeners. Sometimes that’s the goal of the listener, but not always. It can be difficult to determine if a song uses complex or simple elements, especially without musical training. That’s why one piece of classical music might not lull listeners into a state of relaxation in the way others do.
We employ models to determine which sounds are best suited for relaxation and which are best suited for alertness and focus. Relaxation is best facilitated with mellow tones, slow chord changes, and simple structures. Our brains are constantly analyzing sound, and the less detail there is, the less attention is dedicated to that task. This helps facilitate relaxation more quickly and for longer periods.
The sounds that we find most calming are also linked to our biology. Research by Lee Salk dating back to the 1960s showed that infants exposed to a recorded heartbeat of 72 bpm at 85 dB overwhelmingly appeared happier: they cried less and gained weight more easily. Studies continue to show how lower frequencies and bass can be calming.
Q. What are your plans for evolving your soundscapes, and how will science play a role in the evolution of Endel?
Stavitsky: To effectively personalize sound through time and tone, we have based our soundscapes on the scientific principles that Dmitry has described above. To validate and take our research-based soundscapes further, we have consulted many experts.
For example, in the initial stages of figuring out how helpful Endel could be for people, we contacted Mihaly Csikszentmihalyi, author of the book Flow. Csikszentmihalyi designed his own survey methodology while writing the book to figure out whether people were “in flow” — a focused mental state conducive to productivity. We adapted Csikszentmihalyi’s survey to be interactive inside the app. Listeners were continually asked about their feelings, state of being, and mood to improve the effectiveness of the sounds.
Sleep scientist Roy Raymann of SleepScore Labs has been instrumental in helping us create soundscapes that naturally facilitate sleep. The latest advancement is a sleep-onset routine: the same jingle or sounds are played around the same time each night to cue the body into a restful phase.
We use broadband noises, sounds drawn from a wide range of frequencies, because broadband sound has also been shown to reduce sleep onset latency. Further into the sleep cycle, Endel incorporates nature sounds, such as waves, that resemble human breathing, because hearing breathing-like sounds can help lull people into sleep.
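Broadband noise is straightforward to illustrate: white noise, the simplest case, spreads energy evenly across frequencies. The seeded generator below is a generic illustration for this article, not a piece of Endel’s sound engine:

```python
import random

def white_noise(n_samples, amplitude=0.5, seed=0):
    """Generate broadband (white) noise as uniform random samples.
    Seeding makes the output reproducible; a real engine would stream
    this continuously rather than build a fixed list."""
    rng = random.Random(seed)
    return [rng.uniform(-amplitude, amplitude) for _ in range(n_samples)]
```

Shaping this flat spectrum (for example, attenuating high frequencies to approximate pink or brown noise) would give the softer "colored" noises mentioned later in the interview.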
We also have partnered with Germany’s largest scientific institution to study the effect of colored noises on concentration in a workspace environment, and we’re working with a brain wave analysis company for a validation experiment. The study will monitor brain activity of participants listening to Endel, popular streaming music playlists, and silence, to compare the effectiveness at achieving the state of flow.
As a team, we’re rapidly evolving to incorporate the latest data to help listeners with their goals. One example: we’re currently exploring sound masking, which will lead to new ways of listening across varied environments. But other types of sounds and scenarios informed by real-time listener data are in the works, too.
Our unique ability to adapt to every individual, combined with our creative, multidisciplinary approach, is our magic potion. The scientific principles and research incorporated into the platform are what make Endel so powerful.