Brain.fm Theory & Process

At Brain.fm we develop and deliver functional music, directly optimized for its effects on behavior. Our goal is to help the listener achieve desired mental states such as focus or sleep.

Music is a potent phenomenon in human auditory perception and cognition. It can make us laugh or cry; it can spark us awake or put us to sleep. But can we purposefully design music to reliably shape our behavior?

At Brain.fm, we first draw on neuroscience and perceptual psychology to develop hypotheses about what kinds of sounds could help us study, push us through a workout, get us to sleep, or serve any number of other functions. Then we create and test these sounds at scale to find out what works. This two-stage process (hypotheses followed by testing) lets us discover functional music efficiently by cutting down the space of possibilities before testing. Such a process seems obvious in science, but in functional music it is rare: some producers never test their music, while others attempt a purely data-driven approach without guiding hypotheses.

We can apply this process whenever basic research turns up a promising lead on how some aspect of sound might shape behavior. We have already followed several lines of evidence this way, through to their successful application in music; more are in progress, and more will follow. As long as we keep learning about the mind and brain, this process can translate the latest findings into something people can use, driving innovation in functional music.

Example: Neural oscillations

Populations of neurons can synchronize their activity, perhaps to communicate across distant brain regions or to better perform computations (Buzsaki, 2004). Neural oscillations thus form a critical middle ground linking single-neuron activity to behavior, and patterns of oscillatory activity across particular neuronal networks can be seen as fingerprints of cognitive processes (Siegel & Engel, 2012). Recent work has shown that modifying ongoing oscillations can improve cognitive performance (Albouy et al., 2017; Ngo et al., 2013), though these results come from laboratory methods that are costly and cumbersome (such as magnetic stimulation). But we also know that sound can modify neural oscillations (Luo & Poeppel, 2007; Doelling & Poeppel, 2015), and in sleep research simple acoustic stimuli like clicks and pink noise have been used to drive slow-wave activity, producing benefits to deep sleep and memory retention (Papalambros et al., 2017).

This knowledge has critical implications for sound design: the influence of acoustic modulation on oscillatory activity could be used to great effect if applied deliberately. After iterating between creating and testing music, we have found several regimes in which this appears to be true. A particularly strong finding is that beta-rate modulation (12-20 Hz, between the rates heard as beats and as roughness) appears effective for reducing attentional lapses, and it is now a core part of Brain.fm's Focus music technology.¹

¹ This work is supported by NSF-STTR 1720698, a government grant to Brain.fm.
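To make the mechanism concrete, below is a minimal sketch in Python of amplitude-modulating audio at a beta rate, assuming a simple raised-cosine envelope; the rate, depth, and waveshape here are illustrative placeholders, not Brain.fm's actual production parameters.

```python
# Sketch: beta-rate (12-20 Hz) amplitude modulation of an audio signal.
# Illustrative only; rate, depth, and waveshape are hypothetical.
import numpy as np

def apply_beta_modulation(audio, sr, rate_hz=16.0, depth=0.5):
    """Multiply `audio` by a slow periodic envelope at `rate_hz`.

    audio:   1-D float array of samples
    sr:      sample rate in Hz
    rate_hz: modulation rate (beta range is roughly 12-20 Hz)
    depth:   0 = no modulation, 1 = full-depth modulation
    """
    t = np.arange(len(audio)) / sr
    # Raised-cosine envelope oscillating between (1 - depth) and 1.
    envelope = 1.0 - depth * 0.5 * (1.0 + np.cos(2 * np.pi * rate_hz * t))
    return audio * envelope

# Example: modulate one second of a 440 Hz tone at 16 Hz.
sr = 44100
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
modulated = apply_beta_modulation(tone, sr, rate_hz=16.0, depth=0.6)
```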

Example: Salience and distraction

Sounds can grab our attention even when we don't want them to. If we can understand and predict this effect ("auditory salience"), we can make better Focus music by ensuring that distracting moments don't appear. We have actively pursued several strategies based on saliency-modeling work and ideas about Bayesian surprise (Kayser et al., 2005; Tsuchida & Cottrell, 2012). The result is an in-house system to reduce salience, distinguishing our music from most other music, which, made as entertainment, is meant to grab your attention. Our system ensures that musical structure is free of gaps, breaks, or sharp changes likely to cause distraction: the textural density of the sound (e.g., the type and number of instruments) is not permitted to change too suddenly, and many additional measures are taken to ensure that attention-grabbing elements are subdued or removed. A sketch of one such measure appears at the end of this section.

The two lines of work above contribute to the effectiveness of our current Focus music, but many other lines of work are progressing within Brain.fm. Some that have already influenced our music:

- Driving slow-wave activity during sleep
- Habituation (e.g., if music is too fully ignored, does it lose effectiveness?)
- Spatial location and movement (e.g., can auditory location manipulate visual attention?)
- Familiarity (e.g., does one's personal history with sounds make a big difference?)

In each of these cases there remains much more to learn, but we have already found ways to use what we know to improve functional music.
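As the sketch promised above: one generic way to flag abrupt, potentially distracting moments is spectral flux, a standard novelty measure. The frame sizes and threshold below are assumptions for illustration, and this is a textbook technique, not Brain.fm's in-house salience system.

```python
# Sketch: flagging abrupt spectral changes via half-wave-rectified
# spectral flux. Parameters are illustrative placeholders.
import numpy as np

def spectral_flux_flags(audio, sr, frame=2048, hop=512, z_thresh=3.0):
    """Return approximate times (s) where spectral change is unusually large."""
    n_frames = 1 + (len(audio) - frame) // hop
    window = np.hanning(frame)
    prev_mag = None
    flux = []
    for i in range(n_frames):
        seg = audio[i * hop : i * hop + frame] * window
        mag = np.abs(np.fft.rfft(seg))
        if prev_mag is not None:
            # Count only increases in energy (onsets, entrances).
            flux.append(np.sum(np.maximum(mag - prev_mag, 0.0)))
        prev_mag = mag
    flux = np.asarray(flux)
    # Flag frames whose flux is z_thresh standard deviations above the mean.
    z = (flux - flux.mean()) / (flux.std() + 1e-12)
    flagged = np.where(z > z_thresh)[0]
    return (flagged * hop + frame) / sr
```

A composition pass could then revise or smooth any region whose flagged times cluster together.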

How do we make the music?

Humans compose the musical content (the Art: melodies, harmonies, chord progressions, sound design, instrument choices, etc.). We have found no substitute for the talent of brilliant musicians in laying the foundation for a new piece of music. Then a patented algorithmic system (the A.I.) arranges these motifs over long timescales and adds the acoustic features that constitute our core technology (the Science: modulation to alter brain activity, 3D spatialization, salience reduction, etc.). Finally, compositions are assessed by human listeners in-house and tweaked or discarded if necessary, then tested via large-scale experiments to ensure they have the properties required to help listeners attain and sustain desired mental states over long periods of time.

How do we run experiments?

To test our ideas we rely on experiments measuring both behavior and the brain. We run large-scale behavioral experiments using innovative methods to ensure data quality (Woods et al., 2017). Easy access to good data allows us to run detailed and useful experiments. For example, we often test music specially generated to differ in only one aspect, so that behavioral differences can be attributed to that difference. This is a direct way to learn how sound features affect behavior.

We also want to understand how the brain produces these differences, so we use neuroimaging to look at brain activity in time (EEG) and space (fMRI). This often allows us to make distinctions that behavioral tests cannot, since neural activity may not rise to the level of behavior. For example, our experiments using beta-rate modulation for Focus music found that neural populations phase-locked (synchronized) to a much greater degree when given music with fast modulations and a particular modulating waveshape. Behavioral results suggested this music did help people over time, but the differences in the brain appeared first, were easier to see, and verified that the experimental manipulation (added modulation) was having a measurable impact on brain activity beyond auditory cortex.
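To illustrate the kind of analysis involved, here is a minimal sketch of inter-trial phase coherence (ITPC), one standard way to quantify phase-locking of EEG to a stimulus modulation rate. The epoch layout and frequency are assumptions for illustration; this is a generic measure, not necessarily Brain.fm's exact pipeline.

```python
# Sketch: inter-trial phase coherence (ITPC) at one frequency.
import numpy as np

def itpc_at_frequency(epochs, sr, freq_hz):
    """ITPC of one EEG channel at `freq_hz`.

    epochs: array of shape (n_trials, n_samples), time-locked to stimulus onset
    Returns a value in [0, 1]; 1 = perfect phase alignment across trials.
    """
    n_trials, n_samples = epochs.shape
    t = np.arange(n_samples) / sr
    # Project each trial onto a complex sinusoid to extract its phase.
    basis = np.exp(-2j * np.pi * freq_hz * t)
    coeffs = epochs @ basis            # one complex coefficient per trial
    phases = coeffs / np.abs(coeffs)   # unit phasors (phase only)
    return np.abs(phases.mean())

# Higher ITPC at, say, 16 Hz for modulated vs. unmodulated music would
# indicate neural populations synchronizing to the added modulation.
```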

Additional competitive advantages

The science-based, data-driven approach described above is Brain.fm's main advantage, but several other aspects of Brain.fm are unique and highly valuable:

- Algorithmic music generation (with a human in the loop to ensure aesthetic value) allows sounds to be created rapidly, efficiently, and to required specifications.

- Parameterization of the generative process allows tracks to be described with a relatively small set of numbers, constraining variables to a space of reasonable dimensionality.

- 3-D externalized sound: We use 3-D spatial audio techniques involving generalized head-related transfer functions (HRTFs) to create the illusion that sound is coming from your environment ("externalized sound") rather than from inside your head. For example, sounds can appear out in front of the listener, attracting attention toward a task in the real world at that location. (A sketch of the underlying convolution appears after this list.)

- Individualized content generation (under development): Brain.fm music could become more effective by exploiting user feedback to adjust the parameters of sound generation, so that with continued use each listener receives increasingly tailor-made stimulation. This kind of radical personalization is hard for art-music (radio, pop, etc.), but for functional music it is both useful and feasible. This feature is under development for 2019.

- Directly optimizing sound for effects on behavior: Most functional music services rely on pre-existing music. The individual tracks on those playlists were created by musicians unlikely to have optimized explicitly for effects on behavior; those tracks just happen to have functional qualities. Other services use natural sounds or white noise, which are not optimized for aesthetic value but are not optimized for any other function either. Brain.fm directly optimizes sound for the function at hand (working out, sleeping, etc.) while still creating great-sounding music.

- Acoustically unique music via patented production tools: Instead of relying on live musicians to create exactly the right sounds, we use purpose-built digital music systems (proprietary and patented) to shape the music and ensure it conforms to the acoustic specifications we intend. For example, our Focus music uses rapid modulations of 10-20 notes per second, which can be applied precisely only through a system like this. As a result, Brain.fm sounds and feels different from other music.
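As referenced in the 3-D sound item above, here is a minimal sketch of the convolution underlying HRTF-based spatialization. The impulse responses below are crude placeholders that only mimic interaural time and level differences; a real system would load measured generalized HRIRs for the desired azimuth and elevation.

```python
# Sketch: spatializing a mono source by convolving it with a pair of
# head-related impulse responses (HRIRs). Placeholder HRIRs only.
import numpy as np

def spatialize(mono, hrir_left, hrir_right):
    """Convolve a mono signal with left/right HRIRs -> stereo output."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)

# Placeholder HRIRs crudely mimicking a source off to the listener's right:
# the left ear hears a delayed, attenuated copy (illustration only).
sr = 44100
delay = int(0.0006 * sr)               # ~0.6 ms interaural time difference
hrir_right = np.zeros(64); hrir_right[0] = 1.0
hrir_left = np.zeros(64);  hrir_left[delay] = 0.7
stereo = spatialize(np.random.randn(sr), hrir_left, hrir_right)
```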

REFERENCES

Buzsaki, G., & Draguhn, A. Neuronal oscillations in cortical networks. Science, 304, 1926-1929 (2004).

Siegel, M., Donner, T. H., & Engel, A. K. Spectral fingerprints of large-scale neuronal interactions. Nature Reviews Neuroscience, 13, 121-134 (2012).

Albouy, P., et al. Selective entrainment of theta oscillations in the dorsal stream causally enhances auditory working memory performance. Neuron, 94(1) (2017).

Ngo, H. V. V., et al. Auditory closed-loop stimulation of the sleep slow oscillation enhances memory. Neuron, 78(3) (2013).

Luo, H., & Poeppel, D. Phase patterns of neuronal responses reliably discriminate speech in human auditory cortex. Neuron, 54(6), 1001-1010 (2007).

Doelling, K. B., & Poeppel, D. Cortical entrainment to music and its modulation by expertise. Proceedings of the National Academy of Sciences, 112(45) (2015).

Papalambros, N. A., et al. Acoustic enhancement of sleep slow oscillations and concomitant memory improvement in older adults. Frontiers in Human Neuroscience, 11 (2017).

Kayser, C., et al. Mechanisms for allocating auditory attention: an auditory saliency map. Current Biology, 15(21) (2005).

Tsuchida, T., & Cottrell, G. Auditory saliency using natural statistics. Proceedings of the Annual Meeting of the Cognitive Science Society, 34 (2012).

Woods, K. J. P., et al. Headphone screening to facilitate web-based auditory experiments. Attention, Perception, & Psychophysics, 79(7) (2017).