Starting January 2022 we will be hosting guest speakers on a range of topics relating to music, data, and the intersections between the two. Talks will take place online on the last Monday of the month (with one Thursday exception) in the afternoon, UK time. If you would like to receive updates on these talks, please do get in touch.
January 31 Renee Timmers & Elaine Chew (4pm UK time)
University of Sheffield & IRCAM
Renee Timmers’ current research projects investigate ensemble performance, in particular the visual and auditory nonverbal cues that musicians use to coordinate and communicate with each other during performance.
Elaine Chew’s research centers on the mathematical and computational modeling of musical structures, with a current focus on structures as they are communicated in performance and in ECG traces of cardiac arrhythmias.
March 3 (Thursday) Atau Tanaka
Goldsmiths University of London
Atau Tanaka conducts research in embodied musical interaction. This work takes place at the intersection of human-computer interaction and gestural computer music performance. He studies our encounters with sound, be they in music or in the everyday, as a form of phenomenological experience. This includes the use of physiological sensing technologies, notably muscle tension in the electromyogram signal, and machine learning analysis of this complex, organic data.
March 28 Blair Kaneshiro
My research focuses on using brain and behavioral responses to better understand how we perceive and engage with music, sound, and images. Other research interests include music information retrieval and interactions with music services; development and application of novel EEG analysis techniques; and promotion of reproducible and cross-disciplinary research through open-source software and datasets.
April 25 Anna Xambo
De Montfort University
I envision pushing the boundaries of technology, design, and experience towards more collaborative, egalitarian, and sustainable spaces: what I term intelligent computer-supported collaborative music everywhere. My mission is to do interdisciplinary research that embraces techniques and research methods from engineering, the social sciences, and the arts to create a new generation of interactive music systems for music performance and social interaction, in alignment with Computer-Supported Cooperative Work (CSCW) principles.
May 30 Jeremy Morris
University of Wisconsin-Madison
My research focuses on new media use in everyday life, specifically on the digitization of cultural goods (music, software, books, movies, etc.) and how these are then turned into commodified and sellable objects in various digital formats. My book, Selling Digital Music, Formatting Culture, focuses on the shared fate of the computing and music industries over the last two decades, and my recent co-edited collections examine apps (Appified, 2018) and podcasting (Saving New Sounds, 2021).
June 27 Psyche Loui
Psyche Loui’s research aims to understand the networks of brain structure and function that enable musical processes: auditory and multisensory perception, learning and memory of sound structure, sound production, and the human aesthetic and emotional response to sensory stimuli. Tools for this research include electrophysiology, structural and functional neuroimaging, noninvasive brain stimulation, and psychophysical and cognitive experiments.