Starting January 2022 we will be hosting guest speakers on a range of topics relating to music, data, and the gaps between them. Talks will take place online once a month at 4pm UK time. If you would like to receive updates on these, please do get in touch.

Thursday, 3/11/22 – Atau Tanaka

Goldsmiths, University of London

Atau Tanaka conducts research in embodied musical interaction. This work takes place at the intersection of human computer interaction and gestural computer music performance. He studies our encounters with sound, be they in music or in the everyday, as a form of phenomenological experience. This includes the use of physiological sensing technologies, notably muscle tension in the electromyogram signal, and machine learning analysis of this complex, organic data.

Thursday, 1/12/22 – Ian Cross

Cambridge University

Ian Cross is Emeritus Professor of Music & Science and until 2021 was the Director of the Centre for Music and Science (CMS), which he founded. Ian taught undergraduate and graduate courses for the Faculty of Music and supervised a substantial number of graduate students. Research in the CMS investigates music from many different scientific perspectives, reflected in the wide range of publications by its past and present members. He is Editor-in-Chief of Music & Science, SAGE’s online Open Access journal published with SEMPRE, serves on the editorial advisory boards of numerous journals, and is a Trustee of SEMPRE, a Governor of the Music Therapy Charity and Chair of Trustees of KJV Community Choir.

Thursday, 5/1/23 – Johanna Devaney

Brooklyn College

Johanna is Assistant Professor of Music at Brooklyn College and the Graduate Center, CUNY. Her research seeks to understand the ways in which humans engage with music, particularly through performance, and how computers can be used to model and augment our understanding of this engagement. Her work draws on the disciplines of music, psychology, and computer science.

Thursday, 2/2/23 – Louise Harris

University of Glasgow

Louise Harris is an electronic and audiovisual composer, and Professor in Sonic and Audiovisual Practices at The University of Glasgow. She specialises in the creation and exploration of audiovisual relationships utilising electronic music, recorded sound and computer-generated visual environments. Louise’s work encompasses fixed media, live performance and large-scale installation pieces, with a recent research strand specifically addressing Expanded Audiovisual Formats (EAF). She recently completed work on Composing Audiovisually, a monograph on Audiovisual Composition, which was published by Routledge in July 2021.

Thursday, 2/3/23 – Alexander Refsum Jensenius

University of Oslo

Alexander Refsum Jensenius is Professor of Music Technology at the University of Oslo and Director of the RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion. His research focuses on why music makes us move, which he explores through empirical studies using different motion-sensing technologies. He also uses this analytical knowledge and these tools to create new music, with both traditional and very untraditional instruments. This work is presented in his newest book, “Sound Actions: Conceptualizing Musical Instruments” (MIT Press, 2022).

Thursday, 11/5/23 – Andrea Schiavio

University of York

Andrea received his PhD in Music from the University of Sheffield (2014), studying musical skill acquisition and development through the lens of embodied cognitive science. His professional interests lie in: (i) the role of action and interaction in musical experience, (ii) the psychology of musical creativity and education, (iii) the acquisition and development of musical skills, (iv) the links between perception, emotion, culture, and music cognition, (v) the philosophical foundations of music psychology, and (vi) the collaboration of science and humanities in music research.


31/1/22 Renee Timmers & Elaine Chew

University of Sheffield & IRCAM

Renee Timmers’ current research projects investigate ensemble performance, in particular the visual and auditory nonverbal cues musicians use to coordinate and communicate with each other during performance.

Elaine Chew’s research centers on the mathematical and computational modeling of musical structures, with present focus on structures as they are communicated in performance and in ECG traces of cardiac arrhythmias.

28/3/22 Blair Kaneshiro

Stanford University

My research focuses on using brain and behavioral responses to better understand how we perceive and engage with music, sound, and images. Other research interests include music information retrieval and interactions with music services; development and application of novel EEG analysis techniques; and promotion of reproducible and cross-disciplinary research through open-source software and datasets.

25/4/22 Anna Xambo

De Montfort University

I envision pushing the boundaries of technology, design, and experience towards more collaborative, egalitarian and sustainable spaces, what I term intelligent computer-supported collaborative music everywhere. My mission is to do interdisciplinary research that embraces techniques and research methods from engineering, social sciences, and the arts for creating a new generation of interactive music systems for music performance and social interaction in alignment with Computer-Supported Collaborative Work (CSCW) principles.

30/5/22 Jeremy Morris

University of Wisconsin-Madison

My research focuses on new media use in everyday life, specifically on the digitization of cultural goods (music, software, books, movies, etc.) and how these are then turned into commodified and sellable objects in various digital formats. My book, Selling Digital Music, Formatting Culture, focuses on the shared fate of the computing and music industries over the last two decades and my recent co-edited collections examine Apps (Appified, 2018) and Podcasting (Saving New Sounds, 2021).

27/6/22 Psyche Loui

Northeastern University

Psyche Loui’s research aims to understand the networks of brain structure and function that enable musical processes: auditory and multisensory perception, learning and memory of sound structure, sound production, and the human aesthetic and emotional response to sensory stimuli. Tools for this research include electrophysiology, structural and functional neuroimaging, noninvasive brain stimulation, and psychophysical and cognitive experiments.


Want to receive updates?