Franziska Schroeder and Federico Reuben
forthcoming
Palle Dahlstedt
David Dolan and Oded Ben-Tal
The collaboration between pianist David Dolan and composer Oded Ben-Tal emerged out of the AHRC research network Datasounds, Datasets, and Datasense, which Dr. Ben-Tal leads. In line with the aims of the network, this collaboration creates a dialogue across disparate musical practices: between analogue and digital music tools, and between a performer-improviser and a composer-programmer. The AI system that Ben-Tal is developing for this improvisation project relies on machine listening to analyse Dolan’s improvisations and generate – in real time and as an integral part of the performance – musical responses that relate to different aspects of his playing.
A pioneering aspect of David Dolan’s artistic practice and research is his work on improvising in tonal musical environments. From the start of this collaboration we have been asking ourselves how the computer, even with developments in Artificial ‘Intelligence’ research, can interact live and in real time with tonal music. Rather than drawing on traditional music theory to shape the computer’s responses, the system Ben-Tal is developing is designed with minimal pre-determined constraints. There are no assumptions that the music will be pulsed or at a pre-determined tempo, and the conventions of tonal music are not encoded. The aim is a system that adapts its response to what it “hears”. Most of the moment-to-moment sound production is handled by this interactive system. During the performance, Ben-Tal adjusts control parameters, shaping the larger-scale aspects of the computer’s side of the dialogue.
The computer extracts audio features including pitch (more precisely a chromagram: c, c#, d, d#, etc., regardless of octave), onset information and timbral features. Chromagram information relates to harmony, but without the nuances and detail of voicing or the relation of bass and chord. Onsets are parsed to extract aspects of the rhythmic language Dolan is using at that moment (a rhythmic rather than metric analysis: the computer does not try to infer tempo or meter at this point). In the next step the computer tries to make musical inferences from these basic data points and responds – by generating new material and by manipulating the incoming audio signal.
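By way of illustration, the short sketch below shows how listening features of this kind (chromagram, onset times, a rough timbral descriptor) might be computed offline with the librosa Python library. This is only an assumed approximation for readers curious about the feature types, not the real-time system used in the performance; the file name and specific feature choices are illustrative.

```python
# Illustrative offline sketch of machine-listening features similar to those
# described above: pitch-class content (chromagram), onset times, and a basic
# timbral descriptor. Not the authors' real-time performance system.
import librosa
import numpy as np

# Hypothetical recording of a piano improvisation
y, sr = librosa.load("piano_improvisation.wav")

# Pitch-class content regardless of octave (12 x frames chromagram)
chroma = librosa.feature.chroma_stft(y=y, sr=sr)

# Onset times in seconds; inter-onset intervals give a rhythmic (not metric)
# picture of the playing - no tempo or meter is inferred here
onsets = librosa.onset.onset_detect(y=y, sr=sr, units="time")
iois = np.diff(onsets)

# A rough timbral feature: spectral centroid ("brightness") per frame
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)

print("mean pitch-class profile:", chroma.mean(axis=1))
print("first inter-onset intervals (s):", iois[:10])
print("mean spectral centroid (Hz):", float(centroid.mean()))
```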
Oded Ben-Tal is a composer and researcher working at the intersection of music, computing, and cognition. His compositions range from traditionally notated pieces to interactive works for performers and real-time electronics, as well as multimedia work. In recent years he has been particularly interested in applying cutting-edge computational tools in music, including machine learning and machine listening. Since 2016 he has been researching music AI in collaboration with Dr. Bob Sturm, work that has been supported by two AHRC grants. He is currently leading an AHRC research network – Datasounds, Datasets, and Datasense – examining new data-rich technologies for music creation more broadly. His music is regularly performed in the UK and internationally (Covid permitting). Oded is a senior lecturer in the Department of Performing Arts, Kingston University, London.
Professor David Dolan, an international concert pianist, researcher and educator, has devoted part of his career to the revival of the art of classical improvisation and its applications in performance. In his worldwide solo and chamber music performances, he returns to the tradition of incorporating extemporisation within repertoire: embellished repeats, eingangs and cadenzas, as well as improvised preludes, interludes and fantasies.
His research focuses on finding and applying expressive narrative and creativity in both repertoire and improvised performance, in solo and ensemble settings.
Yehudi Menuhin’s response to his CD, “When Interpretation and Improvisation Get Together”, was: “David Dolan is giving new life to classical music.” David is professor of classical improvisation and its application to solo and chamber music performance at the Guildhall School of Music and Drama in London, where he heads the Centre for Creative Performance and Classical Improvisation. He also teaches at the Yehudi Menuhin School, and has conducted masterclasses and workshops in major music centres and festivals worldwide.
Marc Leman – Music interaction poses challenges for interactive music and AI applications
In this talk I will focus on music interaction and show how it may challenge current research in music and AI. First, I will introduce three core theoretical concepts of modern musicology, which support an empirical/computational approach to scientific research on music interaction. These concepts are: (1) embodiment, (2) predictive processing, and (3) expression. Together they are fundamental to our understanding of how people interact with, understand, and experience music. For example, when you dance (or gesture) to music, you act as if you generate the music, creating a motor-emotional empowering effect. Can music and AI deal with this type of human embodiment? Next, I will discuss the role of monitoring with motion capture, EEG, and AR/VR/XR technology as components of machine-based interactions. Monitoring, or machine sensing, requires the extraction of proper features of human musical gesturing, needed for decoding human intentions during musical interaction. Researchers often rely on insider knowledge about musical gestures, such as researchers who play the violin studying violin playing. How much insider knowledge does an AI-machine need to predict human intentions while interacting? Thirdly, I will discuss the concept of biofeedback, as it is probably the holy grail for future humanoid AI-based music applications. Biofeedback can be understood as an AI-machine response to monitoring the biological parameters of human performers. From interactive AI-machines, humans might expect human-like timing and human-like expression, used for building the theory of mind needed for interaction. Humans respond to, and express themselves through, time-critical goal-directed (gestural) behaviours and intentions. Can AI already offer such time-critical speed and flexible interaction? Clearly, biofeedback offers ways to break through the human action-perception cycle (e.g. through sensing the body), generating profound effects. I will show a very successful application in the domain of music and sports. But should we call a successfully engineered biofeedback device “AI”? And what about the surrogate worlds offered by AR/VR/XR technology, and their creative applications? Progress in empirical/computational musicology of the 21st century draws upon the capacity to understand music performance from the viewpoint of social human musical gesturing. Statistical methods, machine learning methods, and AI methods offer interesting components for tools that drive human creativity, forcing us to think about the essential components of human-music-machine interactions.
Marc Leman is Methusalem research professor in Systematic Musicology, director of IPEM (Institute for Psychoacoustics and Electronic Music) in the Department of Musicology, and former head of department. He founded the ASIL (Art and Science Interaction Lab) at the KROOK (inaugurated in 2017). His research is about embodied music interaction and the associated epistemological and methodological questions. He has published several monographs (e.g. “Embodied music cognition and mediation technology”, MIT Press, 2007; “The expressive moment”, MIT Press, 2016) and co-edited works (e.g. “Musical gestures: sound, movement, and meaning”, Routledge, 2010; “The Routledge companion to embodied music interaction”, 2017). He teaches courses in Music Psychology (BA III) and Music Interaction and Technology (MA) at UGent. He has been supervisor and co-supervisor of several large national and European projects, has consulted for the EU Commission, national research councils, institutions and projects, and has supervised more than 45 PhD students. He was editor-in-chief of the Journal of New Music Research and a member of the Scientific Advisory Committee (SAC) of Science Europe. Ongoing projects include, among others, Expressive Music Interaction: Methusalem II (2015–2024) and Conbots (ICT-EU, 2020–2023), about exoskeletons in music interaction. In 2015 he received the Ernest-John Solvay Prize, a five-yearly FWO Excellence Award in the Humanities.
Ana Clemente – Interdisciplinary cross-fertilization: From cognitive psychology to AI and back
Humans and AIs are cognitive systems, immanently evaluative and creative. In this realm, two central questions arise: How can AI systems contribute to understanding human cognition and creativity? Conversely, how can psychological knowledge inspire and inform further advancements in AI? In this talk, I will briefly discuss some examples of such mutualistic relationships. In particular, I will focus on computational models of human appreciation and applications of psychological science to AI.
Ana is a cognitive scientist and a professional musician. Since January 2022, she has been a postdoctoral researcher at the Institute of Neurosciences of the University of Barcelona and the Bellvitge Biomedical Research Institute. She is the recipient of the Barron Award (2022) from APA Division 10 (Society for the Psychology of Aesthetics, Creativity, and the Arts). Her research interests include (but are not limited to) intrinsic motivation, learning, predictive processing, reward, and creativity, with a particular and transdisciplinary focus on music, both as an enthralling cultural product and biological capability, and as a means to understand psychological processes and their underlying neural mechanisms. Her research therefore lies at the intersection of multiple fields, including cognitive psychology and AI, the topic of the present talk.