Workshops

Sign-up for all workshops is now closed, but you may sit in on a workshop if space is available.

Beyond Performance Recordings: Strategies for Capturing, Taxonomizing, and Preserving Live-Coded and Improvisatory Electronic Music Practices
Presenter: Hunter Ewen (Amper Music)
Time: Aug 5, 10:00-11:00
Place: Room 3.3 (Conference Room)
Max Attendees: 20

Description
What does it mean to “capture” a piece of music? Is a copy of a musical score enough to fully articulate and reproduce a performance—especially considering the broad and diverse approaches electronic and electroacoustic composers require? A performer could be composing onstage, building an instrument, developing an algorithm, or collaborating globally from their laptop. Where acoustic music often relies on a causal relationship between a note on a page and a particular sound in a listener’s ears, live-coded and improvisatory electronic music provides infinitely more options for performers and performer/composers. As such, we must rethink our understanding of what it means to “record” a performance.

More information:
http://www.hunterewen.com/main.html
Beyond Performance Recordings Abstract and Description.pdf

Controlling DC Motors and Solenoids for Kinetic Sound Art and Music Workshop
Presenter: Steven Kemper (Rutgers, The State University of New Jersey)
Time: Aug 5, 10:00-13:00
Place: Room 3.4 (Seminar Room)
Max Attendees: 15

Description
Combining computer music control techniques with the direct actuation of sounding materials in the physical world can produce exciting sonic results. While pioneering artists have used motors in their work for decades, advances in low-cost electronics and easy-to-use microcontroller platforms have made this technology more accessible to musicians and sound artists. This workshop will teach participants the basics of motor control for the purpose of creating kinetic sound art and music. Topics covered will include an introduction to DC motors and solenoids, as well as how to control them using the Arduino microcontroller platform and a computer. These technical concepts will be put into context through a discussion of contemporary kinetic sound art and robotic musical instruments. Participants will be able to explore the sonic possibilities of motor control themselves by composing short studies for a group performance at the end of the session.
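To give a flavor of the control side described above, here is a minimal host-side sketch (a generic illustration, not part of the workshop materials): it converts a rhythm pattern into timed solenoid trigger events of the kind one might send to an Arduino over a serial link. The `"S<pin> <ms>"` command format and pin number are hypothetical.

```python
PULSE_MS = 15  # how long to energize the solenoid; keeping this short avoids overheating the coil

def pattern_to_events(pattern, bpm, pin=9):
    """pattern: string of 'x' (strike) and '.' (rest), one character per 16th note.
    Returns a list of (time_ms, command) pairs, where each command is a
    hypothetical serial message like "S9 15" telling the Arduino to pulse pin 9
    for 15 ms."""
    step_ms = 60_000 / bpm / 4  # duration of one 16th note in milliseconds
    events = []
    for i, ch in enumerate(pattern):
        if ch == 'x':
            events.append((round(i * step_ms), f"S{pin} {PULSE_MS}"))
    return events

# One bar of 16th notes at 120 bpm (each step lasts 125 ms):
for t, cmd in pattern_to_events("x..x..x.x...x...", bpm=120):
    print(t, cmd)
```

On the Arduino side, each such command would map to a `digitalWrite` HIGH, a short delay, and a `digitalWrite` LOW on the driver transistor controlling the solenoid.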

More information:
Kemper_ICMC_Workshop.pdf

Workshop on Design Strategies for Audio-Haptic Composition
Presenter: Lauren Hayes (Arizona State University)
Time: Aug 5, 10:00-12:00
Place: Room 3.1 (Suchang Hall)
Max Attendees: 14

Description
By examining the relationships between sound and touch, new compositional and performance strategies start to emerge for practitioners using digital technologies. In this workshop we will explore why vibrotactile interfaces, which offer physical feedback to the performer, may be an important approach to addressing potential limitations of the physical dynamic systems currently used to mediate a digital performer’s control of various sorts of musical (and other) information. We will examine methods in which feedback is artificially introduced to the performer’s and audience’s bodies, offering different information about what is occurring in the sonic domain. We will explore mapping strategies, as well as the placement of vibration on the skin. Participants are encouraged to bring their own music, sounds, or instruments and experience how they feel in addition to how they sound.

More information:
https://www.pariesa.com/home/category/Sound%20and%20Touch

Soundcool: Smartphones, Tablets and Kinect for Collaborative Creation
Presenters: Jorge Sastre (Universitat Politècnica de València), Roger Dannenberg (CMU)
Time: Aug 5, 11:30-13:00
Place: Room 3.5 (Lecture Room)
Max Attendees: 20 (age 10-99)

Description
We will lead children and adults in generating a creative work using the Soundcool system, a free system designed for young people to work with electroacoustic music and, since version 3, video. The workshop will be built around HoloSound, a performance prepared for the MarketLab of Sonar+D at the Sonar Electronic Music Festival in Barcelona (Spain). Sound creation will be preceded by listening to the different VST instruments with envelopes; real-time processing of instruments, voices, or percussion with Soundcool and VST effects; and samples and loops created with Soundcool. All of these will then be used in a performance with the Soundcool system. We will introduce the basic modules of Soundcool, such as Player, SamplePlayer, VST host, Keyboard, and Mixer, and learn how to control modules with apps on iOS and Android (please bring a mobile phone or tablet if you have one). Finally, all workshop participants will perform a real-time creation together, and we will discuss concepts and education with Soundcool.

More information:
http://soundcool.org/

Making Musebots @ ICMC: A Workshop
Presenter: Arne Eigenfeldt (Simon Fraser University)
Time: Aug 5, 14:00-18:00
Place: Room 3.5 (Lecture Room)
Max Attendees: 20

Description
This four-hour workshop will introduce Musebots, a specification and set of tools for collaboratively creating networked generative music agents. The workshop will introduce the concept of Musebots and present existing examples, as well as the latest web-based version, which uses WebRTC and WebAudio. The second half of the workshop will involve creating and/or adapting Musebot templates, so the workshop is aimed at ICMC participants who are familiar with Javascript (for the web-based Musebots), MaxMSP, or Max4Live. Musebot templates also exist in PD, Java, Extempore, and SuperCollider; however, these platforms are unfamiliar to the presenter (come, but I can’t support you!).

** Participants are required to bring their own laptops and ethernet adapter! Ethernet cables will be provided. WiFi is possible, but less reliable. **

More information:
2018 musebots: http://musebots.weebly.com/info.html

Music for Meditation and Meditation for Music
Presenter: Kim Je Chang (Academy Of Meditation Arts)
Time: Aug 5, 14:00-18:00
Place: Room 1.1 (Lecture Room)
Max Attendees: 30

Description
The full title of this workshop is “An Experiment to Find a Technique to Improve the Cognizing Power of a Meditator by Releasing the Congestions Inside Bodily Organs with the Help of Sound Resonance Phenomena.” In this four-hour workshop, after a short lecture, we will practice one hour of seated meditation together (30 minutes of short, shallow breathing followed by 30 minutes of observing painful bodily sensations). After the meditation, we will try to find the exact frequency of the congestions in the bodily organs of a selected meditator with the help of AI technology, and then try to reproduce that exact frequency. We believe this experiment can release a meditator’s congestions quickly and help him or her reach a deep meditative stage very rapidly. Through the experiment we will try to derive a formula for this process of releasing congestions; if that proves possible, we hope the technique will help the many meditators who suffer from stagnant energy inside their bodily organs. In this workshop, musicians may also discover a new role in society: helping professional meditators by releasing congestions inside bodily organs with the help of resonance phenomena. In addition, if we can arrange the sounds produced by this process properly, we hope to create a new style of music with artistic value.

More information:
Music for Meditation and Meditation for Music.pdf

Live Coding with Csound
Presenter: Steven Yi (Rochester Institute of Technology)
Time: Aug 5, 14:00-18:00
Place: Room 3.3 (Conference Room)
Max Attendees: 30

Description
This workshop will introduce users to live coding techniques and practices using the Csound sound and music computing system. Attendees will work through a series of practice-based exercises to explore live coding with Csound themselves.  We will use the presenter’s csound-live-code project to explore various approaches to sound design and real time event generation. We will explore topics such as: metronomes and time; hexadecimal notation for percussion writing; score generation using realtime, callback, and event-time (i.e., temporal recursion) coding approaches; and more. Modern Csound 6 syntax and practices will be used for the workshop.

The target audience for this workshop includes those new to live coding and/or language-based systems; those looking to employ live coding as part of their composition workflow; and those seeking to perform music live with code.  No prior knowledge of Csound is necessary.
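One of the topics listed above, hexadecimal notation for percussion writing, can be illustrated outside Csound. The sketch below is a generic Python rendering of the idea (an assumption about how such notation can work, not the csound-live-code implementation): each hex digit encodes four 16th-note steps, one bit per step, so a short hex string compactly describes a drum pattern.

```python
def hexbeat(pattern):
    """Expand a hex string into a list of 0/1 trigger flags,
    four 16th-note steps per hex digit, most significant bit first."""
    steps = []
    for digit in pattern:
        bits = int(digit, 16)
        steps.extend((bits >> shift) & 1 for shift in (3, 2, 1, 0))
    return steps

print(hexbeat("a0"))  # -> [1, 0, 1, 0, 0, 0, 0, 0]
```

In a live-coding context, a pattern like `"f0f0"` would then be read one step per metronome tick, with a `1` triggering a percussion event.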

More information:
http://kunstmusik.com/
ICMC2018_Workshop_Csound_Live_Code.pdf

Collaborate on Building Improvisatory Interactive Realtime Works in Max/MSP
Presenters: Esther Lamneck (NYU) and Cort Lippe (University at Buffalo, New York)
Time: Aug 5, 14:00-18:00
Place: Room 3.1 (Suchang Hall)
Max Attendees: 20

Description
The workshop will be dedicated to building collaborative pieces which explore Esther’s use of the rich sonic material of the Hungarian Tárogató in improvisatory live electronic environments. Composers/Sound designers and/or visual designers are invited to submit material they would like to develop during the workshop consisting of Max/MSP/Jitter patches along with musical sketches/ideas destined for live electronic art environments.  Participants will be encouraged to collaborate during the workshop.

The short-term goal will be to create etudes/sketches during the workshop, and the long-term goal will be to identify and begin collaborations with Esther for subsequent performances by her or the New Music Ensemble at New York University, which Esther directs. Materials (no notated scores) along with any questions can be sent to Esther Lamneck at el2@nyu.edu.

We will look at various real-time analysis tools, with the goal of taking information from a performance and mapping this data to audio and visual control, allowing performers to influence musical and visual parameters in an improvisatory interactive environment.
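The mapping step described above is often just a matter of rescaling an analysis feature into a synthesis parameter's range. The following is a generic Python sketch of such a mapping (an illustration only, not the presenters' Max/MSP patches); the feature and parameter names are hypothetical.

```python
def map_range(x, in_lo, in_hi, out_lo, out_hi, clamp=True):
    """Linearly map x from the range [in_lo, in_hi] to [out_lo, out_hi],
    optionally clamping so out-of-range input never overshoots the target."""
    t = (x - in_lo) / (in_hi - in_lo)
    if clamp:
        t = max(0.0, min(1.0, t))
    return out_lo + t * (out_hi - out_lo)

# e.g. map an estimated tarogato pitch (Hz) to a filter cutoff (Hz):
cutoff = map_range(440.0, 100.0, 1000.0, 200.0, 5000.0)
```

In Max/MSP the equivalent one-liner is the `scale` object; the point of explicit mapping functions is that curves, clamping, and smoothing can be tuned per parameter.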

More information:
https://steinhardt.nyu.edu/faculty/Esther_Lamneck
https://www.cortlippe.com/

Improvisation and Voice
Presenter: Paul Botelho (Bucknell University)
Time: Aug 5, 14:00-16:00
Place: Room 3.4 (Seminar Room)
Max Attendees: 17

Description
The workshop will investigate the extended voice, computer/electronic performance, and improvisation through listening, mindfulness, and practice. The voice, as an instrument, will be examined with a focus on the use of extended techniques as a catalyst for improvisation. The use of computers and electronics as transformative and provocative tools will be explored, and their role as performing instruments in the context of improvisation will also be investigated.

Performance will be a central component of the workshop with hands-on engagement by participants. Listening exercises will act as a foundation bringing participants into the mental space necessary for successful improvisatory performance.

More information:
ICMC 2018 – Botelho – Improvisation and Voice Workshop.pdf

Immersive Audio: Key to Next-Generation Media Content
Presenter: Devin Choi (Sonictier)
Time: Aug 7, 12:00-13:30  / Aug 8, 12:00-13:30
Place: Room 3.1 (Suchang Hall)
Max Attendees: 30

Description
‘Forget surround sound. Immersive audio is the revolutionary solution for the media content of the future. Period.’ This workshop will lead you to a whole new level of sound by suggesting different immersive audio solutions for different kinds of media. Despite the rapid growth of multichannel audio systems in cinemas and homes, music has mostly been produced in stereo. From binaural to Ambisonics, applying immersive audio technology to computer music may entirely change the way we listen to music. With the help of audio plugins and DAWs that support immersive audio, musicians can create a workflow that truly moves listeners. Attendees will experience genuine immersive audio content and get to know the concept of NGA (Next Generation Audio) and its trends in the industry.

More information:
http://sonictier.com/

HASGS: Composing for a Hybrid Augmented Saxophone of Gestural Symbiosis (Canceled)
Presenter: Henrique Portovedo (CITAR, School of the Arts at Portuguese Catholic University)
Time: Aug 10, 12:00-14:00
Place: Room 3.1 (Suchang Hall)
Max Attendees: 50

Description
This project is part of the research, designated Multidimensionality of Contemporary Performance, driven by the saxophonist and sound designer Henrique Portovedo. Starting as an exploratory artistic project, the conception and development of the HASGS (Hybrid Augmented System of Gestural Symbiosis) for saxophone became a research project as well. The project has been developed at the Portuguese Catholic University, the University of California Santa Barbara, ZKM Karlsruhe, and McGill University Montreal, with insights from researchers such as Henrique Portovedo, Paulo Ferreira Lopes, Ricardo Mendes, Curtis Roads, Clarence Barlow, and Marcelo Wanderley. In this workshop we will explore techniques and approaches to composition, taking as a starting point some of the pieces already composed for the instrument.

More information:
https://www.henriqueportovedo.com/
HASGS, Composing for an Hybrid Augmented Saxophone of Gestural Symbiosis.pdf
HASGS Live at Malloca Saxfest2017.mov