IRCAM welcomes several installations selected by the NIME 06 jury. Improvisers, performers, instrument inventors, and experimenters in digital technology, these artists push the limits of what is planned and what is composed, making "interaction" their principal credo.
Acousmeaucorps (pronounced "acous-mo-cor") is an interactive sound installation that creates an acousmatic body space using a video camera, a computer, and four speakers.
The video camera (situated above the space and facing downward) is connected to a computer running Max/MSP/SoftVNS which uses movement and position data to generate spatialized sound. The human body thus becomes a performance instrument, generating and triggering sounds which build musical sequences through walking, running, making arm movements, or even just flexing one's fingers.
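The installation's actual Max/MSP/SoftVNS patch is not published; as a rough illustration only, the pipeline described above (overhead camera → movement and position data → spatialized sound over four speakers) could be sketched like this, with all thresholds and mappings invented for the example:

```python
# Hypothetical sketch of the Acousmeaucorps pipeline (not the actual patch):
# frame differencing on an overhead camera image yields a motion centroid and
# an activity level, which drive four corner-speaker gains.

def motion_analysis(prev_frame, frame, thresh=30):
    """Compare two grayscale frames (lists of rows of 0-255 ints).
    Return (cx, cy, activity): the centroid of changed pixels, normalized
    to 0..1, and the fraction of pixels that changed."""
    h, w = len(frame), len(frame[0])
    xs = ys = count = 0
    for y in range(h):
        for x in range(w):
            if abs(frame[y][x] - prev_frame[y][x]) > thresh:
                xs += x; ys += y; count += 1
    if count == 0:
        return 0.5, 0.5, 0.0
    return xs / count / (w - 1), ys / count / (h - 1), count / (w * h)

def speaker_gains(cx, cy):
    """Bilinear amplitude pan of one source over four corner speakers
    (front-left, front-right, rear-left, rear-right); gains sum to 1."""
    return [(1 - cx) * (1 - cy), cx * (1 - cy), (1 - cx) * cy, cx * cy]
```

A visitor walking toward a corner would thus pull the sound toward that corner's speaker, while the `activity` value could scale overall loudness or trigger sample playback.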
For the current version of Acousmeaucorps, the sounds are played on two levels. One is a resonant "mass" that seems to move like water in a wading pool, favoring different pitches depending on the area of movement. The other is the triggering of different "found objects" that seem to jump out of very specific locations within the space. Together the sound types encourage people to shed their inhibitions and enjoy searching with movements that become fluid and questioning, the idea of "body" and "space" taking on new significance. The visitors, moving freely within this space defined by the four corner speakers, become the creators and performers of their own simultaneous music and dance.
With the help of La Grande Fabrique, Dieppe
Musical interface using floating balloons.
The source sound for this work is taken from ping-pong balls rolling on an electric fan. The movement of the floating balloons then processes the sound. A video camera and a computer running Max/MSP/Jitter track the colors and the movements of the balloons. Each color creates its own effect, and the performer can control the movement by moving their hands and changing the wind from the electric fan.
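How the Max/MSP/Jitter patch maps each balloon's color and motion to its effect is not specified; a minimal sketch of one plausible approach, with the color matching and the speed-to-depth mapping both assumed for illustration, might look like this:

```python
# Illustrative color tracking (not the artists' implementation): each balloon
# color gets its own mask, and the speed of that mask's centroid drives the
# depth of the effect assigned to that color.

def track_color(frame, target, tol=40):
    """frame: list of rows of (r, g, b) tuples. Return the centroid (x, y)
    of pixels within `tol` of `target` per channel, or None if absent."""
    xs = ys = n = 0
    for y, row in enumerate(frame):
        for x, (r, g, b) in enumerate(row):
            if all(abs(c - t) <= tol for c, t in zip((r, g, b), target)):
                xs += x; ys += y; n += 1
    return (xs / n, ys / n) if n else None

def effect_depth(prev_pos, pos, scale=0.1):
    """Map a balloon's speed (pixels/frame) to a 0..1 effect depth."""
    if prev_pos is None or pos is None:
        return 0.0
    dx, dy = pos[0] - prev_pos[0], pos[1] - prev_pos[1]
    return min(1.0, scale * (dx * dx + dy * dy) ** 0.5)
```

Running one such tracker per balloon color keeps each color's effect independent, as the description requires.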
Transformation of a 250-year-old loom into a sound and image instrument. The work was created in the context of northern France, where the loom has played a very important role in the region's economic development and recession during the last century.
Its mechanical motion, its sounds, and the flow of the threads evoke not only an industrial past but also a whole set of collective emotions that range from poetic to distressing, depending on the person. It is in a way comparable to Japanese haiku, where a minimum of words and syllables can generate a magnificent array of images.
At the same time, the loom is also of remarkable global significance to the history of new media art. Art historian Lev Manovich cites a remark made by Ada Augusta, the first computer programmer, who said: "the Analytical Engine weaves algebraical patterns just as the Jacquard loom weaves flowers and leaves...". The connection between the Jacquard loom and the Analytical Engine is not something historians of computers make much of, since, for them, computer image synthesis represents just one application of the modern digital computer out of thousands. But for a historian of new media it is full of significance. Here, this readymade from 250 years ago speaks individually to each spectator, each one having the freedom to make his own connections with local or global history and to weave a unique soundscape based on his experience with the work.
"The submitted work has won the unanimous approval of the jury, which recognizes the presented object and its magnificent poetic dimension (similar to the poetic effects of Japanese Haikus). The project is unanimously commended by the jury." Le Fresnoy–Studio National, 2005.
Electronic Engineer Francis Bras (InterfaceZ) | Studio Decors Jean-Pierre Courcelle | Video Capture Concept Jean-Baptiste Droulers | Production Manager Isabelle Bohnke | Artistic Adviser David Link, Andrea Cera, Christophe Kihm, Madeleine Van Doren.
Special Thanks to Richard Campagne, Matthieu Chéreau, Alain Fleischer, José Honoré (Musée du Jacquard), Joelle Pijaudier, Pascal Pronnier, Blandine Tourneux.
A production of Le Fresnoy, Studio national des arts contemporains, Tourcoing.
16:9 is an intercreative sound installation. Using a wireless, portable, and easy-to-use interface, sounds can be mixed and freely spatialized on a speaker canvas of several square meters by painting with colors on a handheld touch screen. Through the specially developed interface, the virtual painter is able to move freely in front of the canvas to interact with the acoustic painting in a tactile and playful way. 16:9 offers a projection screen for a user-generated audio painting.
Sixty-four independent speakers installed as a pattern in a large white field comprise the projection screen. Adapted to the architecture of the space, the speaker matrix is designed to be hung on a wall like a painting. The array of speakers offers precise placement and radiation of sound. Realistic depth effects as well as great freedom in sound positioning are easily attainable and form the basis for the intercreative audio painting. In 16:9, urban sounds are used to produce the different sound/color textures in real-time: while exploring the city in which the installation is shown, the composer seeks out unique sounds that are characteristic of that urban space. From these recordings, the composer creates sound/colors that later form the basis of a synesthetic virtual painting. By changing the character of these sounds or colors over several degrees of abstraction, it is possible to experience a synesthetic urban situation.
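The installation's own software is not described in detail; as a sketch only, one plausible way to place a painted touch-screen point onto a 64-speaker matrix (here assumed to be an 8×8 grid) is bilinear amplitude distribution over the four speakers surrounding the point:

```python
# Hypothetical mapping (not the actual 16:9 software) from a normalized
# touch-screen coordinate to per-speaker gains on an 8x8 matrix.

def paint_to_speakers(u, v, cols=8, rows=8):
    """u, v in 0..1 (touch-screen coordinates). Return a dict
    {(col, row): gain} for the up-to-four nearest speakers; gains sum to 1."""
    x, y = u * (cols - 1), v * (rows - 1)
    c0, r0 = int(x), int(y)
    fx, fy = x - c0, y - r0
    gains = {}
    for dc, dr, w in [(0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
                      (0, 1, (1 - fx) * fy), (1, 1, fx * fy)]:
        c, r = c0 + dc, r0 + dr
        if w > 0 and c < cols and r < rows:
            gains[(c, r)] = w
    return gains
```

Because neighboring speakers share the energy of one painted point, a stroke across the screen would read as a sound smoothly traversing the wall, which matches the "precise placement" and depth effects the text describes.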
In 16:9 the visitor is integrated into the installation as the source of action. Without action, no sound appears on the canvas. Because the decision of how to mix and place a color rests with the visitor, the result will always be a unique interpretation. The installation no longer only represents an idea thought out by the artist. In a creative process, the painter is able to project his own thoughts into the installation.
Sponsored by Apogee, der Senatsverwaltung für Wissenschaft, Forschung und Kultur, Wolf and Elisabeth Teige, Dipl. Ing. Folkmar Hein, Thomas Seelig, Michael Hoeldke, and Dipl. Ing. Manfred Fox. Very special thanks to our producer Catherine Mahoney in New York, Dipl. Ing. Osswald Krienke from Digital Audio Service in Berlin, Hanjo Scharfenberg/ Galerie Rachel Haferkamp in Cologne and Apple Computers.
The focal point of this installation lies in the relationship between the visitors and an abstract audio visual world, the intersection between the space of a person in a room and an imaginary geometrical and acoustical space.
Processes: an abstract world is created in real-time by means of generative drawing and electronic sounds. The evolution of the visual and acoustical processes depends on interaction with the visitors but is also partly autonomous, using information gathered from various sources on the net. The images represent an imaginary space in which relationships between entities are visible. It is a shifting scenery with a strong graphical look, like a painted film.
Interaction: visitors can interact directly with the process by moving within the installation. The interactive system observes the exhibition space with a camera and extracts information from the visitors' movements and behavior to control sonic and graphical processes.
Media: two layers of media appear in the installation: one consists of prepared footage and audio-recordings from urban and architectural spaces. The other is images and sounds from the actual gallery space. These are mixed, modified, and projected as fragments and textures into the imaginary space.
Space: the installation space is modified in such a way as to guide the visitor's attention from the actual to the imaginary space. Two images - one large, one small - are present, as well as an immersive surround sound system. A semitransparent screen divides the space, creating a path, which the visitor can take, from an "outside" with a single view to an "inside" where all elements of the installation are present.
Interactive installation that enables a synchronous expression of sound and light.
Audiences can simultaneously control sound and a three-dimensional light object, which appears in a cylindrical display. By moving a handheld control device in the air, audience members can experience a harmonious expression of sound and light. When an audience member swings his arm up, the light object and sound emerge in space; when he swings his arm up again, the light object vanishes without a trace.
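The handheld device's internals are not described; assuming it reports something like vertical acceleration, the swing-up toggle described above could be sketched as a simple threshold with a refractory period (all values invented for illustration):

```python
# Hypothetical gesture logic for the swing-up toggle: an upward-acceleration
# threshold flips the light object and its sound on or off, and a short
# refractory period prevents one swing from registering twice.

class SwingToggle:
    def __init__(self, threshold=1.5, refractory=10):
        self.threshold = threshold    # upward acceleration that counts as a swing
        self.refractory = refractory  # samples to ignore after a toggle
        self.cooldown = 0
        self.active = False           # is the light object currently shown?

    def update(self, accel_up):
        """Feed one accelerometer sample; return the current on/off state."""
        if self.cooldown > 0:
            self.cooldown -= 1
        elif accel_up > self.threshold:
            self.active = not self.active
            self.cooldown = self.refractory
        return self.active
```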
The Audio Shaker explores our perceptual understanding of sound. Anything sung, spoken, clapped, whistled, or played near it is trapped inside, where it takes on an imagined yet tangible physicality.
Sounds caught in this void are transformed, given weight and permanence, reacting directly to the shaker's movements, subtle or violent. Shaken sounds have to settle down before becoming still and silent, behaving more like fluid than transient energy.
The linear timescale of sound is broken, a conversation is split into words and mixed up in the shaker, and can be poured out separately, tipped out in a simultaneous splash or added to and shaken up further.
Put simply, it is a tactile container to capture, shake up, and pour out sounds, creating a rich, intuitive experience that is purposefully open to interpretation and imagination.
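The artists' implementation is not documented here; purely as an illustration of the behavior described above (a conversation split into words, mixed up by shaking, poured out separately), a captured signal could be segmented at silences and shuffled:

```python
# Illustrative sketch (not the Audio Shaker's actual software): split a
# captured signal into non-silent segments ("words"), then let shaking
# mix the segments out of their original order.

import random

def split_at_silences(samples, silence=0.05):
    """Split a list of amplitude values into non-silent segments."""
    segments, current = [], []
    for s in samples:
        if abs(s) > silence:
            current.append(s)
        elif current:
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments

def shake(segments, rng=random):
    """Shaking mixes the trapped sounds; pouring then emits them one by one."""
    mixed = list(segments)
    rng.shuffle(mixed)
    return mixed
```

Pouring "in a simultaneous splash" would simply sum the shuffled segments instead of emitting them sequentially.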
Sonobotanics is a still largely unknown science; it studies plants whose life experience is predominantly in the auditory domain.
Institute for Predictive Sonobotanics
Foundation for the Auralisation and Computation of Transient Objects
Since the 1970s, Dr. Hortensia Audactor has carried out the core research in this area. Despite difficulties encountered in the publication of her results, she has collected a substantial body of research about the growth patterns, communication behavior, and other characteristics of these plants.
Recently, the field of Predictive Sonobotanics was founded; it attempts to create models of these plants in order to predict the behavior of sonobotanic plants and to gain a deeper understanding of its subtleties.
In the exhibition, models of the Periperceptoida Dendriformis Sensibilis and the Periperceptoida Dendriformis Imaginaris are presented.
Immersive interactive environment that makes use of small spherical electronic interfaces in an audio installation.
Transduction.2 happens in a space where five balls (the spherical interfaces) lie spread about here and there on the floor. The visitors just have to pick them up to trigger spatial sound diffusion and to interact with Transduction.2's immersive environment. By manipulating the interfaces, users trigger the creation and the distribution of sounds.
Sounds in Transduction.2 are only produced when an interface is being moved. The distribution of the sounds is determined by the position of the interfaces in the space. The sounds "follow" the interactors as they move in the space. The system also allows users to "throw" sound from one speaker to another by simply making a movement to throw the spheres toward the chosen speaker. Interactors can control the volume of the sound according to their position in the space.
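The actual mapping software is not described; under an assumed room geometry, the position-based distribution and the "throw" gesture described above could be sketched as follows, with the speaker layout and rolloff invented for the example:

```python
# Hypothetical sketch of Transduction.2's sound distribution: the LPS gives
# each ball an (x, y) position; each speaker's gain falls off with distance
# to the ball, so the sound "follows" the interface around the space.

import math

SPEAKERS = [(0, 0), (10, 0), (0, 10), (10, 10)]  # assumed room corners, meters

def follow_gains(ball_pos, speakers=SPEAKERS, rolloff=1.0):
    """Gains inversely proportional to distance, normalized to sum to 1."""
    raw = [1.0 / (rolloff + math.dist(ball_pos, s)) for s in speakers]
    total = sum(raw)
    return [g / total for g in raw]

def throw_target(ball_pos, velocity, speakers=SPEAKERS):
    """Pick the speaker index best aligned with the throw direction."""
    def alignment(s):
        dx, dy = s[0] - ball_pos[0], s[1] - ball_pos[1]
        return dx * velocity[0] + dy * velocity[1]
    return max(range(len(speakers)), key=lambda i: alignment(speakers[i]))
```

On a real throw, the gesture's direction would come from the interface's motion just before release; here it is passed in directly as a velocity vector.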
Transduction.2 uses three types of sound: sound samples, sound produced by algorithms, and live sound introduced into the system by the interactors. They are sounds from natural elements (waves, wind, rocks, seagulls), from human activities (laughter, footsteps, kids on a swing), and from objects (gears, rolling trains, paper folding). Interactors have the possibility of introducing sounds into the system via a miniaturized microphone inserted into one of the interfaces. This allows interactors to input spoken words, breathing, music, rhythms, or any other sound proposition. By moving the interface, they modify the sound they have input in real-time. These modifications are created using the same structure as the one used for the sound algorithms. The input of live sounds allows users to influence Transduction.2's sound environment in a very personal way and opens the way to the creation of unique sound performances.
Ubisense, a company based in Cambridge (England) and Denver (USA), developed the Local Positioning System (LPS) used in Transduction.2. For more than two years, the creator of Transduction.2 has been a part of their research network.
From Thursday, June 1 to Saturday, June 17, 11am-7pm / IRCAM