
The Cybersonica symposium is a gathering of some of the leading innovative musicians, sound artists, broadcasters and software and hardware developers working within the realms of contemporary sound art. There will be presentations and demonstrations of software and instruments, as well as the latest research.

The first day, introduced by John Eacott and Prof. Mark d'Inverno of the University of Westminster, focuses on the practice of sound art: live performance methods, algorithmic and generative composition, spatial sound, networked performance, and new instruments and interfaces. It includes a keynote presentation by Michel Waisvisz, composer/performer and head of the STEIM performance research studios in Holland.

Day two opens on the theme of interactive sound and audiences and includes a panel hosted by Dan Gardenfors of the Interactive Institute, Malmo. This is followed in the afternoon session by a discussion on copyright and the commerce of new sound media hosted by Dr. Richard Barbrook from the University of Westminster's Hypermedia Research Centre.

Thursday June 6th
Morning session, 10am - 1pm

Registration and coffee from 9.30am

Welcome by Prof. Mark d'Inverno
Introduced by John Eacott

The practice of sound art - new instruments, interfaces and networked performance

10.10 Mark d'Inverno and John Eacott - Intelligent Technologies and Generative Processes for Embedded Ambient Music

10.30 Fredrik Olofsson (Sweden) - Wanna form a virtual band?

10.50 Tom Betts - Pixelmap

11.10 Iliyana Nedkova - Sounds From Near and Far

11.30 Coffee

11.50 Walter Fabeck - Introduction to the Chromasone

12.20 Jonah Brucker-Cohen (USA) - Physical Web Interfaces

12.40 Andy Birtwistle - Insects, urine and flatulence: the radical potential of Mickey Mousing

 

Afternoon session, 2pm - 5pm
Introduced by Prof. Mark d'Inverno

Live performance methods, Algorithmic and generative processes

2.10 Nick Rothwell - cassiel - symbolic sequencing technology

2.30 Nick Collins - Interactive Evolution of Breakbeat Cut Sequences

2.50 Tim Sayer - software interfaces in improvised music

3.10 Matt Flax (Australia) - Digital Time Code System for non-linear music

3.30 Coffee

3.50 Thor - IXI Software

4.00 Marc Lafia (USA) - Algorithms and Allegories

4.10 Gabriella Braun* - ANTARCTIC WAVES

4.20 Michel Waisvisz

Keynote presentation - Michel Waisvisz

Michel Waisvisz is known for his highly physical, sensitive and ecstatic electronic music shows using The Hands, a gestural sensor instrument he developed at the STEIM foundation in Amsterdam. Since the late sixties Waisvisz has developed entirely new ways of achieving a physical touch with electronic music instruments, sometimes literally touching the electricity inside the instruments and thereby becoming a thinking component of the machine. He was one of the first to use synthesizers on stage and the first to develop and perform with what are now called gestural MIDI controllers. He is also the inventor of the CrackleBox and The Web.

Besides his solo performances he has collaborated with a great variety of musicians and composers: Laurie Anderson, Steve Lacy, DJ Spooky, Najib Cheradi, The Nieuw Ensemble, Willem Breuker, The San Francisco Symphony Orchestra, Maarten Altena and others. Waisvisz has been a co-founder of electrical sound festivals in Holland. The so-called Touch exhibitions, with electronic music instruments that can be played by the visitors, have travelled through Europe. He leads the STEIM foundation in Amsterdam, where performance artists from music, theatre, dance and new media art, as well as DJs and VJs, work to develop their personal electronic instruments.


 

Friday June 7th

Morning session, 10am - 1.30pm
Registration and coffee from 9.30am
Introduced by Anders-Petter Andersson

Interactive sound and audiences

10.10 Anders-Petter Andersson & Birgitta Cappelen, Interactive Institute, Malmo Sweden - From Key To Field

10.30 Dan Gardenfors (Sweden) - Designing sound-based computer games

10.50 Rikard Lundstedt (Sweden) - Sound Room Composer

11.10 Kathryn Best and Giles Rollestone* - Urban Feedback, interactive CD Rom

11.20 Stefan Cartwright - CUBOP

11.30 Coffee

11.50 Robin McGinley (UK) - Earth's Original 4 1/2 Billion Year Old Electronic Music Composition (A Work in Progress)

12.10 Alexandre Plennevaux (Belgium) - LAB[au] laboratory for architecture and urbanism

12.30 Matt Rogalsky (Canada) - radio silence

12.50 - 1.30 Panel session chaired by Anders-Petter Andersson

 

Afternoon session, 2.30 - 5pm
Introduced by Dr. Richard Barbrook.

2.30-2.50pm presentation by Charles Kriel - 'Noise and the Uncanny'

3.00-4.30pm panel discussion on copyright and commerce

4.40-5.00pm presentation by Lara Blazic and Sveta Bankerovic


Abstracts & Biographies

Anders-Petter Andersson, Interactive Institute, Malmo Sweden, Birgitta Cappelen, School of Arts and Communication, Malmo Sweden - Mufi (Mu -music Fi -field)
anders-petter@interactiveinstitute.se

In this paper we want to explore Field as a concept and as a metaphor, and how it changes our understanding of interactive systems. We want to show the change from a mindset where one uses and designs Keys and Controls to a mindset where one uses and designs Fields of potentialities. A Key is a tool to open a door. It is a button to push in order to control something, a musical instrument, or a way to make a choice. It is an absolute characterisation of music, seen in the dichotomy between Major and Minor. A Key is unambiguous. A Field is something we cultivate, where what we plant grows. It is an open area to walk through. It is an area of knowledge with as yet unidentified constraints. A Field is ambiguous. We live at a time when the network is a major material factor, after a postmodernism in which absolute references to values and meanings died. We therefore consider Field a more relevant concept than the functionalistic notions of Keys and Controls for understanding what we create and use.

If we look at ourselves as creators and explorers of Fields, the old functionalistic mindset must loosen its grip. The audiences become creators of the Field in similar ways to the composers. As composers we have to create potentialities, not finalised works to be performed in front of an audience. We create elements of an unlimited Field of potentials, nodes in a network - nodes that relate potentially to other nodes and in real time create the tune, realising the potential. This changes the meaning of linear composition, while still building on and transforming old principles of composition. We use the interactive installation Mufi (Mu - music, Fi - field) to exemplify our arguments.


Kathryn Best and Giles Rollestone - Urban Feedback - interactive CD-ROM

'Urban Feedback London Tokyo, Tokyo Nomad' CD-ROM captures and relays a sense of Tokyo through the relationship between time-based media and the subtleties of interaction. It forms part of a continuing series of city-based projects that reflect the fractured experience of the urban environment.

'Tokyo Nomad' is a reactive environment which, over time, reveals ambient views and impressions of Tokyo; the city you inhabit, or Tokyo the city you dream of visiting. 'Tokyo Nomad' reveals events and visual experiences over time, mapping time to place, mapping the real time of your computer clock to the experience of Tokyo as an abstract, imagined place. The 'time-based' engine contains sixteen different events, shifting and revealing in response to the phases: Morning, Afternoon, Evening and Night. Within each of the phases are four interactive themes: No Address, Without Words, Station and Balance. The events are layered, eroding over time to reveal glimpses and reflections of the visual complexity of Tokyo.
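The clock-to-phase mapping described above can be sketched as follows. The hour boundaries here are illustrative assumptions (the abstract names the phases and themes but not the boundaries), not details of the actual CD-ROM engine.

```python
from datetime import datetime

# The four phases named in the piece; the hour boundaries are assumed.
PHASES = {
    "Morning":   range(6, 12),
    "Afternoon": range(12, 18),
    "Evening":   range(18, 23),
    "Night":     list(range(23, 24)) + list(range(0, 6)),
}

# Four interactive themes per phase, as described: 4 x 4 = 16 events.
THEMES = ["No Address", "Without Words", "Station", "Balance"]

def current_phase(now=None):
    """Map the computer clock to one of the four phases."""
    hour = (now or datetime.now()).hour
    for phase, hours in PHASES.items():
        if hour in hours:
            return phase
    return "Night"

def current_events(now=None):
    """Return the four themed events active in the current phase."""
    phase = current_phase(now)
    return [f"{phase}/{theme}" for theme in THEMES]
```

Mapping the real clock to an imagined city in this way is what lets the piece "erode" over a day rather than play back on a fixed timeline.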

www.urbanfeedback.com


Tom Betts - Pixelmap
nullpointer@odessadesign.co.uk

'Pixelmap' allows generative music programs (such as Max and PD) to generate simple geometric patterns, via MIDI data, in the tradition of Norman McLaren's animations. It combines a live generative performance (all synthesis-based) with co-sequenced, interpreted generative images.

http://www.nullpointer.co.uk
http://www.dividebyzero.org
http://www.orphanrecords.co.uk


Andy Birtwistle - Insects, urine and flatulence: the radical potential of Mickey Mousing
ab27@cant.ac.uk

Contemporary computer technology now presents the audio-visual composer with the possibility of mixing what is in effect a live, improvised film. As an example of such a practice, this presentation will focus on Mick Grierson's audio-visual composition 'Remote Control' (2002). In many ways such work can be viewed within the context of synesthetic artistic traditions which find expression and theorization in the work of composers, painters, filmmakers and computer pioneers. However, I propose that the 'termite art' of Carl Stalling, House Composer for Warner Brothers cartoons between 1936 and 1958, is now becoming increasingly relevant to an understanding of the future possibilities of audio-visual composition. Digital technology allows the composer to fulfill the true entomic and urinary lines of flight originally proposed by the residents of 'Termite Terrace'.

Contemporary popular music provides its own challenge to western art music, through its use of montage, fragmentation, quotation and pastiche - elements shared in common with the compositional work of Carl Stalling. But Stalling's music, by nature of the fact that it was composed to work with images, also featured the sonic illustration (by morphological sound effects) of visual events, pejoratively termed 'Mickey Mousing'. Western art music has tended to reject those sounds it considers unmusical. It has always held itself - and has been held up to be - the abstract art form par excellence. Extramusical or worldly sounds have no place in Western art music. The intertextuality of Stalling's work is an intrusion - albeit musicalised - of the outside into the sealed world of Western music. Forms of dissonance, a devaluation of the classical, the introduction of the incomplete and a lack of closure all point to a radical trajectory in the work. And yet it is cartoon sound's adherence to the image, its lack of isolation, that is most radical. Mickey Mousing punctures the bubble in which western music has placed itself, forcing an acknowledgement of an 'outside', an other: in this case, the visual. Not only does Mickey Mousing destroy the notion of an isolated specificity, of an abstraction from all else, but it also introduces ideas of other kinds of structuration and other ways of considering structure.

Within the cartoon all these potentially turbulent elements are clearly contained. It is only when taken out of the context of the cartoon and compared with other music that cartoon music begins to build a space for itself - the space we might expect. This space is now provided by live audio-visual composition. I use the notion of the refrain - in the terms proposed by Deleuze and Guattari in their essay 1837: Of the Refrain - to examine the mechanics by which a morphology shared by image and sound bites out its own audio-visual territory as live composition progresses.


Gabriella Braun - Antarctic Waves
gabi@braunarts.com

ANTARCTIC WAVES will offer a unique collection of creative tools for making music inspired by the important scientific work being explored in Antarctica. It uses as its unifying principle the concept of waves.

Waves represent:
- the methods used by scientists to both measure and understand Antarctica,
- the ways in which we both make and hear (and sometimes see) music,
- a series of metaphors connecting this scientific work with musical ideas.

The central concept for these music making software tools is a new idea that extends the notion of visualising data. Scientists who need to understand complex data sets are familiar with the idea of representing data graphically and sometimes dynamically. Recent developments in hardware and software have provided a new range of powerful software tools for visualising data. We have taken this idea one sense further and designed software that enables the user to musicalise data - in other words software that allows us to hear scientific data as well as see it. The results will be offered as a creative toolkit that draws on scientific data as the basis for creating music.
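A minimal sketch of what "musicalising" a data series could look like: scale each sample into a MIDI note range so the contour of the data can be heard as melody. The note range, the linear scaling and the toy data are assumptions for illustration; the abstract does not describe the actual mapping used by the Antarctic Waves toolkit.

```python
def musicalise(samples, low_note=36, high_note=96):
    """Map a numeric data series onto MIDI note numbers (hearing the data)."""
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1  # avoid division by zero on flat data
    notes = []
    for x in samples:
        # Linear scaling of each sample into the chosen pitch range.
        notes.append(low_note + round((x - lo) / span * (high_note - low_note)))
    return notes

# e.g. a hypothetical 'ice layer thickness' series becomes a melodic contour
layers = [2.1, 2.4, 3.0, 2.8, 2.2]
melody = musicalise(layers)
```

The same scaling idea would apply to any of the data sets mentioned below, from albatross tracks to Whistler recordings.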

Antarctic Waves will be distributed freely to each UK secondary school or college where GCSE & A-Level music or equivalent qualifications are taught. This market is recognised as being substantially under-provided for in terms of composition resources as most emphasis is placed on the history, theory or performance of music. Teachers are known to generally lack confidence in teaching composition, and SciArt projects are very rarely considered, much less taken on. Antarctic Waves aims to address this as well as aiming to have a wider appeal to a non schools-based commercial market.

Antarctic Waves will encourage young musicians to create new musical works inspired by the last great wilderness. The interactive ‘musicalising toolkit’ will encourage students to use their own musical styles to give musical expression to data that reflects global problems, such as climate change and ozone depletion, creating a cross-cultural bridge between science and arts. These musicalising software tools are unique to Antarctic Waves and are based on an original idea by the Braunarts team being developed specifically for this project. Antarctic Waves will tap into young people’s deep interest in environmental issues and explore Antarctica’s importance in understanding our fragile planet.

Musicalised data includes satellite-tracked Wandering Albatrosses, infrared imagery of the atmosphere, 'Whistlers' - electrical disturbances created in the northern hemisphere and heard at the Antarctic, and ice layers in a glacier.

Antarctic Waves is a unique collaboration between the British Antarctic Survey, Braunarts and the Philharmonia Orchestra.

www.antarcticwaves.com
www.braunarts.com


Jonah Brucker-Cohen (USA) - Physical Web Interfaces
Research Fellow, Human Connectedness Group, Media Lab Europe, Dublin, Ireland
jonah@coin-operated.com

Physical Web Interfaces is a concept that attempts to change our relationship to computers and conventional Internet interfaces by bringing virtual processes into the real world. As interfaces change from the desktop metaphor into new organic forms, there is a realization that we - not the computer - have become the ultimate interface for interactive media. By attaching new physical inputs (besides the keyboard and mouse) to things we perceive as truly virtual, we add a socio-critical dimension to the interaction of people and machines when they meet at the interface. Thus we can make the intangible tangible. In this presentation, "Physical Web Interfaces", I will explore our relationship to computers from a purely human perspective - where machines must adapt and respond to us, not us to them. I will also demonstrate ways we can connect virtual net spaces with our physical surroundings by discussing several of my projects including "Crank The Web" - a personal control for bandwidth, "Site-Traffic.net" - a two-way tele-presence musical sequencer, "LiveWindow" - sensory input to a browser, "SearchEngine" - a physical search engine, "SpeakerPhone" - a telepresent sound environment, as well as projects that allow for artistic collaboration between people with mobile devices.

Jonah Brucker-Cohen works as a Research Fellow at Media Lab Europe in Dublin, Ireland. He received an MPS from the Interactive Telecommunications Program at New York University's Tisch School of the Arts, NYC, and worked there from 1999 to 2001 as an Interval Research Fellow creating interactive digital and networked projects. His writing has appeared in WIRED Magazine, I.D. Magazine, Print Magazine and Time Out New York; he is an Internet music columnist at Magnet Magazine and was chosen as a nominating judge for the 2000-2001 Webby Awards.

http://www.coin-operated.com
http://www.coin-operated.com/projects


Stefan Cartwright - CUBOP
stef@rechord.com

CUBOP, a compositional instrument, is an instrument for creating and playing with moods: a kind of audio-visual wallpaper which you can personalise to taste. Like any instrument there are certain rules which, when applied, create a more pleasing effect. This piece came to us as we developed our free-jazz programming style. The joy of this is that, unlike most computer processes, we have no preconception of what the interface will sound like or how it will behave. We don't use a score. We improvise until a coherent language emerges. This is not the end of the story. Unlike jazz, the audience is a vital part of the composition process. We provide mood-morphing interactive environments. The audience plays the melody.

rechord are artist/designers working with interactive media, less interested in prescribing experiences than in evolving environments and communication strategies which engage and inspire playful participation.

www.rechord.com


Nick Collins - Interactive Evolution of Breakbeat Cut Sequences
n.collins@mdx.ac.uk

This research project combines two previous lines of investigation based on SuperCollider 2 (McCartney 1998) class libraries for breakbeat cutting (Collins 2001, 2002a) and interactive genetic algorithms (Collins 2002b). The BBCut Library provides support for experiments in audio cutting, facilitating the writing of new cut procedures without rewriting synthesis code. The GAParams Library has base classes for genetic algorithms and user interface tools for interactive evolution.

The space of possible cut sequences for an audio cutting algorithm is a massive mathematical territory, but one that can be effectively explored via a genetic algorithm in collaboration with a human auditor. The paper relates a number of experiments based on the evolution of algorithmic composition routine parameters. Dahlstedt (2001) showed great potential in the combined optimisation of synthesis and algorithmic composition parameters (low- and high-level perceptual entities) and we build upon this foundation.

The parameter space of the original automatic breakbeat cutting algorithm, BBCutProc11, is explored. A new cut procedure, MotifCutProc, which manipulates an intermediate level of hierarchy between block and phrase, is introduced. Motifs for the routine are evolved using the genetic algorithm. The simultaneous evolution of multiple layers of breakbeats is shown to be a powerful method of arriving at polished resultants, as long as a few restrictions are in place to enforce a degree of synchrony between voices. The results are promising and open up a world of possibilities for developing effective custom audio cutting solutions.
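The interactive-evolution loop the paper describes (implemented in the SuperCollider BBCut and GAParams libraries) can be sketched in outline. This toy version evolves cut-sequence parameter vectors against a human auditor's ratings; the genome length, mutation operator and selection scheme are assumptions for illustration, not the actual library internals.

```python
import random

def random_genome(length=8):
    """A genome: a vector of cut-sequence parameters in [0, 1]."""
    return [random.random() for _ in range(length)]

def mutate(genome, rate=0.2, amount=0.1):
    """Perturb some genes; mutation drives exploration of the cut space."""
    return [min(1.0, max(0.0, g + random.uniform(-amount, amount)))
            if random.random() < rate else g
            for g in genome]

def evolve(population, ratings, keep=2):
    """One generation of interactive evolution: the human auditor's
    ratings act as the fitness function; the best genomes breed."""
    ranked = [g for _, g in sorted(zip(ratings, population),
                                   key=lambda p: p[0], reverse=True)]
    parents = ranked[:keep]
    children = [mutate(random.choice(parents))
                for _ in range(len(population) - keep)]
    return parents + children

# One listening round: the user auditions each cut sequence and rates it.
pop = [random_genome() for _ in range(6)]
pop = evolve(pop, ratings=[3, 5, 1, 4, 2, 2])
```

The "collaboration with a human auditor" is the key design point: fitness is not computed, it is heard and rated each generation.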


Walter Fabeck - Introduction to the Chromasone
walter@chrom.demon.co.uk

Since 1994 I have been developing and refining a unique gestural interface. This project started when I was studying Sonology at the Royal Conservatory of the Netherlands; STEIM in Amsterdam gave me support to develop the instrument. I have performed with it at Paradiso, Amsterdam; Podewil, Berlin; Mercat de les Flors, Barcelona; and at numerous events in London and the UK, including the "HyperEvent" and "Treason of Images" at the South Bank Centre; the Disobey club at the Garage; the 291 Gallery in Hackney; Abney Park Cemetery, Stoke Newington; and the roof of the Royal Festival Hall. Collaborative commissions have integrated my performances with film, theatre, video and contemporary dance.

WHAT IS THE CHROMASONE?
The instrument was conceived to perform electronic music with a uniquely responsive interface, allowing an unprecedented degree of expressive control over numerous sonic parameters, whilst at the same time examining the fundamental issue of performer-system interaction: the two-way process of control and feedback.

The earliest form of the instrument consisted of a pair of Datagloves equipped with ultrasound transmitters. In this system the signals from the bend sensors and the ultrasound sub-system (for spatial mapping) were combined in a STEIM Sensor-Lab to produce MIDI output for external synthesizers and samplers. The left-right or 'x-axis' mapped to pitch, with a 128-note 'virtual' MIDI keyboard calibrated to the exact dimensions of an acoustic piano. The front-back or 'y-axis' was mapped to volume.
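As a sketch, the x/y mapping described above might look like the following. The normalised coordinate ranges, the scaling and the MIDI channel are assumptions for illustration, not STEIM SensorLab specifics.

```python
def hands_to_midi(x, y, x_max=1.0, y_max=1.0):
    """Map hand position to a MIDI note-on message:
    x-axis -> pitch on a 128-note 'virtual keyboard',
    y-axis -> volume (velocity)."""
    note = min(127, int(x / x_max * 128))      # left-right position -> pitch
    velocity = min(127, int(y / y_max * 128))  # front-back position -> volume
    return (0x90, note, velocity)              # note-on status, channel 1

# e.g. hands at quarter width, half depth
status, note, vel = hands_to_midi(0.25, 0.5)
```

Calibrating the x-axis to the exact dimensions of an acoustic piano, as described, would simply fix `x_max` to the physical keyboard width.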

This concept remains at the core of the instrument, but there have been many refinements, including the addition of a vertical 'z-axis' for timbral control; extra switches on the gloves; and radical visual design and construction by Tim Gravestock, which gives physical definition to the planes of movement through a chromium and perspex structure supporting foot sensors, ultrasound receivers and a 2.1-metre-long "pointer". This consists of a thin rod on a pivot which allows the x-axis to be rotated and tilted with respect to the 'y' and 'z' axes: practically speaking, this enables the pitch field to be rescaled with respect to the volume and timbre fields through physical gesture, drastically altering the behaviour of the sonic palette to which the interface is linked. This, in combination with the auxiliary switches and foot sensors, gives the Chromasone its unique identity.

www.chrom.demon.co.uk


Matt Flax - Digital Time Code System for non-linear music
flatmax@ieee.org

A time code system is described which supports both linear and non-linear alteration of multimedia systems and streams. It is adaptable to any time code protocol and is computationally inexpensive.

http://mffmtimecode.sourceforge.net/


Dan Gardenfors - Designing sound-based computer games
danga@interactiveinstitute.se

This project deals with creating sound-based computer games, which are games that use sound as their main medium. Auditory games can be described as computer applications somewhere between the fields of interactive music and auditory interfaces. While most existing interactive audio applications resemble virtual worlds that allow the user to explore ‘compositions’, auditory games characteristically have clearly defined goals. Still, there is no clear border between what is a ‘sound toy’ and what is an auditory game.

Many interactive audio creations depend on some form of graphical interface to communicate the ways in which the sounds can be manipulated. I try to design applications that do not depend on graphics, so the games can be played with sound as the only output. This approach enables visually impaired users to play the games. So far, all my work on sound-based games has been done in collaboration with the Swedish Library of Talking Books and Braille (TPB). The games are intended to be accessible and fun for everyone who likes to listen to music and play with sounds in general.

Today, only a limited number of sound-based games exist and many of them are basically speech-synthesised versions of text-based games. However, as the use of speech in games limits the number of simultaneous sounds and makes real-time interaction difficult, I try to create games with as little speech as possible. Instead, these games rely on sonic illustrations, corresponding to the images of more modern graphic-based games. Illustrating events with sound only is often quite difficult, since it is not as customary as is graphic illustration. I believe that if a language of sonic illustration would be established, it could generate a completely new genre of games that can be played with a pair of headphones and a game pad only. It is, in any case, clear that designing fun and functional, as well as aesthetic, sound-based games is a great challenge. The appeal of such games, as with all computer games, basically depends on two factors: the actual concept, task or game objective, and on the design, which in this case is the sound design.

In 2001, I developed two simple auditory games together with TPB: a memory game, and a game based on the classic mathematical problem, The Towers of Hanoi. These games convey simple images by the use of illustrative sounds, reducing the use of speech to a minimum.

The instructions are presently only available in Swedish, but the actual games contain no words. Currently, I am exploring real-time interaction in the Tag Game. This game does not wait for user input the way the previous puzzle games do. The Tag Game will be published by TPB in June 2002.

www.tpb.se/spel/ljudspel/index.html


Adam Hoyle & Lewis Sykes - the iRiealism project
adam@suckmypixel.com
iriealism@lewissykes.com

The iRiealism project started in May 2001; work has included a live MIDI-controlled visual show for Landslide at the Big Chill Festival at Lulworth Castle, live VJing at the Traffic monthly club night in Old Street, and at the post-tour Xmas party for Ash. Current development of the iRiealiser, a live audio-visual engine, has resulted in a piece for soundtoys.net. Ultimately the duo intend to play their real-time controlled audio-visual works to a wider public.

The website includes an overview of the project, a log of the development and online demos of work. The iRiealism project is the realisation of a long term association between the two collaborators who have basically pissed about together with music, film and technology for over ten years.

Adam Hoyle
As lead programmer for seminal arts collective AudioRom, Adam has been working at the bleeding edge of interactive music since the mid-Nineties. From AudioRom's acclaimed "Shift Control" through to "Antarctic Waves", his more recent collaboration with Braunarts and the British Antarctic Survey, he has always pushed the envelope of the medium. The iRiealism project lets Adam become the live performer he’s always wanted to be.

Lewis Sykes
With a background in music, experimental film and graphic design, Lewis fused these interests together through an MA in Hypermedia Studies at Westminster University. His subsequent involvement with Cybersalon, Ninjatune and PirateTV focused his approach to interactive multimedia. The iRiealism project lets Lewis build the real-time, audiovisual instruments he’s always imagined in his head.

www.iriealism.com


Mark d'Inverno and John Eacott - Intelligent Technologies and Generative Processes for Embedded Ambient Music
M.dinverno@wmin.ac.uk
john@informal.org

A consequence of cheap and accessible processing power is a shift in the pattern of music consumption, from passive consumer to active creator. In addition, it enables everyday objects to be imbued with processing that lets them communicate with each other and make decisions based on their current state and that of their environment. This leads to the notion of Ambient Intelligence, where intelligent computational entities are interwoven into the very fabric of our lives. We envisage a scenario in which many such music devices not only make intelligent, sympathetic decisions about sound generation but also interact with other devices, musical and otherwise, in a massively dynamic and unpredictable environment.

The field of computer science concerned with building systems that are inherently distributed, dynamic, open and social in this sense is known as intelligent agents. In this paper we set out some possibilities for what we call Embedded Ambient Music, where intelligent, interactive music generation could continually and dynamically surround and sustain our day-to-day existence.

www.informal.org/eam/


Charles Kriel and Gary Carter - VJ Kriel
charles@kriel.tv

VJ Kriel is Radio 1's first resident VJ, performing at all of BBC Radio 1's live dance events, as well as serving as Resident VJ for Pete Tong's Essential Selection. He has been cited by The Times as club culture's first superstar VJ, and regularly gigs in Ibiza, Ayia Napa, and across Europe. Since Spring 2000, he has performed for nearly 1.5 million clubbers internationally.

Kriel has organised a compact system that allows him to "scratch and mix" video in the same way a DJ "scratches and mixes" records. He works regularly with Pete Tong, Judge Jules, Seb Fontaine and a selection of the world's best DJs, and has performed at BAFTA, the Muzik Awards, film and CD launch parties, and every major club and dance event in the UK, Ibiza and Ayia Napa.

In addition, he is about to complete his PhD at Central Saint Martins on the psychology of noise and how it changes as technology upgrades. He has also received an award from Prix Ars Electronica for his electroacoustic music composition.

Gary Carter and VJ Kriel have recently been giving a series of joint talks; they spoke at the b.tv conference on the future of social-creative internet practice. Gary, in addition to working extensively as a choreographer and performance artist, originated Big Brother as well as several other programmes, and has been producing them internationally for Endemol.

www.bbc.co.uk/radio1/dance/kriel.shtml
www.bbc.co.uk/radio1/festival_gallery/index.shtml


Marc Lafia - Algorithms and Allegories
marclafia@earthlink.net

Much of contemporary art practice is both produced and can be read with the notion of the algorithmic as its predominant trope. Similarly, one can read older art practices as working under the aegis of allegory. Of course these registers and metaphors can be used to parse a distribution of artistic and cultural production across time in multiple directions. With the advent of computation and the network, more and more contemporary art and sound work turns its attention to sequencing, loops, replication, modulation, mutation, generative systems, databases and interfaces as instruction sets or grammars, both as ways to conceptualise work and to produce it. The paper looks at a wide range of stratagems in works of sound, architecture, visual arts and film to illustrate a correspondence between the allegoric and the algorithmic. The talk aims to encourage both practitioners and theorists to engage these two notions, the allegoric and the algorithmic, as a way to consider and produce work.

Marc Lafia works as a conceptual artist, filmmaker and information designer in San Francisco. Since receiving an MFA from UCLA in 1989, he has carried on an art, design and concept practice ranging from conceptualising music videos for Madonna, to developing video games, adapting comic books, teaching at the Pasadena Art Center College of Design, working with the Museum of Modern Art, New York, exhibiting films at Rotterdam, Seattle, Mill Valley, and other international Film Festivals and showing media work at the ZKM, The Walker, The New Museum in New York and other international venues.

Marc is also founder and information architect of the highly acclaimed ArtandCulture.com which allows for the experiential, contextual and associative exploration of the arts. He is currently working for the ICA in Boston as a visiting curator and preparing several installations including a work for the first International Video Biennale in Israel.


Rikard Lundstedt (Sweden) - ForeSite Composer, virtual sound environment
rikardl@interactiveinstitute.se

In collaboration with Peter Warren of the Interactive Institute in Malmo I am developing a tool for two-dimensional composition of interactive music. The pieces you create with this tool, 'Sound Rooms', are technically 3D game levels with tones in them. The stage is a big black room with a white grid on the floor. Different static tones are placed at different locations in the room.

When you walk around, steering your virtual self with a joystick, you travel among the tones as if through a three-dimensional score, and the musical course of events is decided by the directions you take. The piece is furnished with tones rather than composed on a timeline.
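One plausible way to realise walking among static tones, sketched here purely as an assumption (the abstract does not give the actual mixing rule), is to scale each tone's amplitude by the listener's distance to it on the floor grid:

```python
import math

def mix_levels(listener, tones, rolloff=1.0):
    """Amplitude for each static tone as the listener walks the grid:
    nearer tones sound louder (simple inverse-distance rolloff)."""
    x, y = listener
    levels = {}
    for name, (tx, ty) in tones.items():
        d = math.hypot(tx - x, ty - y)
        levels[name] = 1.0 / (1.0 + rolloff * d)  # 1.0 when on top of the tone
    return levels

# A tiny hypothetical 'Sound Room': three tones placed on the floor grid
room = {"C": (0, 0), "E": (4, 0), "G": (2, 3)}
levels = mix_levels((0, 0), room)
```

Under this rule the "composition" is literally the path: the joystick trajectory determines which tones swell and which recede.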

The tool we are developing, ForeSite Composer, is a modification of ForeSite Designer, a concept-development tool for architecture and workspace design.
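
A minimal sketch of the idea of "furnishing a room with tones": each static tone's gain falls off with the listener's distance to it, so the mix is decided by where you walk. The tone placements and the inverse-distance attenuation law here are illustrative assumptions, not ForeSite Composer's actual implementation.

```python
import math

# Hypothetical tone placements on the floor grid: (x, y, base frequency in Hz).
TONES = [(0.0, 0.0, 220.0), (4.0, 0.0, 330.0), (2.0, 3.0, 440.0)]

def mix_gains(listener_x, listener_y, rolloff=1.0):
    """Gain per tone under a simple inverse-distance attenuation law."""
    gains = []
    for x, y, _freq in TONES:
        d = math.hypot(listener_x - x, listener_y - y)
        gains.append(1.0 / (1.0 + rolloff * d))
    return gains

# Standing on the first tone, it dominates the mix; the others recede.
gains = mix_gains(0.0, 0.0)
```

Navigating a path through the room then amounts to sampling this function along a trajectory, which is what makes the result a composition in space rather than in time.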

http://space.interactiveinstitute.se/staff/rikard.lundstedt/

<back>

Robin McGinley (UK) - Earth's Original 4 1/2 Billion Year Old Electronic Music Composition (A Work in Progress)
robinmcginley@hotmail.com

At any one moment there are several electrical storms in progress around the planet. This installation takes as its starting point, and explores, the interception of impulsive electromagnetic signals generated by lightning. A considerable proportion of radio atmospherics is due to the direct and indirect effects of electrical storms on the upper layers of the atmosphere.

Through a network of inputs and outputs, utilising both antique valve-based short-wave radio equipment and the latest DSP computer technology, the installation allows us the opportunity to hear the Earth's own natural electro-acoustic composition, which is as old as the planet itself and is continuously unfolding around us.

The input channels of the system, derived from a combination of real-time reception of short-wave atmospheric emissions and digital recordings of sferics (short for VLF (Very Low Frequency) atmospherics) and natural thunder, are fed through a constantly cycling sequence of DSP effects which further transform the natural material. Triggers distributed throughout the space operate each input channel, enabling the channel for a set period of time before a fade-out. The four output channels are also controlled via another series of triggers, which depend on the population and activity within the space.

The installation thus creates a seven-bit time-sampling matrix giving 128 temporal variations and, like the natural composition itself, is unlikely ever to repeat. The work also allows the audience an unusual proxy control over the manifestation of an elemental force of nature.
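
The arithmetic behind the "128 temporal variations" claim: seven on/off triggers form a seven-bit state, so the matrix has 2^7 = 128 possible configurations. A hypothetical sketch (the packing order is illustrative; the installation's actual channel layout is not specified here):

```python
from itertools import product

# Seven on/off triggers, one bit each: every combination is one matrix state.
states = list(product((0, 1), repeat=7))

def state_index(bits):
    """Pack seven on/off triggers into a single 0-127 state number."""
    n = 0
    for b in bits:
        n = (n << 1) | b
    return n
```

Since the triggers are driven by audience movement, which state the matrix occupies at any moment is effectively non-deterministic, hence the claim that the piece is unlikely ever to repeat.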

<back>

Iliyana Nedkova - Sounds From Near and Far
Curator-in-residence, New Media Scotland
Associate Curator, Foundation for Art & Creative Technology (FACT)
iliyana@mediascot.org

Sounds From Near and Far is a series of Internet audio art streams connecting Sofia, Edinburgh and Liverpool. It will be premiered as part of the Micro-festival of Digital Culture, Sofia, April 2002, a collaborative project with the Sofia-based Red House Centre for Culture & Debate and the Interspace Media Art Centre. Broadcasts will be presented on-site, on-air and on-line at the tentative festival venues, including the National Fine Arts Academy, the British Council and Radio France International (RFI) in Sofia, and on the New Media Scotland website.

New Media Scotland is an Edinburgh based agency enabling cultural activity shaped by new technologies. We provide information, support research and development in new media, and create opportunities for artists. New Media Scotland produces, exhibits and tours digital artworks across Scotland, the UK and internationally.

www.mediascot.org

<back>

Fredrik Olofsson (Sweden) - Wanna form a virtual band?
fredrikolofsson@mac.com

My idea has been to build a simple and easy-to-use framework for webjamming using the OpenSound Control (OSC) protocol and SuperCollider by James McCartney. The framework now exists, and it lets members of our virtual band contribute their own favourite instruments, i.e. sounding patches written in SuperCollider, to be played by themselves or by others live over the Internet. A short clip from one of our first jam sessions is located here: http://www.fatplastic.com/filez/samband2.mp3

The number of people participating in this virtual band varies from time to time, and the overall structure is very loose. At its peak there have been seven players from the U.S. and Europe jamming together using this framework. Whenever a concert gig comes up, we bring as many members as possible together over the net and make sure there is one computer running the current set of instruments at the concert location. As no audio data is ever transferred, just control information, we have no problems with latency or poor audio quality due to compression.

The framework is currently based on a peer-to-peer network structure, but that is subject to change in the near future. A client/server network has some significant advantages in this case, mostly concerning distribution of instruments within the network and dynamic connections. The new server application that I am now working on will let people log in, share instruments with other connected players and jam, all using just a simple SuperCollider client patch.
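
To make concrete why sending control information instead of audio avoids the bandwidth problem: an OSC message is just an address pattern, a type-tag string and a few arguments, each padded to 4-byte boundaries, so a "play note" event is a couple of dozen bytes rather than a continuous audio stream. A minimal hand-rolled encoder for illustration (in practice SuperCollider speaks OSC natively; the `/band/note` address is a made-up example):

```python
import struct

def osc_pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, as OSC requires."""
    b += b"\x00"
    while len(b) % 4:
        b += b"\x00"
    return b

def osc_message(address: str, *args) -> bytes:
    """Encode a minimal OSC message with int, float or string arguments."""
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)
        elif isinstance(a, str):
            tags += "s"
            payload += osc_pad(a.encode())
    return osc_pad(address.encode()) + osc_pad(tags.encode()) + payload

# One "play note" control event: 24 bytes on the wire.
msg = osc_message("/band/note", 60, 0.8)
```

A whole jam session's worth of such messages is tiny compared with even heavily compressed audio, which is why latency and compression artefacts drop out of the picture.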

http://olofsson.da.ru

<back>

Alexandre Plennevaux (Belgium) - LAB[au] laboratory for architecture and urbanism
alexandre@lab-au.com

We recently developed an online project, 'space, navigable music' (http://www.lab-au.com/space), which consists of a 3D world (based on the VRML language) that the user can edit through navigation (recording their x, y, z positions) and by dragging and dropping WAV sounds into the space. When dropped, these sound files are represented in the 3D space as spheres, and provided the user's computer has a quadraphonic sound system, the sounds are spatialised: if a sphere is to the user's right, they hear the sound from the right; if it is behind them, they hear it from behind; and so on. The user can additionally change the pitch of any sound sphere and change its volume (acoustically and spatially).

Additionally, the user can record their positions in space and then have the recorded string of positions played back as a camera travelling through the sequence of recorded coordinates. In this manner the project proposes a navigable music video clip.
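
A hedged sketch of the kind of spatialisation described, not LAB[au]'s actual code: given the listener's position and heading and a sound sphere's position, compute the source's bearing and distribute its gain across the four speakers of a quadraphonic layout with a simple cosine panning law. The speaker angles and function names are illustrative assumptions.

```python
import math

# Speaker azimuths for a quadraphonic layout, in radians:
# front-left, front-right, rear-right, rear-left.
SPEAKERS = {"FL": math.radians(-45), "FR": math.radians(45),
            "RR": math.radians(135), "RL": math.radians(-135)}

def quad_gains(listener_xy, listener_heading, source_xy):
    """Distribute a source across four speakers by its bearing from the listener."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    bearing = math.atan2(dx, dy) - listener_heading  # 0 = straight ahead
    gains = {}
    for name, azimuth in SPEAKERS.items():
        # Cosine panning: a speaker contributes only when the source lies
        # within 90 degrees of its direction.
        gains[name] = max(0.0, math.cos(bearing - azimuth))
    return gains

# A sound sphere directly to the listener's right feeds the two
# right-hand speakers equally and leaves the left pair silent.
g = quad_gains((0.0, 0.0), 0.0, (1.0, 0.0))
```

As the user navigates, re-evaluating these gains each frame is what makes a sphere on the right sound from the right and a sphere behind sound from behind.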

http://www.lab-au.com

<back>

Matt Rogalsky - Radio Silence
mrogalsky@mail.wesleyan.edu

I would like to present my 'radio silence' installation Ellipsis. It was first shown last fall at Diapason Gallery in New York, and then at Sleeper in Edinburgh, for a month each time. It was also up for a day recently at the Slade Gallery during the Voice and Technology symposium. There is an image in this series:

http://mrogalsky.net/portfolio

<back>

Nick Rothwell - cassiel - symbolic sequencing technology
nick@cassiel.com

Nick Rothwell (cassiel) has been developing and using symbolic sequencing software in live performance for several years. The custom-built software uses a binding language which can be dynamically reprogrammed in performance, resulting in organic and constantly changing patterns of notes and timbres which are a fluid hybrid between loop-based sequencing and programmable arpeggiation. The sequencer can be controlled by keyboard or MIDI control surface - we have used fader boxes as well as a Buchla Thunder touch-sensitive MIDI controller. The system has been used live for contemporary dance projects (Edinburgh Cyberia Cafe, Edinburgh Fringe, Traquair Fair). The software is implemented as a native external for Cycling '74's Max/MSP, and runs on a G3 PowerBook, controlling either MSP audio systems or outboard hardware (Korg OasysPCI DSP farm, Nord MicroModular).
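
A hedged sketch of the general idea of symbolic sequencing, not cassiel's actual binding language: a loop of abstract symbols keeps playing while the binding from symbol to pitch is reprogrammed live, so the pattern mutates organically without the sequence ever stopping.

```python
from itertools import islice

def symbolic_sequence(pattern, bindings):
    """Loop over abstract symbols forever, resolving each one through a
    mutable binding table at the moment it is played."""
    while True:
        for symbol in pattern:
            yield bindings[symbol]

# MIDI note numbers; the binding table can be rewritten mid-performance.
bindings = {"a": 60, "b": 64, "c": 67}
seq = symbolic_sequence(["a", "b", "c", "b"], bindings)
first_bar = list(islice(seq, 4))   # [60, 64, 67, 64]
bindings["b"] = 62                 # live rebinding
second_bar = list(islice(seq, 4))  # [60, 62, 67, 62]
```

Because resolution happens at play time rather than at composition time, the result sits between loop-based sequencing (the fixed pattern) and programmable arpeggiation (the live rebinding).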

Nick Rothwell is a freelance composer/sound designer/programmer who has worked for choreographers and producers in the UK and US, as well as in Istanbul, Vienna and Frankfurt. Recent projects include nonlinear video/audio cueing systems for dance projects at the Vienna Volksoper and Ballett Frankfurt. Future projects include live performance for Colourscape (September) and MSP sound programming for a Choreodrome dance project at the ICA (October).

http://www.cassiel.com

<back>

Tim Sayer - the role software interfaces can play in improvised music
timsayer@fc.marjon.ac.uk


As far as my research is concerned, I am keen to explore the role that software interfaces can play in improvised music. I am in the process of writing a paper which attempts to locate the origins of the improvisational process within the realm of human cognition and to explore this in relation to language production. The evolution of an idea from the subconscious to the conscious mind is a particularly fascinating area for me, and I hope to develop software interfaces that interfere with or rupture this process, using performance systems which listen to performers and provoke them to behave in ways contrary to their natural behaviour. My aim is to develop listening agents for SuperCollider which have visual behaviour and properties governing their appearance in Director. I have managed to get Director and SuperCollider working together, and to have different aspects of the agents' visual behaviour create sonic material in SC. The listening part is going to be the big challenge for me. I have a working interface which generates real-time material and passes it to, and affects, a visual interface in Director, which would form an effective projected installation.

Along the way, it seems to me that I will have to adopt a design method for creating the sound agents in SuperCollider. I have seen very little published material about this in relation to generative music, but I am sure it would be of great benefit to the generative music community: designing a linear sequence is easy, whereas designing a dynamic process is not.
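
To illustrate the linear-sequence versus dynamic-process distinction with a toy example (a hypothetical sketch, not Sayer's system): a linear sequence is a fixed list of events, while a dynamic process is a rule that reacts to what the performer just did. Here, an agent proposes a note from the contrary register whenever the performer settles into one.

```python
def contrary_agent(recent_notes, pivot=60):
    """A toy 'provoking' rule: if the performer has settled into one register,
    suggest a MIDI note from the opposite one."""
    if not recent_notes:
        return pivot
    avg = sum(recent_notes) / len(recent_notes)
    # Performer playing high -> provoke with a low note, and vice versa.
    return pivot - 12 if avg >= pivot else pivot + 12

suggestion = contrary_agent([72, 74, 76])  # high register -> low provocation
```

The rule itself is trivial, but it cannot be written down as a sequence in advance; its output exists only in relation to the performer's behaviour, which is exactly what makes designing such processes harder than designing a timeline.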

<back>

<top>