Thursday, 27 March 2014

12 Post - Performance

I demonstrated my finished work during my presentation on 24.04.2014. Here is the process I have to go through in order to set up the performance.


  • Install the master patch on the main computer
  • Log in to 11 receiving computers, ideally ones facing into the room
  • Check and write down the IP address of each computer
  • Install the receive patch on each computer
  • Check the audio settings are correct on each computer
  • Change any IP addresses that are different in the master patch
  • Start the composition and check the audio and visuals are working on each machine
  • Put each receiving computer into display mode
  • Start the composition
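The IP-checking step above is the fiddliest part of the setup, so here is a small Python sketch of the bookkeeping involved: comparing the addresses written down at each receiving computer against the defaults stored in the master patch, and listing which ones need changing. The machine names and addresses are invented examples, not the ones used in the performance.

```python
# Compare measured IP addresses against the master patch's stored
# addresses and report which entries need updating. All names and
# addresses below are made-up placeholders.

def addresses_to_update(master, measured):
    """Return {machine: new_ip} for every machine whose measured IP
    differs from the address currently in the master patch."""
    return {name: ip for name, ip in measured.items()
            if master.get(name) != ip}

master_patch = {"mac01": "192.168.1.101", "mac02": "192.168.1.102"}
checked =      {"mac01": "192.168.1.101", "mac02": "192.168.1.112"}

print(addresses_to_update(master_patch, checked))  # only mac02 changed
```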
Here is a video of the finished work. Unfortunately it is very difficult to capture the surround-sound experience of the piece on video, but this gives an approximation.


11th Post - Network Composition Research

As I developed my networking patch I began to look at the work of other artists and groups who use networks in their work. John Matthias, an associate lecturer at Plymouth Uni, has been part of a team which developed The Fragmented Orchestra, a network composition that takes fragments of live audio streams from locations all around the UK. Below is a better description of, and a link to, the work.

The Fragmented Orchestra is a distributed musical instrument which combines live audio streams from geographically disparate sites, and granulates each according to spike timings of an artificial spiking neural network. 
                                                                                                                                    Grant (2009)

http://www.academia.edu/4067716/The_Fragmented_Orchestra_by_Dan_Jones_Tim_Hodgson_Jane_Grant_John_Matthias_Nicholas_Outram_Nick_Ryan



This project is on a far larger scale than mine but some of the principles are the same. However, the Fragmented Orchestra has a far greater level of interactivity and this is an area that I would like to develop further in my own work.


The Fragmented Orchestra is fundamentally a distributed system, comprised of an interconnected network of communication nodes. Given that these sites are scattered through-out the length and breadth of the UK, it is entirely reliant on the availability of a network infrastructure that is capable of transmitting audio data in real-time over a great distance. Indeed, this kind of project has only been rendered technically feasible in recent years, courtesy of the rapidly accelerating rate of consumer-grade internet connectivity. 
                                                                                                                                     Grant (2009)

Locus Sonus are a group based in the south of France but with contributors from all around the world. Their work has some similarities with the Fragmented Orchestra in that it uses audio streams from different localities which are then utilised in performances and installations. Here is a description of their work from their website. http://locusonus.org/

The main part of our current investigation concerns the transport of sound (and sound ambiances) which has lead to the construction of streaming systems as well as sensorial and experiential environments which favour different listening experiences, synchronous and asynchronous, local, distant, geographically identified, « autophone » and « chronotope »: the networked sonic spaces. Our use of streaming technology is unusual in that it consists of a network of « open mikes » (web-mikes) which continuously transmit the unadulterated (in so far as that is possible) sound of the environment in which they are placed: sounds which carry with them the sense of the space in which they propagate not so much sound sources as sound « reservoirs » . In all cases the question is one of « sounding out » spaces and the perception of their site-specific (in-situ) and time-specific (in-tempo) nature - atmosphere, architecture, expanse, contextualization, soundscape, perceptual appropriation - are some of the elements taken into account in the setting up of these microphones. 


Here is a video of one of their networked audio pieces.


Grant, J. 2009. The Fragmented Orchestra by Dan Jones, Tim Hodgson, Jane Grant, John Matthias, Nicholas Outram, Nick Ryan. [online] Available at: http://www.academia.edu/4067716/The_Fragmented_Orchestra_by_Dan_Jones_Tim_Hodgson_Jane_Grant_John_Matthias_Nicholas_Outram_Nick_Ryan [Accessed 28.04.2014]

Locus Sonus. 2014. Locus Sonus – Audio In Art. [online] Available at: http://locusonus.org/ [Accessed 28.04.2014]

Wednesday, 26 March 2014

10th Post - Testing the Network and Adding a Visual Element

I tested the fully networked piece on 10.04.2014 and I was surprised at how well it worked. The audio was generally well synced and the composition was not drastically altered or out of time. However, the playback was not perfect and there seemed to be subtle timing issues that made it slightly different each time. This was mostly to do with how the rhythmic elements interacted, especially the moving elements such as the kick and hi-hats, which seemed to disappear sometimes. I assumed this was to do with the randomised gate system which made those parts jump around from machine to machine.



In the top right corner you can see the system for moving the kick and hats around. In this version, however, the kick and hats did not have their own individual buffer~ and groove~. I realised that I needed to make a different version of the composition where I bounced the kick, hats and melodic elements as individual stems, as opposed to this version where the 12 different audio files often had more than one sound.

This was yet another time-consuming aspect of the composition but it was necessary. David Strang helped me to improve the system for moving elements and suggested using the bonk~ object, which detects the attack of a sound, together with a counter so that the kick would move after a certain number of hits. Below you can see that the kick is now only sent out of 4 gates, which limits it to one end of the room. The hats, however, move through 8 machines, which gives the piece a more controlled and defined sense of space.
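To make the attack-counter idea concrete, here is a small Python model of it. The real patch does this in Max with bonk~ and a counter; this sketch just shows the logic: each detected attack increments a counter, and after a set number of hits the part's output is routed to the next gate in a restricted set (so the kick can be confined to 4 of the machines). The numbers are illustrative.

```python
class MovingPart:
    """Toy model of the bonk~ + counter routing: every `hits_per_move`
    detected attacks, the part moves to the next gate in a restricted
    set (e.g. the kick only uses 4 of the 12 machines)."""

    def __init__(self, gates, hits_per_move):
        self.gates = gates              # machine indices this part may use
        self.hits_per_move = hits_per_move
        self.hits = 0
        self.index = 0                  # position within self.gates

    def on_attack(self):
        """Called once per detected attack; returns the gate to open."""
        gate = self.gates[self.index]
        self.hits += 1
        if self.hits % self.hits_per_move == 0:
            self.index = (self.index + 1) % len(self.gates)
        return gate

# The kick stays at one end of the room, moving every 4 hits:
kick = MovingPart(gates=[0, 1, 2, 3], hits_per_move=4)
route = [kick.on_attack() for _ in range(8)]
print(route)  # [0, 0, 0, 0, 1, 1, 1, 1]
```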



The next thing to add was a visual feedback system using the computer monitors, and again David Strang was able to help me realise this idea. I wanted the screens to flash from black to white when that machine played a sound. David showed me how to use the peakamp~ and clip objects, as well as the swatch object, so the screen would move through the greyscale from black to white depending on the intensity of the sound.
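The amplitude-to-brightness mapping can be sketched in a few lines of Python. This is only a model of the idea (peak amplitude, clipped to a range, scaled to a grey value); the actual patch does it with the Max objects mentioned above, and the range values here are assumptions.

```python
def amp_to_grey(peak, floor=0.0, ceil=1.0):
    """Clamp a peak amplitude into [floor, ceil] (the clip step) and
    scale it to an 8-bit grey value: 0 = black screen, 255 = white."""
    clipped = max(floor, min(ceil, peak))
    return round(255 * (clipped - floor) / (ceil - floor))

print(amp_to_grey(0.0))  # 0   (silence -> black screen)
print(amp_to_grey(0.5))  # 128 (mid-intensity -> mid grey)
print(amp_to_grey(1.2))  # 255 (loud peaks clip to white)
```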



When I tested the new improved patch with the visual receive patches on 12 machines the effect was very impressive and made a huge improvement to the piece. The video below, shot on my phone, gives some idea of how it looks but fails to do it justice.
                  

 

Below is the final version of the main patch. Here there are duplicates of the melodic parts that I was able to mix into the composition live. The original idea was to make these duplicates go in and out of time with the composition, similar to Steve Reich's tape-phasing pieces. Unfortunately I did not have enough time to develop this, and I was afraid of making my patch more complex in case it stopped working before the final presentation.



Thursday, 6 March 2014

9th Post - Networking with max objects netsend/netreceive

After the last performance of the composition I decided that I needed to try networking the computers together in order to give me more control when testing and performing the piece. Using the netsend~ and netreceive~ objects in Max MSP it is possible to send audio from one main computer to be received on multiple machines. This means the start, stop, volume and playback speed can all be controlled from one machine. To make this work, I first built a small patch to test sending one or two tracks of audio.
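The underlying idea of streaming audio between machines can be sketched outside of Max: pack a block of sample values into a UDP datagram, send it to the receiver's address, and unpack it at the other end. The Python below is only a minimal illustration of that idea (both ends run on the same machine over loopback, and the port is an arbitrary example); in the piece itself the Max objects handle this between computers.

```python
import socket
import struct

# Send one block of audio samples over UDP and read it back.
# 127.0.0.1 and port 9000 are placeholder values for this sketch.
PORT = 9000

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", PORT))

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

block = [0.0, 0.25, -0.5, 1.0]                 # one tiny block of samples
payload = struct.pack(f"{len(block)}f", *block)  # pack as 32-bit floats
send_sock.sendto(payload, ("127.0.0.1", PORT))

data, _ = recv_sock.recvfrom(4096)
received = list(struct.unpack(f"{len(data) // 4}f", data))
print(received)  # the same block back, within float32 precision

send_sock.close()
recv_sock.close()
```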


This was successful, so I moved on to building a version which would play all twelve parts of my composition. I have yet to test this version, and the patch below shows the same IP address for each buffer~/groove~. When I test this version I will need to fill in the correct IP address for each machine. The patch also contains a randomised gate which will route the kick drum between all of the different machines, although it is not yet connected.


It is hard to predict how well the network will cope with sending so much audio. If there is only a little latency then this may contribute an interesting unpredictability to the piece and work in a similar way to Phil Kline's multiple cassette recorders. However, if the network cannot cope with this much audio then the effect may be unpleasant and make the music completely incoherent and unlistenable. If this is the case I will need to reconsider my options. One possibility is sending only control information via the network and having the audio files stored on each computer. This would still allow me to control certain aspects from a central computer. Another possibility is to change the nature of the composition. I could use fewer channels and machines and smaller sections of audio. It would be interesting to explore aspects of Riley's In C and the In B Flat piece, which allow you to build up combinations of audio that all work together but lead to a different result at each performance.
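The control-only fallback described above would mean broadcasting very small messages rather than audio. A sketch of what those messages might look like is below; the text format is invented purely for illustration (it is not an existing protocol), but it shows how little data would need to travel compared with twelve audio streams.

```python
# Sketch of the control-only fallback: the master machine sends tiny
# text messages and each computer plays its locally stored audio.
# The message format here is a made-up example.

def control_message(command, value=None):
    """Build a simple text control message, e.g. 'volume 0.8'."""
    return command if value is None else f"{command} {value}"

# Messages the master might broadcast to every receive patch:
print(control_message("start"))        # start
print(control_message("volume", 0.8))  # volume 0.8
print(control_message("speed", 1.0))   # speed 1.0
```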

Another aspect that I am keen to add to my piece is a visual one. I would like the screens of each machine to light up when the machine is playing a sound. The video below shows two examples of networked computers playing music; the second example uses the screens to add a visual element.

Friday, 31 January 2014

8th Post - New Version = New Ideas

This week I tested a new version of the piece which I developed after the previous test (see last post). This version is much more rhythmic and contains a kick drum and hi-hats that provide a steady pulse for the more abstract found sounds and melodic parts. It also had 12 different sine waves which built up and then became randomised. For this test I bounced 12 separate versions for 12 machines, each of which had unique sounds and some shared sounds. I wasn't able to upload them all to Soundcloud in time, so it was necessary to import the bounces onto the computers from a memory stick. This was very labour intensive, and there were only 5 people at the tutorial so some people had to start 3 computers. As a result some machines started significantly later than others. However, the overall effect was still musical and interesting.

The found-sound rhythms again worked well; these sounds were each routed to a different machine, as were the sine-wave tones. This gave a feeling of movement around the space, as the computers are arranged around the edge of the room, which is a large rectangle. The kick drum and hi-hat sounds were each on an individual machine. This meant that there was no movement in these sounds, and that was a criticism from some of my peers. I also felt that if I pursued this direction I would try to make those sounds move too.

The next step for the development of this piece is to look into using Max MSP to network the computers. This will allow me to have elements of control such as volume, mute, and possibly speed of playback from a central computer. It would also allow me to build a patch that I could install on each computer, which would hopefully make performing the piece easier. Another idea I am keen to introduce to the piece is having a selection of material that audience members would be free to start and stop as they see fit. This idea is based on the Terry Riley composition "In C" and the internet collaborative project "In B Flat", which was conceived by Darren Solomon (see link below). This material would work on the same principle as both of these works in that it would be in the same key and would work in whatever combination it was triggered.

http://www.inbflat.net/

The trial version of the new piece is available below.



Tuesday, 14 January 2014

7th Post - 1st Test = New Ideas

On Monday (13th Jan 2014) I tested my piece, as it is (see 5th post), in Scott 105 using 9 computers. I was quite surprised by the results. It sounded very different to the approximation that I produced in Logic. Hearing it spread out around a large room on multiple machines was very useful, and ideas that I had ruled out from the prototype piece, such as complex rhythms and lower frequency sounds, now seem much more possible and even necessary.





Aspects that I felt worked well were the glitchy found sounds and the other percussion sounds that seemed to jump from machine to machine. This is something that I will build on. Sharp, short sounds are very effective as your ear can easily detect them appearing in different places in the room. I like the idea of creating complex rhythms that seem to jump around the room in sequences or at random, and I think there is a lot of potential for this.

The melodic ideas were fixed to particular versions and, although they sounded nice, they were a bit boring, but pad-type sounds may be useful to counterbalance the percussion. The glockenspiel worked quite well as it is semi-percussive and you could hear the delay-type effect between different machines.

Overall I felt that the piece became a bit boring after about 2 minutes and there was a lack of lower frequency sounds. The speakers on the iMacs can cope with frequencies down to around 100 Hz, so I can use some bass and kick drum sounds to remedy this. Another idea that struck me was building up chords or drones gradually with a single note on several different computers. I am definitely now thinking of writing separate versions of the same piece for up to 10 machines. David Strang suggested using sine waves to do this as they will interact with the harmonics and resonance of the space. He suggested looking into the work of Phill Niblock, who builds up multiple single tones to make dense, sustained drones. He describes his work on his website.

“I recorded tones played by an instrument (by an instrumentalist), arranging these single tones into multi-layered settings, making thick textured drones, with many microtones. In the early days, I prescribed the microtones, tuning the instrumentalist, when I was using audio tape. Later, I used the software ProTools, and made the microtones as I made the pieces.”

                                                                                                                                      Niblock (2013)
I like the idea of building up chords or drones using individual tones on separate machines, but I want to sequence them and have them move around the room in interesting ways, maybe moving in and out of sequence with the percussion sounds.
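One way to think about sequencing those tones is as a schedule: assign one sine-tone frequency per machine and bring the machines in one at a time, so the chord accumulates as it travels around the room. The Python below sketches that idea; the note choices (an A minor triad spread over a few octaves) and the machine count are illustrative, not from the final piece.

```python
# One sine-tone frequency per machine, entering one machine at a time
# so the chord builds up around the room. Frequencies in Hz; the chord
# and machine count are example values.
chord = [220.0, 261.63, 329.63, 440.0, 523.25, 659.25]

def build_up_order(machines, tones):
    """Pair each machine with a tone and return a schedule of steps:
    after step n, machines 0..n are sounding their tones."""
    schedule = []
    sounding = []
    for machine, tone in zip(machines, tones):
        sounding.append((machine, tone))
        schedule.append(list(sounding))  # snapshot after each entry
    return schedule

steps = build_up_order(range(6), chord)
print(len(steps))  # 6 steps in the build-up
print(steps[0])    # first step: only machine 0 sounding
```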


I think my piece is starting to move away from the original idea of the clouds of sound which are a feature of Phil Kline's work, and is becoming a composition for multiple machines that will have aspects of a surround-sound piece. However, I still want to incorporate some of the original ideas as well as the newer ones. Another consideration is adding another layer of control by networking the computers and having access to volume, mute and speed of playback from one master computer. This would require some work in Max MSP but it should not be too complicated, and it would also allow playback of the different parts of the composition from the Max patch on each computer. The audience members could select a part from a drop-down menu, either as instructed or at their own choice, depending on how the piece develops.

This aspect of a hidden layer of control could bring an interesting dimension to the work, possibly similar to the competitive compositions of John Zorn or Iannis Xenakis, where players try to gain control of the piece. If I have control of volume and mute, perhaps the people sitting at the computers will notice and turn the volume back up.

There is certainly a lot to consider but my main focus is still the composition.


Thursday, 5 December 2013

6th Post - Ideas for Controlling The Piece

I have also been thinking about how this will be performed and how much control I want to have over the performers/audience. At the moment there are 3 different versions that will need to be divided up fairly evenly between the audience. This could be done by assigning each person a number that would relate to the version they would need to download. Once everyone has their music downloaded to their device it would simply be a case of "1, 2, 3, go!" and the piece would play out on the phones.

This is how I plan to test my prototype, as I want to keep it as simple as possible to start with. However, in later versions I would like to bring in sounds from other devices such as the computers in Scott 105 (our teaching room for music tech at Plym Uni). It would be interesting to have people starting the new audio at different points in the performance, and this could be done via a colour-coding system for the computers and a message projected onto the screen of the classroom.

Another idea would be to make lots of different versions of differing lengths and let people re-trigger their audio as and when they want. This would make the piece a bit like Terry Riley's In C and would make the end result even more unpredictable. 

I think my ideas about control and organisation of the piece will develop naturally as the piece develops. For now the most important thing is testing it in a live situation with multiple devices.