Clark Bursary UK Digital Art Award

5th Clark Bursary

Come Closer

by squidsoup

Watershed Fri 11 June 2004 Workshop

with Gill and Laura, Dani, Alex and Paddy (Watershed), Constance (Bristol Uni) and us

Fun day - 33 10- and 11-year-old kids come over to learn about and experience 3D and stereo imaging. We literally invade Watershed's Waterside 1, 2 and 3 :-) The hordes are split into 3 groups, variously exploring Come Closer with the wearables or a joystick, and also making stereograms of themselves using a pair of digital cameras and Photoshop. The day wraps up with us all being treated to a slideshow of everyone in red/blue 3D on the big screen, followed by SpyKids 3D.
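
For anyone wanting to repeat the stereogram exercise outside Photoshop, here is a minimal sketch of the red/blue compositing step in Python with the Pillow library. The filenames are placeholders for the two camera shots, taken roughly an eye-width apart.

    # Minimal red/blue (anaglyph) compositing sketch - assumes Pillow is installed.
    # "left.jpg" and "right.jpg" are placeholder names for the two photos.
    from PIL import Image

    left = Image.open("left.jpg").convert("RGB")
    right = Image.open("right.jpg").convert("RGB")

    # Red channel from the left-eye shot, green and blue from the right-eye shot.
    r, _, _ = left.split()
    _, g, b = right.split()

    anaglyph = Image.merge("RGB", (r, g, b))
    anaglyph.save("anaglyph.jpg")  # view through red/blue specs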

Apart from a few headaches and people walking away cross-eyed, things went well. Interviews with some of the participants should be online soon...

Watershed Fri 28 - Mon 31 May (Bank Holiday Weekend)

4-day showing for Come Closer, along with Dane's LoveMatch. All goes well - lots of people and plenty of comments/feedback. Interesting chat with a behavioural psychologist - this is an area of collaborative research that I think would produce some interesting results.

Futuresonic Wed 28 April - Sat 8 May

Watershed very generously offers Dani's services for the duration of the show (nearly 2 weeks) which proves invaluable.

Several lessons learnt, mainly to do with self-sufficiency. DON'T rely on anyone else's network (especially if you are sharing a network with another piece of net art called WiFi Hog), ignore network managers when they say the network is fine, don't count on the internet, and put WiFi base stations as near as possible to the transceivers in wifi-heavy environments. After much stress, with Dani, Cliff and three squidies all pulling much of our remaining hair out, we restructure Come Closer to run off a local server and then, unbelievably, everything slots into place. One evening we are saying 'it can't be done - let's run it in automatic demo mode'; the next day - and for the rest of the show - everything works well.

Even the old Jornadas we're using run for 8-hour slots with no problem. The piece can be left to the Urbis invigilators for hours at a stretch.

Again, responses (at least written) seem positive. The 3D goggles lend the piece a fun/gimmicky edge which is no bad thing.

Futuresonic itself - interesting - several other pieces and discussions on 'locative media' (this is what we do, apparently). Check out "Sonic Interface" by Akitsugu Maebayashi (some details at http://www2.gol.com/users/m8/), also work by Steve Symons. Also good to see Tom Melamed peddling his Schminky wares (and helping us out again :-).

Talk to Bronac and Rachel from ACE - beginning to look at where next to take the piece, in terms of future development.

Fri 19 March - Wed 28 April

The rethink on Dandelions results in a combination of the mirror version of interactivity with the dandelion-scape. Avatars move around as bulges under a virtual blanket, creating new topographies and wind eddies that blow the dandelions around. This seems to work much better. Questionable aesthetic changes - the 'blanket' is a wireframe mesh. It adds to the sense of perspective but detracts from the visual purity. It contrasts quite effectively with the soft (relative) realism of the seeds though.
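
As a rough illustration of the idea (not the piece's actual Director code), the blanket can be thought of as a heightfield with a bump under each avatar, and the slope of that field as the wind that nudges the seeds. A minimal NumPy sketch, with made-up positions and constants:

    # Blanket-as-heightfield sketch: a Gaussian bulge under each avatar, with the
    # local gradient used as a wind eddy that pushes the dandelion seeds around.
    # Grid size, positions and constants are illustrative only.
    import numpy as np

    GRID = 64
    avatars = [(20.0, 30.0), (45.0, 12.0)]            # avatar positions on the grid
    ys, xs = np.mgrid[0:GRID, 0:GRID].astype(float)

    height = np.zeros((GRID, GRID))
    for ax, ay in avatars:                            # one bulge per avatar
        height += np.exp(-((xs - ax) ** 2 + (ys - ay) ** 2) / (2 * 6.0 ** 2))

    dy, dx = np.gradient(height)                      # slope of the blanket
    seed = np.array([32.0, 32.0])                     # one seed at the grid centre (x, y)
    ix, iy = int(seed[0]), int(seed[1])
    seed += 5.0 * np.array([dx[iy, ix], dy[iy, ix]])  # blow the seed along the slope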

Public Trials (Thu 11 - Thu 18 March)

Reduced to a single day at the end of a very long week. But finally, the last bit of technology falls into place and suddenly we have movement on the projector that corresponds to movement in the space. Major result. The paintbrush finally works - now to paint the picture.

Initial trials with a sound-only piece prove less than convincing, so we opt for an audio-visual approach, using the 3D specs and sound in a single piece. This makes sense for several reasons, but mainly the idea of using the projection as a mirror into an altered virtual space is definitely enhanced by the use of stereo vision - the impression of distance and space is important, and the idea works well with the sonic Come Closer idea.

By the trial day we have two separate pieces to show that use the same setup in different ways. 'Dandelions' uses the average position of everyone in the space to steer a collective way through a cloud of dandelion seeds; 'Mirror Mirror' creates an abstracted columnar avatar for each person in the space, and uses the positions of these as the starting point for a drone that focuses on the distances between people - gentle deep drones when people are near each other, grating louder higher pitches when they move away.
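
A quick sketch of the 'Mirror Mirror' mapping, purely to illustrate the distance-to-drone idea (the real piece runs in Director, and the frequency range and loudness curve below are made up):

    # 'Mirror Mirror' sketch: one drone per pair of people - low and gentle when
    # they are close, higher-pitched and louder as they drift apart.
    # The room size, frequency range and loudness curve are illustrative only.
    import itertools
    import math

    def drone_params(positions, room_diagonal=8.0):
        """Map each pair's distance to a (frequency in Hz, loudness 0-1) drone."""
        drones = []
        for (x1, y1), (x2, y2) in itertools.combinations(positions, 2):
            d = math.hypot(x2 - x1, y2 - y1)
            closeness = max(0.0, 1.0 - d / room_diagonal)  # 1 = touching, 0 = far apart
            freq = 60.0 + (1.0 - closeness) * 800.0        # deep drone -> grating pitch
            loudness = 0.3 + 0.7 * (1.0 - closeness)       # louder as people move away
            drones.append((freq, loudness))
        return drones

    print(drone_params([(1.0, 1.0), (1.5, 1.2), (6.0, 5.0)]))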

Reactions to the trial seem positive, judging from the comments book. Some 50 people tried the piece out.

Generally the 'mirror' idea worked better, as the connection between what each person does and the visual representation is clearer. Dandelions needs a rethink.

We KNOW that using the average position of several people in an active space doesn't work, as this is very close to a setup we used at the ICA (with another project, 'altzero') three years ago. Some of the reasons:

  • lack of awareness of others' physical movements
  • no direct correlation between what you do and what you see
  • the instinct to not get close to others (that is the core inspiration for this piece) results in only marginal changes to the average overall position of everyone in the space.

This once again proved that such a setup only works well with one person, or with two or more people physically tied together.
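
The arithmetic behind that last point is worth spelling out: averaging divides each person's movement by the headcount before it reaches the screen. A tiny sketch (numbers invented):

    # Why averaging washes out individual movement: with five people in the space,
    # one person stepping a full metre shifts the average by only 20 cm.
    positions = [0.0, 1.0, 2.0, 3.0, 4.0]    # five people along one axis (metres)
    before = sum(positions) / len(positions)
    positions[0] += 1.0                       # one person takes a big step
    after = sum(positions) / len(positions)
    print(after - before)                     # 0.2 m - barely visible on screen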

Posted 13 Feb 04

Come Closer: Another week of extremes. On Tuesday at about 3pm, Tom from Mobile Bristol turned our deep frustrations to instant ecstasy as the last link fell into place and we got the handheld iPaqs talking, via an Elvin server, to Director. Geeky, possibly, but very exciting. For the first time, the project seemed possible.
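
The chain, for the record: position data on the iPaq is published to an Elvin (content-based publish/subscribe) server, and a bridge feeds it into Director. The sketch below is a simplified stand-in using plain UDP rather than the real Elvin or Director APIs, just to show the shape of the messages flying around:

    # Simplified stand-in for the iPaq -> Elvin -> Director chain, using plain UDP
    # instead of the real Elvin protocol or Director's networking.
    # Host, port and message format are assumptions for illustration only.
    import socket

    HOST, PORT = "127.0.0.1", 9000

    def send_position(device_id, x, y):
        """The handheld side: publish 'who is where' as a small text message."""
        msg = f"{device_id},{x:.2f},{y:.2f}".encode()
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.sendto(msg, (HOST, PORT))

    def receive_position():
        """The Director side: a position ready to be processed and messed about with."""
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.bind((HOST, PORT))
            data, _ = s.recvfrom(1024)
            device_id, x, y = data.decode().split(",")
            return device_id, float(x), float(y)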

A long list of technical snafus excepted, the week was probably more positive than it felt. Headway was made on the visuals side, the idea and concepts are beginning to feel a bit more lived in, and a bigger picture is also emerging about how this fits into what we're doing on a broader scale, and where we can go with it.

The BIG problem with this project has been the number of technologies and people involved: Mobile Bristol, HP, Watershed, three of us... WiFi, iPaqs, C, ultrasound rigs, Elvin, Director... hopefully we are all getting our communications sorted out. There's a lot of work to do by 11th March, and we may not get the time to massage the thing as much as we'd like to, but it still seems that it'll run on the day. Cliff, Dani, Gill and Tom - we love you!

Data (over)flow: Feel mildly battered after last week (9-13 Feb). Our first milestone was reached, however, on Tuesday when we finally managed to get Elvin and Director formally introduced, and communicating. Hooray. Big up Tom and Dani. That meant Come Closer was off and running. Of course, Wednesday frustrated that when the ultrasound rig went down. What this did was highlight the need to stand back and try to preempt technological problems for DoF. Thoughts of technology leading the creative etc. are dismissed out of hand. Well, at least until the damn gadgetry works properly.

As we are now working on each of the projects consecutively rather than in tandem, we can plan DoF in a lot more detail before we go any nearer to starting production on it. Over the next few weeks we will think about exactly what type of data streams we want to pull into the project and how we can go about capturing these. Up until now our thinking has been to use a set of various environmental and atmospheric sensors. What might work better for us is to use the same way of capturing all data (e.g. microphones and webcams) but treat each stream in a different way. However, a likely testing session with the sensor kit might resolve this.
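
One way to picture the 'capture everything the same way, treat each stream differently' idea: every stream is squashed into a 0-1 value, but each gets its own interpretation before it drives the visuals. A hedged sketch with made-up ranges and stream names:

    # Sketch of treating uniformly captured streams differently for Data (over)flow.
    # Stream names, ranges and mappings are placeholders, not the final design.
    def normalise(value, low, high):
        """Clamp a raw reading into the 0-1 range."""
        return max(0.0, min(1.0, (value - low) / (high - low)))

    streams = {
        "bar_mic_level": normalise(0.42, 0.0, 1.0),      # noise level in the bar
        "webcam_brightness": normalise(130, 0, 255),     # light level from a webcam frame
        "ferry_in_shot": 1.0,                            # simple yes/no event
    }

    # Each stream then drives a different visual parameter.
    flow_speed = 0.5 + streams["bar_mic_level"]          # louder bar -> faster flow
    flow_colour = streams["webcam_brightness"]           # brighter day -> lighter palette
    burst_amount = streams["ferry_in_shot"]              # a passing ferry triggers a burst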

Posted 23 Jan 04

Data (over)flow
Problem: Tested the possibility of projecting from Watershed's Waterside 3 space onto the River Frome at night. Although we were only using a low-lumen projector, it quickly became apparent that there were some "issues" with our original idea. In other words, without a very powerful projector nothing would be visible when projected onto the quay wall on the opposite bank of the river. On top of that, we have the wrong kind of dirty water: green gunk absorbs ALL light, emitting nothing back for an immersive user experience.

Our fear is that we could spend a lot of time trying to get the thing working on a tech level, and that might impact on the quality of the actual content. We would rather spend longer on the fun bit that ultimately makes an idea work, or not.

Solution: Attach 2 video cameras to the exterior of Watershed and point them at the opposite bank, one with a blue filter, the other with red. They will be pretty close together (approx. the same distance apart as a person's eyes). These would create a view of our original projection area. However, this view will itself be projected inside the Watershed as our background. ON TOP of this we will then project our Data (over)flow (DoF) piece. This double projection (the video of outside plus DoF over the top of it) will, in turn, be filmed and then shown in realtime on other screens around Watershed (plasma, cinemas, bar). Overall, a viewer will get an abstracted experience of looking at the River Frome and what might be flowing through and around it at any time.
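
As a rough mock-up of the layering (not the production rig), the same effect can be tried by grabbing a live camera frame and blending a generated layer over it. The OpenCV calls below are real; the camera index, blend weights and placeholder graphics are not part of the piece:

    # Mock-up of the double projection: live view of the opposite bank as the
    # background, with a generated Data (over)flow layer blended on top.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)                  # stand-in for the filtered cameras
    ok, background = cap.read()
    cap.release()

    if ok:
        # Placeholder DoF layer - in the piece this would be the generated visuals.
        dof_layer = np.zeros_like(background)
        cv2.circle(dof_layer, (200, 150), 60, (255, 255, 255), -1)

        composite = cv2.addWeighted(background, 1.0, dof_layer, 0.6, 0)
        cv2.imwrite("composite.png", composite)  # in situ this goes back out to the projector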

Notes.
Data input from sensors including:

  • barometric pressure
  • humidity
  • light level

plus:

  • ferries passing through shot
  • mic in bar to capture noise levels
  • traffic light rhythms (via webcam pointing from back of WS towards bottom of Park St)

This solution will actually give us more scope to push what we do in terms of interpreting the data and giving viewers a fun, interesting insight into "sensor art".

Posted 22 - 23 Jan 04

Come Closer: Finally underway. 2 solid days feeling our way semi-blindly round the technical (and other) issues involved in this all-new wifi world. Cliff (Randell) incredibly helpful at leading us through this wireless spaghetti junction. Amazingly, he'd made a prototype of the idea based on our descriptions, using (among other things) an early-70s analog synth connected to a mini version of the ultrasound position-sensing rig. After long discussions we opted for a quad- or surround-speaker-based sound system, rather than headphones. To be honest, this is partly for technical reasons, but also to get a more natural and warm overall sound. Wearing headphones in a darkened room would also increase the anxiety/claustrophobia.

Down to Watershed with Cliff. The ultrasound rig works, kind of, but Cliff said it should be more accurate, so it was agreed that we needed to double the transducers to 8 to get better position fixes. Various other technical problems, mainly to do with networks (there are two and they don't like each other), meant that progress was slow, at least on the getting-stuff-to-work front.
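
To make the 'more transducers, better fix' point concrete: the rig times ultrasound pulses to fixed transducers, turns those times into distances, and solves for the position that best fits them - more transducers means more equations and a steadier fix. A least-squares sketch (NumPy, with invented coordinates):

    # Why extra transducers help: position is the least-squares fit to the measured
    # distances, so more transducers mean more equations and a steadier answer.
    # Transducer layout and distances below are invented for illustration.
    import numpy as np

    def locate(transducers, distances):
        """Least-squares 2D position from distances to known transducer positions."""
        (x0, y0), d0 = transducers[0], distances[0]
        A, b = [], []
        for (xi, yi), di in zip(transducers[1:], distances[1:]):
            # Subtracting the first range equation from each other one makes it linear.
            A.append([2 * (xi - x0), 2 * (yi - y0)])
            b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
        pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return pos

    transducers = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]  # ceiling units (m)
    distances = [3.4, 2.9, 2.9, 2.3]                                # ranges to the wearer (m)
    print(locate(transducers, distances))                           # about (2.4, 2.4)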

However, we do have a much clearer idea of what we want to get out of this project, and it does seem like it CAN be done (which is great because the other project we are planning CAN'T work in its initial form). We even have a plan for getting there.

We also talked again about an outdoor version, which starts to light a small spark. As GPS (the main currently available technology for location detection outside) is only accurate to several metres, but is almost national in scale, the idea of turning much of Bristol into a virtual theremin is intriguing.

Back to the indoor version: we are now ALMOST at the point where a location signal sent from an iPaq (handheld computer) in gallery 2 can be received by Director, ready to be processed and messed about with. For holding our hands this far, huge Big Up to Cliff and Dani :-)

We plan to work on various aspects of the project away from here, as the technology is fairly portable. Back on the 9 Feb for a full week.

Latest project description: It is said that we can 'feel' the presence of others. Come Closer, a new piece by digital artists squidsoup, uses wearable technology to make us aware of other people by letting us hear their presence.

Come Closer is about relationships and proximity, represented by sound in a darkened room. The closer two people get to each other, the more acutely aware of each other's presence they become. This may be comforting or disquieting. The space between them is filled with sound that is affected by their movement and position. With more people in a room, complex harmonies begin to appear and disappear, allowing scope for cooperation and confrontation, intimacy and rejection.

One becomes highly attuned to the presence of others, each subtle change in tone signifying movement. How close are these people? What is their intention?

A multiplayer theremin and intimate virtual experience, Come Closer plays with our sense of personal space and our sense of presence in the physical world.