
The invention works only when the viewer stands in a particular location. I wanted a way of showing it at the Fusebox, so I made a sequence for the corridor outside the office. New England House, built in 1963, was the first purpose-built industrial complex. Its wide corridors and industrial lifts are characteristically brutalist. Neil and I used footage of the performer Mim King swimming underwater that I’d shot in 2011 for Flickers: Under the Water for White Night, Brighton. Now we had a woman swimming down the corridor and disappearing through the door.

I was given the opportunity of speaking and demoing at the Virtual Reality and Heritage or GLAM (Galleries, Libraries, Archives and Museums) VR Meetup at Fusebox. This would happen in the evening, but how could I demo at night? Light levels affect how the image merges with the real environment. Joe from Wired Sussex suggested I talk to resident Dr Esin Yavuz, Co-Founder and Chief Scientist of Cyanapse, who are building an automated image editing platform which includes AI-powered photo filters. Would it be useful for the viewer to set the lighting of the scene, depending on when they viewed it? Though this will be something really useful for the future, for the time being we went low-tech: I took a night-time photo of the corridor and Neil composited the swimmer into the new background.
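For the curious, the "set the lighting yourself" idea could begin as something very simple. The sketch below is only an illustration of the principle, not anything we built and not Cyanapse's platform: it uses the Pillow imaging library in Python to push a daytime background plate towards an evening look, and the file names and adjustment values are invented.

```python
from PIL import Image, ImageEnhance

def relight_background(path, brightness=0.4, warmth=0.9):
    """Darken and slightly desaturate a daytime photo to suggest evening light.

    brightness < 1 darkens the plate; warmth < 1 reduces colour saturation,
    which reads as cooler artificial lighting. Both values are placeholders
    to be tuned by eye against the real corridor.
    """
    img = Image.open(path).convert("RGB")
    img = ImageEnhance.Brightness(img).enhance(brightness)
    img = ImageEnhance.Color(img).enhance(warmth)
    return img

# Hypothetical file names for illustration only.
night_plate = relight_background("corridor_day.jpg")
night_plate.save("corridor_night_approx.jpg")
```

A viewer-facing version would simply expose those two sliders, so the background could be nudged to match whatever the light was doing at the moment of viewing.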

I wanted to make a series of sequences that viewers could follow around the building, so I set about trying different settings for our existing footage of underwater Mim. We tried her swimming across the lightwell towards the viewer and floating in the massive industrial lift. While there are all the problems of scaling the figure to look right in the setting, which Neil managed to solve as usual, the biggest challenge was that I couldn’t get the foreground and the background to line up in terms of scale. Both these new sequences had window frames in the near foreground, and though I took photos at many different focal lengths it remained difficult to match up the photos with what I could see with my live eye. As I understood it, a 50mm lens should mimic our vision. But this wasn’t working.
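Part of the reason the 50mm rule of thumb let me down is that a lens's angle of view depends on both the focal length and the sensor size, and whether the photo then matches the live view also depends on how large the final image appears and how far from the eye it sits. Here is a quick back-of-the-envelope sketch of the geometry (assuming a full-frame sensor 36mm wide; the figures would shift for a phone or crop-sensor camera):

```python
import math

def horizontal_fov(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal angle of view for a rectilinear lens on a given sensor width."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

for f in (24, 35, 50, 85):
    print(f"{f}mm lens: {horizontal_fov(f):.1f} degrees")

# A 50mm lens on a full-frame sensor covers roughly 40 degrees horizontally,
# far narrower than the human visual field. The photo only "looks like" the
# live view when it is displayed at a size and distance that subtend the
# same angle at the eye as the original scene did at the camera.
```

So near foreground objects like window frames are especially unforgiving: a small mismatch in viewing distance or image size shifts their apparent scale against the background much more than it does for distant walls.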

Attempting to make these sequences had revealed a couple of excellent and interesting problems.

How can a photograph accurately match how our eyes see the scale of near foreground and background objects?

How can the viewer change the lighting of the background to fit the light levels when they view it?

I spoke at the VR Meetup (49:51) and demoed the corridor sequences. As expected, people had difficulty lining up the lightwell sequence, but the original corridor one seemed to work. I chatted to fellow speaker Dr Abigail Wincott of the University of Brighton, who spoke about her project Towards a Museum of Sound: Using location based apps to create a collaborative, creative methodology to develop and evaluate sonic heritage.
