This week I started work on an eight-week schedule of learning and making through exploring new software, as outlined in my application for the professional development bursary (which I am very grateful to have been awarded, thanks a-n!). My overall aim for this project is to work towards creating virtual environments that are responsive to reality in such a way that artworks (or elements of artworks/performances) can exist inside them: new spaces for art that utilize ubiquitous hardware (phones, tablets, laptops).

Initially I have been exploring the potential of a piece of software called Z-Vector, which 'samples reality' using a Kinect sensor, combining its 3D data with video and audio. I have uploaded a short video showing a test of live performance using combinations of settings I have developed over the past few days.

https://vimeo.com/162527067



Having come to the end of the self-directed programme I set out to complete for the a-n professional development bursary, here are some reflections and outcomes.

I am pleased to have had the time to research and learn new digital processes. As well as building new skills, this has resulted in the development of a platform that I can use to project 'feedback loops' from live 3D depth maps as part of performances and exhibitions. I was able to test this system at two events during this time; many thanks to Grasslands and Bedlam for providing these opportunities. This was also possible due to the hardware, software and tutorials I was able to access through the funding from a-n. I found the mentoring element (thanks Hellocatfood) particularly useful in terms of providing feedback at a critical time, references, and the opportunity to meet practitioners relevant to the current direction of my work. These elements combined have helped make collaborations and interactions with other artists (and spaces) more achievable, and I hope to complete further related projects in the near future.



Following two weeks of concentrated work on the programme I outlined for the a-n PD bursary, I was able to put the learning and work made in this time into practice at a music night in Wolverhampton put on by local promoters 'Bedlam'. Exploring the context of such an event as a platform for a time-based performance, I wanted to alter the conventional performer/stage/audience relationship by performing in a way that was affected by the dimensions of the venue and the people present. Using a PC application I have developed, in which two Kinect scanners work together to produce a 'third' aggregated 3D space on screen, a visual feedback loop is created where the audience can recognise and look back at themselves as part of an extended performance space. I used my own body movement to control one of the scanners and affect this space, connecting further with the audience. Pre-recorded sound was triggered and played through digital reverb effects that formed the sonic boundaries of this virtual environment.

The other aspect of the performance was to exhibit another virtual space at a different scale and sound level, for each member of the audience individually, by presenting a small handheld display for 20-second intervals. The effect was to highlight the acclimatisation to a feeling of multi-presence that comes from engaging with multiple screens simultaneously. Both aspects of the performance create an uncanny space by combining digital screens, sensors, glass and sound in a way that is unfamiliar but still speaks to our senses of perception.
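For anyone curious about the aggregation idea, here is a minimal sketch in Python of how two depth maps might be back-projected into 3D and merged into one shared space. It is not the actual application (which was built separately); the intrinsics, the rigid transform for the second scanner, and the synthetic depth frames are all placeholder assumptions standing in for whatever Kinect library provides the live data.

```
import numpy as np

# Placeholder Kinect-style depth intrinsics (assumed values, not calibrated)
FX, FY = 570.0, 570.0
CX, CY = 320.0, 240.0

def depth_to_points(depth_mm):
    """Back-project a 640x480 depth map (millimetres) to an Nx3 point cloud in metres."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32) / 1000.0   # mm -> metres
    valid = z > 0                              # drop pixels with no depth reading
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    return np.stack([x[valid], y[valid], z[valid]], axis=1)

def merge_scanners(depth_a, depth_b, rotation, translation):
    """Combine two scanners' depth maps into one aggregated point cloud.

    The second cloud is moved into the first scanner's coordinate frame by a
    rigid transform (rotation matrix + translation vector), then concatenated.
    """
    points_a = depth_to_points(depth_a)
    points_b = depth_to_points(depth_b) @ rotation.T + translation
    return np.vstack([points_a, points_b])

if __name__ == "__main__":
    # Synthetic stand-ins for live Kinect frames, just to exercise the functions.
    depth_a = np.random.randint(500, 4000, size=(480, 640)).astype(np.uint16)
    depth_b = np.random.randint(500, 4000, size=(480, 640)).astype(np.uint16)
    # Assume the second scanner sits two metres to the side, facing the same way.
    rotation = np.eye(3)
    translation = np.array([2.0, 0.0, 0.0])
    cloud = merge_scanners(depth_a, depth_b, rotation, translation)
    print(cloud.shape)  # the aggregated 'third' space as one point cloud
```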

4min video edit: https://vimeo.com/175113818

Top image: performance setup and main screen

Below: handheld screen box (animation played through glass semi-sphere)



Last Saturday I was able to take much of the learning and the systems I have developed through my self-development programme and put them into practice at a one-day residency at Grasslands, a project led by Dan Auluk. Sharing the residency with two other artists, I was able to extend the project space through live projection and 3D output from Kinect scanners in a way that captured fragments of their presence and output. The projections were also interacted with by other visitors to the space.

https://twitter.com/oscar_c_d/status/739142250584756224

I further documented the space with 3D scans, for future use and to be part of an extension of the Grasslands project through an experimental online space.



Having followed an online course on 'machine learning', I have been able to use some of what I learned to manipulate Kinect and Leap Motion data, creating visuals that respond to an environment through the 3D depth data provided by the Kinect, to gestures captured by the Leap Motion, and to the level and pitch of audio input. Here is a sample of one of the presets I have generated, projected onto a makeshift environment made from found materials.
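As a rough illustration of the audio side of this (my own simplification, not the exact patch used), the sketch below pulls a level and an approximate pitch out of an audio frame and passes them through a simple learned linear mapping to a couple of visual parameters, in the spirit of the regression-style mappings that tools like Wekinator offer. The feature choices, training pairs and parameter names are all hypothetical.

```
import numpy as np

SAMPLE_RATE = 44100

def audio_features(frame):
    """Return (level, pitch_hz) for one mono audio frame.

    Level is RMS amplitude; pitch is approximated as the strongest FFT bin,
    which is crude but enough to drive a visual parameter.
    """
    level = float(np.sqrt(np.mean(frame ** 2)))
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / SAMPLE_RATE)
    pitch_hz = float(freqs[np.argmax(spectrum[1:]) + 1])  # skip the DC bin
    return level, pitch_hz

def fit_mapping(inputs, outputs):
    """Least-squares linear mapping from feature vectors to visual parameters,
    trained from a handful of example pairs recorded by hand."""
    X = np.hstack([inputs, np.ones((len(inputs), 1))])   # add a bias column
    weights, *_ = np.linalg.lstsq(X, outputs, rcond=None)
    return weights

def apply_mapping(weights, features):
    """Map one feature vector to visual parameters using the trained weights."""
    return np.append(features, 1.0) @ weights

if __name__ == "__main__":
    # Hypothetical training pairs: (level, pitch) -> (point size, colour hue)
    examples_in = np.array([[0.01, 110.0], [0.2, 440.0], [0.6, 880.0]])
    examples_out = np.array([[1.0, 0.1], [4.0, 0.5], [9.0, 0.9]])
    weights = fit_mapping(examples_in, examples_out)

    # One synthetic sine-wave frame standing in for live audio input.
    t = np.arange(1024) / SAMPLE_RATE
    frame = 0.3 * np.sin(2 * np.pi * 440.0 * t)
    print(apply_mapping(weights, np.array(audio_features(frame))))
```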

https://vimeo.com/166276820

