
Michael from Fabrica has taken the camera away, to try to remove the infra-red filter. This would be lovely if it worked. I bought a huge set of infra-red lights that I hadn’t budgeted for, and so far have not seen a noticeable difference. The camera sees the lights, but not at the intensity I would have hoped for. We think the infra-red filter may still be blocking a lot. So today, it will be interesting to see whether we get back a ruined camera or an unblocked one. Either way, these are the things we have to work through this week.

I dreamt about work all night last night and jumped out of bed anxious – I still need to look at a lot more video tracks. I need to reflect a bit more on what we are understanding about the video that is there. We need to think about ways of displaying Pixy that best suit the video content. At the moment, with its wood backing, it’s not easy to move things around. I am hoping today we can get a bit more sit-and-look time.

I need to edit in the last footage I have shot, and get the timings and pacings right. Some of my shoots now run about two hours long – which leaves an incredibly diverse database to grab from. However, these don’t work so well on Pixy – but I still want to play with scales; maybe we just concentrate on the eyes, for example.

Each time I do a render, it takes about 4 hours for the small screen – about 13 hours for a bigger file. The file sizes are crazy.



It’s been a busy day. Trying to set up the cameras, getting the right footage shown – it seems the screen likes to show men (they sort of sway, which suits the screen, whereas women seem to nod, which doesn’t). This screen likes movement – but movement of a horizontal nature. That limits me to two male tracks (I have shot about 35 portraits now). Lately I have just been working with what is around me – shooting with one light on a black background. The Pixy display doesn’t like the shadows this causes. So, there is not a great database to choose from.

We had the work open at Lighthouse today, from 3-5pm. We had a few people through and discussed the work. It was interesting to see everyone playing with the technology. Things worked – things didn’t – people seemed to enjoy it though. It will be interesting to see the feedback from the evaluation that starts on Wednesday.

Jamie Wyld spent a lot of time in front of the camera. Interestingly, the camera interpreted him as sad, while everyone else read as happy. I think in the end Jamie and the computer both got happy, but it took a while.

The Pixy likes faces, semi-close-up, with sideways movement. How do you direct for that? ;) Who knows – but we have footage to work with. It will be nice to see the second screen up and running, and to see more group interaction or contagion.

Jane and Karl were in again today – it’s been great having them about to discuss ideas with. Jane has been posting to the blog as well.

Michael from Fabrica gallery came in – we discussed the work and how to move ahead. It will be a busy week of testing, further building and finding a balance that suits Natacha, Michael and myself.

We had the work crash three times today. Not a good start. It seems to be something between the mind-reading technology and the Pixy screen. I have sent it on to Jeff to have a look at.

Jeff has sent back the secondary screen option – so we can run two screens from the one computer. I will install it tomorrow. I haven’t heard from Gordon, but I am hoping we will have two screens to look at tomorrow.

Had a discussion with Natacha and Michael – about what they want out of this collaboration – about how they see things extending the screen.

Camera placement is still an issue. It’s like you should walk around with it – and maybe this is something that could work, but it can’t… We need the lighting to follow you around as well. In my search for more naturalistic and fluid interaction I seem to have hit a few problems.



JANE MCGRATH

Today I’m back at The Lighthouse and it was really interesting to see the Pixy up and running. It is quite startling to see how much the human mind can read and interpret from a limited number of pixels. This makes me think a great deal about inference and suggestion. I am planning to design, build and programme a series of ‘bio-digital’ chairs, and the power of suggestion, contamination and transference is key. I am now encouraged to explore the notion that less can certainly be more, or at least ‘as much as’…

Some faces worked better than others: some glided across the screen with ‘intense’ detail, while others seemed to sit back and get lost. A man’s back could be read in 3–6 pixels!

The notion of spaces in-between and in-between spaces is cropping up – in the physical sense of this work, i.e. the spaces between the pixels, the spaces between the batons, the spaces between the viewer and the screen. The experience of walking into the Pixy screen is also interesting. For me the experience of being inside the screen was ‘live’ when I was face on to the pixels (with my back to the audience). When I turned forward I saw only the backs of the batons and I was disappointed – I am left wanting to see pixels all around. Now I have this strange urge to run in a forest of pixels, knocking into them and seeing faces and figures everywhere I turn. Maybe I have a weird, deep-seated urge to be inside a machine??

We had an interesting chat about the space between the action on screen and the reaction of the user, and how the software reacts to the user’s face – about the moments when we are left waiting. I love the idea of working with that (un)comfortable waiting, those odd frozen moments. Tina explained the timings of the video reactions, how these are programmed, and what work is ongoing at MIT regarding further development. It’s fascinating.

I want to use my practical work to dig deeper into those ‘moments’ – those spaces of ‘pure potentiality’ – moments when there is a temporal suspension. When magic happens – or maybe doesn’t. When we must wait and see.

After a very interesting lecture by Jonathan Gilhooly at Brighton Uni, we were talking about film (the old-fashioned stuff) and the spaces between the frames. We wondered, when watching a film, how many blank spaces of the film strip we actually see (even if the brain does not register them, they still exist). I’m sure it’s a very high percentage of blank space that we filter out… I would like to explore those spaces – who knows what films they could hide. Also, what happens in the digital realm, in the spaces between the pixels? The Pixy is a great place to explore, and I’m really excited to see it up and running.



Spent the day editing video from this week’s shoots – trying to take my voice out of the soundtrack, and then importing it into After Effects to get the size and the levels right. It has taken ages, but I have done two shoots now – and tomorrow I will begin on the edit points – lists of in and out points.
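(For anyone curious, those edit-point lists are nothing exotic – just paired in and out timecodes for each shoot. Here is a minimal sketch of the idea in Python; the HH:MM:SS format and the example values are my own illustration, not the actual project files.)

```python
# A minimal sketch of an edit-point list: each entry is an (in, out)
# timecode pair for one segment of a shoot. The HH:MM:SS format and
# the example values below are illustrative, not the real project data.

def to_seconds(tc: str) -> int:
    """Convert an HH:MM:SS timecode to a count of seconds."""
    h, m, s = (int(part) for part in tc.split(":"))
    return h * 3600 + m * 60 + s

# One shoot's edit points (hypothetical values).
edit_points = [
    ("00:04:10", "00:05:02"),
    ("00:17:45", "00:18:30"),
]

for tc_in, tc_out in edit_points:
    duration = to_seconds(tc_out) - to_seconds(tc_in)
    assert duration > 0, f"out point {tc_out} precedes in point {tc_in}"
    print(f"{tc_in} -> {tc_out} ({duration}s)")
```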

Beautiful day in Brighton. The sun was shining – the beach was packed. My son loved it. We took him to the park by the beach and he played in the paddling pool.

Michael and Natacha came to dinner. Matt cooked Moroccan lamb. We didn’t talk about work at all. That was lovely. It was great just to hang out and not think about it.



Over the Lighthouse Residency, an aim is to work on two more sculptural versions of Chameleon Prototype 08. We are also working with Kim Byers and Nadia Berthouze from the HCI lab at UCL to test how people respond ’empathically’ to the different screens. Gordon Brand was at Lighthouse for the day yesterday, and we tested different shapes to project upon. We were in the secondary studio at Lighthouse – the same studio I have been using to shoot new digital portraits. Gordon thinks that by Tuesday afternoon he might have three secondary prototypes worked out. By the end of yesterday, we were discussing a screen with three layers: a transparent back layer with a slight concave shape, feeding into a translucent flat white surface, which in turn meets a concave white surface joining the flat surface in the middle.

I need to send Gordon some video to project onto the experimental shapes – I need to do this tomorrow. I am hoping that Gordon can deliver the screens on Tuesday afternoon.

I am having trouble with the cameras for the face-reading technology… My FireWire cameras don’t work with it. USB cameras do work – but they don’t like low light. I have bought a huge range of infra-red lights, but so far, other than when I put them directly in front of the face, I can’t see a difference. I was hoping for more of a ‘flood’ of infra-red light. So far: not working. At all.

Gordon Brand has bought some pencil cams to test, and we are hoping to embed these in the screen. The placement of the cameras is difficult. Ultimately, the cameras want to be right in front of the face, which of course doesn’t work conceptually at all: it is distracting, it blocks the image, and more besides. I don’t know how we will ever work through this. We need a zoom camera – more resolution, manual zoom, a USB camera that copes well in low light. I have gone through about seven cameras now. I am not sure where, or if, the type of camera I want exists.

