For the past six months I have been learning how to use motion sensors, remote cameras and face recognition software to create my first interactive artworks.

This blog outlines some of the progress I have made. I have kept it fairly high-level but please do get in touch if you have any questions or queries or if you would like to compare notes. I’m interested to hear how others have got on and to connect with other creative coders.

Many thanks to a-n for the bursary that made this possible.

Supported by a bursary from a-n The Artists Information Company.



I have also been developing Shutterbug, which comprises a small motorised camera mounted on a Raspberry Pi. It wakes up every 10 minutes, takes 50 random pictures of its surroundings and stitches them together into a digital collage of what it ‘sees’. The results are fed back into the gallery space to provide a real-time digital eye on proceedings. I have shown this work twice now, once at the ALL TEN:th anniversary exhibition and once at the Cambridge PiJam, and I have been surprised at how much interest it has generated.
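
For the curious, the shape of the code is roughly this. It is a simplified sketch rather than the real Shutterbug code: gpiozero for the servo, picamera for the capture and Pillow for the collage are illustrative choices, as are the GPIO pin, the angles and all the sizes.

```python
# A simplified sketch of the Shutterbug idea, not the real thing:
# wake up periodically, point a servo-mounted camera at random angles,
# take pictures and paste them into a collage. The libraries (gpiozero,
# picamera, Pillow), the GPIO pin and all the sizes are illustrative.
import random
import time

from gpiozero import AngularServo
from picamera import PiCamera
from PIL import Image

pan = AngularServo(17, min_angle=-90, max_angle=90)  # hypothetical pin
camera = PiCamera(resolution=(640, 480))

SHOTS = 50
SHOT_SIZE = (320, 240)
COLLAGE_SIZE = (1920, 1080)


def take_collage():
    """Take SHOTS random pictures and stitch them into one image."""
    collage = Image.new("RGB", COLLAGE_SIZE)
    for _ in range(SHOTS):
        pan.angle = random.uniform(-90, 90)  # look somewhere at random
        time.sleep(0.5)                      # let the servo settle
        camera.capture("/tmp/shot.jpg")
        shot = Image.open("/tmp/shot.jpg").resize(SHOT_SIZE)
        x = random.randint(0, COLLAGE_SIZE[0] - SHOT_SIZE[0])
        y = random.randint(0, COLLAGE_SIZE[1] - SHOT_SIZE[1])
        collage.paste(shot, (x, y))
    return collage


while True:
    take_collage().save("/tmp/collage.jpg")  # picked up by the display
    time.sleep(10 * 60)                      # wake up every 10 minutes
```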

This is my final post for this blog as my a-n grant comes to a close. Thanks to a-n for the opportunity to learn so much! I would love to hear from others who are also working with Python code and interactive technology as part of their practice.



I have just realised that the pics I used in my last post about face recognition could not have been from the sensor I was referring to (the one with its own built-in face recognition circuitry), because that sensor did not have a camera. It could not take any pictures. It could only receive a visual input, assess that input, and send a signal if it thought there were any faces present.

So this post is where those pictures should have been: they were taken by a camera, then each picture the camera generated was assessed to see if there were any faces present, and green bounding boxes were drawn round the part(s) of the picture considered to contain faces. A subtly different way of doing things.

This two-step approach requires 1) setting up a camera and being able to get it to take a picture and then 2) assessing that picture for faces. Neither step is super-tricky, but both are at the fiddly end of the Python coding spectrum. To put it another way, I needed some help with both stages (thanks Dr Footleg!).
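
Here is a minimal sketch of the two steps, assuming the picamera library for the capture and OpenCV’s stock Haar cascade for the detection. These are illustrative choices; other libraries would do the same job.

```python
# Step 1: take a picture. Step 2: look for faces and draw green boxes.
# Assumes the picamera library and OpenCV's bundled Haar cascade.
import cv2
from picamera import PiCamera

# Step 1: fire the camera from Python code.
camera = PiCamera()
camera.capture("snap.jpg")

# Step 2: assess the picture for faces.
image = cv2.imread("snap.jpg")
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
faces = cascade.detectMultiScale(grey, scaleFactor=1.1, minNeighbors=5)

# Draw a green bounding box round each part of the picture that is
# considered to contain a face.
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("snap_faces.jpg", image)
```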

So the above image is the very first picture that I managed to take automatically by using Python code to fire the camera. As you can see, it is upside down: I had yet to work out how to rotate the image. You can also see that it was late into the night and I am looking a bit incredulous – it is finally working!
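
For anyone hitting the same problem: assuming the picamera library (other camera stacks have their own equivalents), the upside-down issue turns out to be a one-line fix.

```python
from picamera import PiCamera

camera = PiCamera()
camera.rotation = 180  # turn the image the right way up before capturing
camera.capture("snap.jpg")
```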

Later I managed to get better pictures, and then add the face recognition analysis (see previous pics!). Complicated? Yes, fairly, but by no means impossible. Reliable? Not totally. Worth doing? Well, you’ve got to start somewhere.



Let’s face it, sorting out how to achieve face recognition sounds daunting – but it is surprising just how much is relatively plug and play these days.

For example, a friend at Cambridge MakeSpace told me about a motion sensor that has some elementary face recognition built into the chip itself. Wire it up to the right GPIO pins on your Raspberry Pi and you can, in theory, read off a set of numbers that tell you whether there is a face in the frame, what size it is, and what degree of confidence the sensor has that it is indeed a face and not an odd pattern on the carpet. It can even tell the difference between a head facing forwards and a head facing away.
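
I won’t name the sensor here, so the following is purely a shape-of-the-code sketch: the I2C address and the five-byte reply layout are invented for illustration and will differ on any real sensor.

```python
# Purely illustrative: reading a hypothetical face-detecting sensor
# over I2C. The address (0x62) and the five-byte reply layout are
# made up for this sketch.
from smbus2 import SMBus, i2c_msg

SENSOR_ADDRESS = 0x62  # hypothetical

with SMBus(1) as bus:  # I2C bus 1 on the Pi's GPIO header
    reply = i2c_msg.read(SENSOR_ADDRESS, 5)
    bus.i2c_rdwr(reply)
    face_count, x, y, size, confidence = list(reply)
    if face_count:
        print(f"face of size {size} at ({x}, {y}), "
              f"confidence {confidence}%")
```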

This sounded perfect for me – for Catatonic I wanted to be able to tell when someone was moving away from the installation so that I could lure them back with more cat pics.

Unfortunately the sensor was out of stock almost everywhere, and reading the reviews it seemed to have reliability issues. But I did manage to track one down and install it – and it worked! I was almost impressed; however, it only worked over a range of a metre or two, and it was only reliable about 80% of the time. So my punters would have to be pretty close to trigger a response, and there would be a fairly high chance that I wouldn’t notice when they moved away.



A simple PIR (passive infrared) motion detector

Basic motion sensors are surprisingly small and fiddly. They need to be connected to GPIO pins on the Raspberry Pi, which you can do using simple leads that push into place, or you can solder the joints for a more permanent connection. Soldering is also fiddly.

Once you have your wiring in place (ensuring that you get the connections correct to avoid a power surge and subsequent sensor meltdown) you can write some Python code to interrogate the status of the GPIO pins and find out whether the sensor has sensed anything. If it has – bingo, you are on your way. What would you like to do with this information? Now you are back on terra firma and *just* have to do some creative coding that responds to the movement. Game on.
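
In practice the interrogation can be as short as this. It is a sketch using the gpiozero library (my choice for illustration), and the pin number is an assumption that depends on your wiring.

```python
# Minimal sketch using the gpiozero library; GPIO pin 4 is an
# assumption that depends on how the sensor is wired up.
from gpiozero import MotionSensor

pir = MotionSensor(4)  # PIR output wired to GPIO 4

while True:
    pir.wait_for_motion()
    print("Movement detected – over to the creative coding...")
    pir.wait_for_no_motion()
```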

The sensors that I tried were only moderately or intermittently sensitive. Sometimes it seemed as though you could dance a waltz in front of them and nothing would happen; other times the merest twitch would set them off. This made it difficult to generate consistent responses for even relatively simple scenarios, such as the presence or absence of a viewer in front of the sensor. For more complex scenarios, such as detecting when the viewer is moving away from the sensor, the success rate was lower still.


In truth I hit loads of snags along the way – it is the nature of programming to have things fall over. Searching for the error online often provides a fix, but there were several times when I reached the outer limits of my own technical capabilities. This is where being part of a community of coders helps. I was amazed at how knowledgeable others are and how deeply they understand what is going on.

For example, I needed to refresh the screen with cat pictures, but the Raspberry Pi couldn’t cope – it ran out of memory and the refresh rate got slower and slower. I thought this must be an absolute limit of the Raspberry Pi and that I would need to use something more powerful to run the code, but it turned out that I just needed to empty the trash. Including a line to explicitly clear down the memory after every cat display solved my problems.
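
Roughly what that looks like, sketched here with a pygame display loop (my assumption; it is one common way to put pictures on a Pi screen, and the "cats/*.jpg" folder is hypothetical). The clean-up lines at the end of the function are the important part.

```python
# Sketch of the fix, assuming a pygame display loop. The key point is
# the explicit clean-up after every picture is shown.
import gc
import glob
import itertools
import time

import pygame

pygame.init()
screen = pygame.display.set_mode((0, 0), pygame.FULLSCREEN)


def show_cat(path):
    cat = pygame.image.load(path).convert()
    screen.blit(pygame.transform.scale(cat, screen.get_size()), (0, 0))
    pygame.display.flip()
    # The fix: explicitly drop the surface and empty the trash so the
    # Pi doesn't slowly run out of memory as the pictures pile up.
    del cat
    gc.collect()


for path in itertools.cycle(glob.glob("cats/*.jpg")):
    show_cat(path)
    time.sleep(5)
```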

This may sound fairly straightforward (and in some ways it was) but things are only ever easy if you know the answer; otherwise, minutes can turn into hours and hours into days whilst trying to solve what seems to be a trivial problem.

The alternative would be to pay someone to code everything for me; and this is tempting, except that (a) I don’t have the money and (b) I think it is important to get to grips with some of the code myself. Some of the best outcomes I have created have come from unexpected glitches during my search for a solution.

