The following post was written on the 15th November but due to an unforeseen event that involved my laptop’s hard drive dying, I wasn’t able to post it on here until now. The reason for the delay also strangely ties in with the theme of this post…


We felt the need to have another workshop day before settling on the direction we want to take and what we would like to make for this collaboration. We decided to look at programming sensors in this second workshop, and it turned out to be a full-on day of getting things to function properly and fixing problems rather than experimenting with the technology.

Mike showed me what he had lying around in his workshop from previous projects and laid out what was to me a plethora of wires, transistors, things called breadboards (http://en.wikipedia.org/wiki/Breadboard), chips, sensors and other electronic parts. He demonstrated how these could be set up and wired up a distance sensor, connecting it to an LED. He then showed me how to program it on the computer so that when something is near the sensor, the light comes on. We later swapped the LED for a motor, so that the motor turns on instead when something approaches the sensor. So far things were running pretty smoothly.
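The logic behind it is very simple. As a rough illustration – this is a plain-Python sketch of the idea, not the actual code we ran on the board (which I don't have to hand); the threshold value and the fake `read_distance` sensor are made up for the example:

```python
# A rough simulation of the sensor-to-output logic described above.
# make_sensor() stands in for polling a real distance sensor; here it
# just replays some canned readings.

THRESHOLD_CM = 20  # assumed trigger distance

def make_sensor(readings):
    """Return a fake sensor that yields canned distance readings (in cm)."""
    it = iter(readings)
    return lambda: next(it)

def output_state(distance_cm, threshold=THRESHOLD_CM):
    """The core rule: switch the output (LED or motor) on when something is close."""
    return distance_cm < threshold

read_distance = make_sensor([150, 80, 25, 12, 5, 40])
states = [output_state(read_distance()) for _ in range(6)]
print(states)  # only the two near readings (12 and 5 cm) switch the output on
```

Swapping the LED for the motor doesn't change this logic at all – only which pin the output drives.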

Looking at the code took me back to my programming days at university ages ago (I studied Computer Visualisation and Animation, and did a year of Computer Science before that). Perhaps using this technology would enable me to merge programming into my practice and see it in a different light. Previously everything I did in computer graphics lived on the screen, and when creating artworks I usually prefer something more tactile, something “real” and not experienced only on a screen, which is why I switched to drawing on paper. What Mike showed me here demonstrated something different: it uses programming to control something physical, and that made it far more interesting for me.

Previously we had discussed the idea of creating a mutable canvas, a drawing surface that alters depending on the location of the artist. We decided to quickly prototype a crude version of this by simply having the surface rotate when the artist comes near a certain part of it.

We painted blackboard paint on the surface so we could draw on it with chalk; I guess it retains the ephemeral theme we both like. Whilst waiting for the paint and glue to dry, we looked at another sensor, the 3-axis accelerometer, which measures movement in three dimensions and is what the Wii controller uses. To demonstrate what it does, Mike hooked it up to an RGB LED so that it would change colour as the sensor moved in different directions. This was something Mike had done before, and he had code for it from a previous project. The thing with technology is that sometimes it simply decides not to work and you have no idea why, even though everything worked the last time. Unfortunately this happened to us, and finding the source of the problem took a huge amount of time, but luckily we did get it working in the end. Seeing the accelerometer made us think of wearable devices, perhaps something the artist could wear whilst drawing. The data could then generate another piece of work, adding another dimension to the drawing process.
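Roughly, the idea is to map each axis of acceleration to one colour channel of the LED. A sketch of such a mapping – the ±2g range and the mapping itself are my assumptions for illustration, not Mike's actual code:

```python
# A guessed accelerometer-to-colour mapping: each axis drives one
# channel of the RGB LED, so tilting the sensor shifts the colour.

def axis_to_channel(accel_g, full_scale_g=2.0):
    """Map an acceleration in g (-full_scale..+full_scale) to a 0..255 channel value."""
    clamped = max(-full_scale_g, min(full_scale_g, accel_g))
    return round((clamped + full_scale_g) / (2 * full_scale_g) * 255)

def accel_to_rgb(x, y, z):
    """Turn a 3-axis reading into an (R, G, B) triple."""
    return (axis_to_channel(x), axis_to_channel(y), axis_to_channel(z))

print(accel_to_rgb(0.0, 0.0, 1.0))  # sensor lying flat: mid red/green, brighter blue
```

Tilt or shake the sensor and the readings – and so the colour – change continuously.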

Time passed, the paint and glue had dried, and we returned to our crude mutable canvas to attach the moving part to the motor and the distance sensor. I guess this workshop day was not a day for technology: what we had set up previously with the distance sensor failed to work, and we ended up spending more time trying to figure out why. We checked the wiring and the code, tested the components individually, and everything seemed correct; it should have been working. We ended up feeling tired and frustrated, so we decided to call it a day. I am sure we will get it to work once we feel less frustrated and defeated by technology.


Before the second workshop Mike and I did more research on the drawbots and found more existing examples:




We played with the idea of creating automated drawing tools in the last workshop and saw other examples at the Kinetica Art Fair, noticing how most of them are predetermined machines. Having also looked at the online examples of less predetermined drawing bots, I wonder whether they would generate anything more interesting than an emulation of someone drawing random lines. We knew that to develop this idea further we would need to make the bots more advanced, and we agreed that making bots is not that interesting unless they interact with the artist in some way, perhaps responding to the lines the artist draws, adding to and/or erasing them.

Reflecting on both workshops and our discussions so far, I find myself more drawn to the direction of using sensors, perhaps picking up signals from the artist and doing something interesting with the data in real time, i.e. as the artist creates. It makes me think of translation: the conversion of one thing, such as movement, into something else. I like the idea of translation and will think more about this and share it with Mike. I think I am more drawn to this direction than to the bots, partly because I am unsure why we would add bots at all: why do we need automated drawing tools? And are we trying to develop tools in this collaboration, or actually make an artwork, or both? For me, creating a work using the data/signals picked up during the process of creating would be more interesting. Despite the day’s technological failures, I am still happy to look further into using and programming sensors and to come to a more finalised idea of what to make.


Kinetica Art Fair showcases works in the realm of interdisciplinary new media art, focusing on the convergence of art and technology, which makes it a good place for Mike and me to get some inspiration.

There was a varied selection of art at this year’s fair, ranging from high-tech interactive work to aesthetically pleasing holographic artworks to moving sculptures constructed simply out of suspended weighted strings and motors. There was even what was, to me, a conceptual performance piece by Sam Meech, “8 Hours Labour – Rates for the Job”, in which he used a knitting machine to create banners at the fair. The stitched banner incorporated contemporary data about working hours within the ‘digital’ economy to map the shift from Robert Owen’s 8-hour-day ideal, with each misplaced stitch representing an hour of work done outside the 8-hour ‘contract’.

Another work that stood out for me was Temporeal by Maxime Damecour, who used the wagon-wheel effect to animate a sculpture in real time. I was drawn to how it was created and found out more here: http://misc.nnvtn.ca/temporeal.html

The works most relevant to our project were the automated drawing machines, and there were quite a few of them. One thing we noticed was that they were all pre-programmed machines, whereas we are aiming for something less predetermined in our project, something that generates unpredictable results.

It was insightful to see current examples of work that merges art and technology. However, I feel that no work there made a real impact on our project. The next stage is for us to develop further the ideas we generated in our last workshop, make some decisions on where we are heading, and then create a few prototypes to experiment with, with the ultimate goal of developing a body of work to exhibit in the future.

Mike and I had a chat after the fair and it seems like we are both interested in the ephemeral and performative aspect of the work we are making.


(Post by Bettina)

Having introduced Mike to my way of working, it was now my turn to visit his workshop and see what he does. The first thing I saw was the lathe, and after a quick demonstration of what it does, we began to see whether there was a way to combine drawing with it to create some hybrid objects. I started making marks with a colour pencil on a dowel as it spun, varying the pressure to create different tones. Mike then cut into the dowel with different tools in response to the marks. We made a few versions of this, ending with us using different colours, each colour representing a different tool to use. The process was fun, particularly the colouring part for me. The resulting objects looked like strange handles.

Next we decided to create some drawing tools. We wanted to draw using more of our arm movements, so we drilled holes into dowels to fit charcoal of different sizes into them. It turned out the charcoal was too brittle and difficult to draw with: it would snap easily, with part of it stuck in the dowel. I ended up stabbing the stuck charcoal with a screwdriver to create charcoal powder, then scattered it on the paper by hitting the dowel, hole down, against the paper. It created interesting marks.

Mike then, on a whim, decided to attach an off-centre motor to the dowel to make it wobble, but the motor was not powerful enough to have much effect. However, this sparked the idea of creating automated mark-making tools – Drawbot4000 and eventually Dustbot.

Drawbot4000 is essentially a block of wood with legs, a motor, and charcoal attached underneath. Its aim is to draw as it moves across a surface. Dustbot is similar, but uses a wooden container holding charcoal powder that it sprinkles as it moves along the surface. They are not robots per se and need refinement, but they produced some promising results (see pictures).

After the session, we both wanted to develop these automated drawing tools further, making them move more efficiently and looking into the different shaped legs or drawing materials they could have.

Mike mentioned swarmbots. Swarm robotics involves coordinating a large number of simple physical robots, where a desired collective behaviour emerges from the interactions between the robots and between the robots and their environment. It would be interesting to have these robots react to marks I make, which could then result in me “conducting” them. Having the robots interact with the physical environment would be interesting too.
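The appeal is that complex-looking collective behaviour can come from very simple local rules. A toy illustration in Python – a made-up one-dimensional “swarm”, not real swarm robotics code: each agent only ever moves a little towards the average position of the others, yet the group as a whole clusters together:

```python
# Each agent follows one local rule: drift towards the average position
# of the other agents. No agent knows about "clustering", yet the swarm
# contracts into a cluster. Positions are 1-D for brevity.

def step(positions, rate=0.1):
    """Advance the swarm one time step under the local cohesion rule."""
    out = []
    for i, p in enumerate(positions):
        others = [q for j, q in enumerate(positions) if j != i]
        centre = sum(others) / len(others)      # average position of the others
        out.append(p + rate * (centre - p))     # drift a little towards it
    return out

swarm = [0.0, 2.0, 9.0, 10.0]   # start spread out
for _ in range(50):
    swarm = step(swarm)
print(max(swarm) - min(swarm))  # the spread has shrunk dramatically
```

A rule reacting to marks on the floor would slot in the same way: the “centre” each robot steers towards would just come from what it senses beneath it rather than from its neighbours.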

We also discussed the possibility of experimenting with materials that react to the environment, such as heat (thermochromic materials) and light (photochromic pigments/paint). Working with these materials would enable different experiences of the artwork depending on the environment it is displayed in. However, since the whole piece could never be seen fully at any one time, the making process might need to be recorded, perhaps in the form of a time-lapse video.

I think we will have a lot of experimentation to do. Our next meeting will be at the Kinetica Art Fair, and I am sure we will find even more inspiration there.



(Post by Mike Kann 06/10/2014)

While on holiday in Spain, we planned to do a little workshop where Bettina could show me her artistic process and invite me into it.  In the lead up to this we discussed our ways of working and visited a couple of really relevant sites, in particular the prehistoric caves of Serinya, where mark making had been going on with pigments for tens of thousands of years.

For our session, our canvas wasn’t a cave wall but a large terracotta-tiled mezzanine floor, and our pigments were a variety of coloured chalks. The aim of the workshop wasn’t to draw anything in particular, but to follow the lines we felt like making with the flow of our body movements, or a particular pattern that interested us. There was a distinct difference between Bettina’s and my mark making – Bettina’s flowing, intersecting lines and patterns, and my more rigid repeating echoes of forms with block colours.

As the session flowed on, we became interested in the remainder and artefacts of the work we had created.  Shards of chalk had snapped off and remainders of larger pieces had congregated in the grouting between the tiles.  These dusty fragments became another, less precise, mark making tool that let us add another element to the drawings.  This layer, overwriting and adding to previous, deliberate marks was more arbitrary – smudging and mashing together dust and shards to add a filter to what we had done before.   As this was such a satisfying technique, we deliberately started to pulverise larger pieces of chalk to see how the dust would make marks through being hit and to coalesce all the colours.  This echoed the pigments and mark making of prehistory that we had seen a few days before and also created a history of pigment making on the tool we had used for the purpose.

Finally, we decided to clean the mezzanine with a hose and stiff broom.  We thought that this would be interesting as we both share an interest in impermanence and that the colours of the chalks being mixed as they were washed away could create an interesting effect.  As we sprayed the surface down, pieces of chalk joined into little clusters from the flows of the water which had turned a very subtle mauve from the mixture of all the pigments we had used.  Ending the session with a subtractive technique seemed appropriate, but in the end we still created an unintentional additive outcome.

This session was really about exploring mark making in a non-goal-oriented setting – to really explore and play with lines, forms and colour for the sake of self-expression. This lack of direction was a complete departure for me: as a designer, my sketching is usually to find form for an object, to solve a technical issue or to practise my techniques by trying to reproduce reality in life drawing. This freedom allowed both of us to draw in the moment, creating forms and patterns as we felt rather than anything predetermined or with a greater meaning. Our artistic sentiments, aesthetic sensibilities and the ways we move in space were all expressed through the mark making in this session. I think this physicalisation of the interplay between our distinct approaches really helped both of us gain an insight into each other’s way of working.

Finally, the impermanence of the mark making and of the artwork itself is really important. On a personal level it helped me become less precious about what I was doing and really get into a space of play and action without consequence. It also made the performance of drawing as important as the drawing itself – both combined to create an artwork, but as both were so short-lived, the act and the record of the act became equally important. In the end, the only record of the work we created (beyond the photos) was the tools we had used – I think the tool as a record of the artwork is an interesting avenue to explore in the future.



Written on 08/09/2014

Outside of this collaboration I am currently involved in the early stages of a transmedia project, and to find out more about this multiplatform way of working and storytelling I attended a Learn Do Share event in London last week. I realised my findings are also relevant to this collaboration, so I shall share some of them here.

Learn Do Share is a grassroots community for open collaboration, design fiction and social innovation. They organise events, labs and peer production, and I do recommend attending one of their events. These two days involved talks and workshops on design thinking, purposeful and participative storytelling, digital technology, iterative design and rapid prototyping, all with social innovation in mind. There was a heavy focus on transmedia storytelling, design and digital technology, and I even got to try on the Oculus Rift. What I noticed in some of the talks were further examples of wearable technologies used for creative purposes. One that moved me a lot was:

The Eyewriter, developed by members of the Free Art and Technology (FAT), OpenFrameworks, Graffiti Research Lab and The Ebeling Group communities. They worked with TEMPT1, an LA graffiti writer, publisher and activist who was diagnosed with ALS in 2003, leaving him almost completely physically paralysed except for his eyes. The collaboration resulted in a low-cost, open source eye-tracking system that enabled TEMPT1 to draw using just his eyes; with the use of projections, he was able to create graffiti again on the side of a building whilst lying in his bed. To find out more about this project, watch the video at http://eyewriter.org/. The project’s long-term goal is to create a professional/social network of software developers, hardware hackers, urban projection artists and ALS patients from around the world who are using local materials and open source research to creatively connect and make eye art.

Other examples from the event showed how data collected from wearable technology and sensors was used to adapt and influence storytelling. One project, My Sky is Falling (MSiF), an immersive experience that harnesses technology and story to create empathy for the challenges faced by foster care children, used the collected data to analyse audience/participants’ responses as part of its iterative design process. They described it as designing with data and released a whitepaper about the project and their findings, which can be downloaded here: http://www.myskyisfalling.com/

Carrying on from my last post, data seems to be the recurring theme. An interesting question raised in a talk by Chris Sizemore (Editor at the BBC) was “What is the digital self?”. Alongside biometric data that could be used for the quantified self, there is also data about our preferences, habits and activities, the places we go to, our conversations, and the data we share on social media. In another talk, “I am not an API”, Emer Coleman (TransportAPI) highlighted that we are not only generating content as if we were daydreaming; we are more and more embedding ourselves into content. Thinking about it, with the likes of Spotify and Netflix we no longer own content but subscribe to it, generating yet more data about ourselves, our likes and dislikes, along the way. All this personal data creates a well-rounded description of us, which together with our history forms a version of ourselves, perhaps another reality – a digital reality that lives in a “cloud” somewhere, maybe? Another thing that stayed in my mind from Sizemore’s talk was the question: could digital technology help us be more reflective?

Reflecting on the above and the event, what I have described so far is about objects collecting data – but what about objects communicating that data to other objects? The Internet of Things, a trendy term, is what Mike is researching for his PhD. It is about how objects can communicate with each other, passing data to one another over the internet. An example of its use could be a smart fridge that recognises you are low on milk and reorders it online for you (on this note, check out this article). How can this relate to artists, their tools and their creative process? What happens when the tools talk amongst themselves, and what if what they do with the data is unpredictable, as if they have a life of their own – the secret life of objects?
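As a toy sketch of objects talking amongst themselves – everything here is invented for illustration, and a real system would pass messages over a network rather than an in-memory bus:

```python
# A tiny publish/subscribe message bus stands in for "the internet";
# one object publishes readings and another object reacts to them.

class Bus:
    """Minimal in-memory stand-in for a networked message broker."""

    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        """Register a callback to run whenever something is published on topic."""
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, payload):
        """Deliver payload to every subscriber of topic."""
        for callback in self.subscribers.get(topic, []):
            callback(payload)

bus = Bus()
orders = []

# The fridge publishes its milk level; a "shop" object listens and reacts.
bus.subscribe("fridge/milk", lambda level: orders.append("milk") if level < 1 else None)

bus.publish("fridge/milk", 0)   # low on milk -> triggers a reorder
bus.publish("fridge/milk", 3)   # stocked up  -> nothing happens
print(orders)
```

The “secret life” question is really about what the callbacks do: nothing stops a listening object from reacting in ways the publisher never anticipated.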