Christine Marie

An exploration of Creative Technologies
Age of the Interface


@kristenemarie’s GIF from GifBoom: Weezer (Taken with GifBoom)

Taken with Instagram

Contextual Statement

My work for this project has centred on the conceptual research and communication of what Stroodle is about, along with considerations in the design and interaction of the interface for both our website and Stroodle itself.
I started with our website, developing and populating it with our conceptual and contextual research, along with reflections on the idea of the interface, its intuitiveness, artificial digital collaboration spaces, and how we can create community-generated data visualisations.
Secondly, I acted as a facilitator at the DCT Learning and Teaching Day for AUT staff, running the tutorial Network Learning Environment: The Power of Prezi.
And finally, I added to the concept ideas and functionality behind Stroodle, the street-drawing app, and to critical discussion of its graphical user interface.

My critical thinking has been informed by a few different factors. One is the hands-on work at the Centre for Learning and Teaching (CfLAT), where I am part of the iPad Project, hired as a Learning and Teaching Technology Enabler (LATTE). This project is made up of various staff from the CfLAT department and a few heads of departments from other areas within AUT, all with different levels of understanding of technology. It is a place where we discuss, inform, and experience the uses of this mobile platform. Over the semester I have reviewed apps for the iPad, and attended meetings and discussions about the iPad in education and how we can use these devices as collaboration tools.
The work at CfLAT has enabled me to observe how people interact with the intuitiveness of the iPad and iPhone. I found that once people understood the gestures and how the icons worked, the iPad became a confident part of their day. People responded most strongly to the GUI: how an app looked was important to them, even for a productivity app, with Evernote winning over the built-in Notes app.

A major opportunity to be informed by experts in the area of mobile technology was being able to attend dev/world/2011. Throughout the conference the word “magic” was used a lot: the user must feel that everything is just happening as it should, and the GUI must act seamlessly.
Salter, G. (2011) stated in his dev/world/2011 conference lecture, Designing User Interfaces for iPhone and iPad, that “we must make things magical and sparkly, but they also must work like they are supposed to and be intuitively relatable to your market”. This begged the question:
Who is our target market, and what are successful developers already doing to achieve a connection between these people and their company’s product?

Other areas of research and contextual understanding for this work come from artists like Aaron Koblin, who uses web apps and asks questions of his users to form interesting community-generated art. This idea has been a key element in the development of Stroodle: the desire to create something that makes users question their everyday movements, not just getting from A to B but seeing their everyday travel as an artwork, expanding on that and sharing it with others.
The app is a location-aware tracking device. The results can relate to the idea of desire lines: will patterns start to form in users’ participation? Stroodle is a digital experience for tagging the earth and personalising the space around you. Humans have always sought to personalise their surroundings and brand what they claim as theirs. Does the absence of a physical mark, created for all to see, change the implications of the mark created?

Our final outcome for this project would be for people to instinctively understand and use the app for its intended purposes: to publish the data as a live feed to a website for all to see, and to let viewers explore other users’ experiences worldwide.
The success of this app will be its availability on the Apple App Store, and seeing people in the mobile device community using it in the real world.

For further information go to


Aaron Koblin - Work. (n.d.). Aaron Koblin. Retrieved July 7, 2011, from

Aaron Koblin: Artfully visualizing our humanity [TED Talk video]. (n.d.). TED: Ideas worth spreading. Retrieved July 4, 2011, from

Borenstein, G. (n.d.). Face Fight - collaborative drawing machine. Face Fight - collaborative drawing machine. Retrieved July 7, 2011, from

Miller, J. (2006). Getting conceptual about interface design. IEEE Internet Computing (ISSN 1089-7801), 86-90. Retrieved July 20, 2011, from

Ratti, C. (n.d.). Phone-call cartography. The New York Times. Retrieved July 17, 2011, from

Stevens, Q. (2007). The ludic city exploring the potential of public spaces. London: Routledge.

Te Tuhi Video Game Machine. (2009, April 11). Te Tuhi. Retrieved July 18, 2011, from

10 Heuristics for User Interface Design. (n.d.). Jakob Nielsen on Usability and Web Design. Retrieved August 22, 2011, from

Chrome Experiments - Home. (n.d.). Chrome Experiments - Home. Retrieved August 16, 2011, from

Gite, V. (2009, July 19). Top 10 open source web-based project management software. nixCraft: Linux Tips, Hacks, Tutorials, And Ideas In Blog Format. Retrieved August 7, 2011, from

GPS Drawing. (n.d.). GPS Drawing. Retrieved August 15, 2011, from

Panzarino, M. (n.d.). The top 40 best multiplayer games for iPhone and iPad. The Next Web. Retrieved August 7, 2011, from

Paper.js — About. (n.d.). Paper.js. Retrieved August 17, 2011, from

Patricio Gonzalez Vivo. (n.d.). Patricio Gonzalez Vivo. Retrieved August 8, 2011, from

Jung Huang, D. T. (n.d.). Ambient Intelligence: Blending into the digital environment. The University of Auckland. Retrieved August 15, 2011, from

Turkle, S. (n.d.). Digerati: The Cyberanalyst: Sherry Turkle. Edge : Conversations on the edge of human knowledge. Retrieved July 22, 2011, from
“Your words are your deeds, your words are your body. And you feel these word-deeds and this word-body quite viscerally. Similarly, the idea of multiplicity as a way of thinking about identity is concretized when someone gets an Internet account, is asked to name five “handles” or nicknames for his activities on the system, and finds himself “being” Armani-boy in some online discussions, but Motorcycle-man, Too-serious, Aquinas, and Lipstick in others.”

Turkle, S. (n.d.). Wired 4.01: Who am we? Retrieved July 22, 2011, from
“That we are moving from “a modernist culture of calculation toward a postmodernist culture of simulation.” In an introductory programming course at Harvard University in 1978, one professor introduced the computer to the class by calling it a giant calculator. Programming, he reassured the students, was a cut-and-dried technical activity whose rules were crystal clear.”

Turkle, S. (n.d.). TEDxUIUC - Sherry Turkle - Alone Together [Video]. YouTube. Retrieved July 22, 2011, from

Visual Complexity: Mapping Patterns of Information. (n.d.). Cool Hunting. Retrieved August 26, 2011, from

Interface: Definition. (n.d.). Wiki Q&A combined with free online dictionary, thesaurus, and encyclopedias. Retrieved August 15, 2011, from

Md Daud, S. K., Mustaffa, F., Hussain, H., & Osman, M. N. (n.d.). Creative Technology as Open Ended Learning Tool: A Case Study of Design School in Malaysia. World Academy of Science, Engineering and Technology. Retrieved June 27, 2011, from

Berkun, S. (2011, February 15). Thinking in desire paths. Scott Berkun. Retrieved November 5, 2011, from

CoMob. (n.d.). Retrieved November 5, 2011, from

Evitts Dickinson, E. (2008, February 26). What is the salt? Retrieved October 30, 2011, from

Word Spy: Desire line. (n.d.). Word Spy. Retrieved November 5, 2011, from

Conceptual Statement

Smart Systems II - High Speed Photography Smart System

“A reflex is distinguished from other behaviors by mechanism.”

Reflexes can be informed by environmental forces: instincts that may have developed over the years, or reactions of timing triggered by sound or movement. Humans sense and detect these elements and motions in the world around us; we continually send neural messages to our brains, which relay those messages back to our bodies for an action to be taken.

Like us, a microcontroller programmed to react to its environment via code and sensors responds to the same forces that we do. But do we all react in the same way, and at the same speed? By trying to capture moments of entropy, we aim to test this.

In a split second movement can happen; entropy can happen: the increase of disorder in a structure, when the structure of an item starts to break down and blend with the environment around it. Gas being released into the air is one form of entropy. Our aim was to capture the destruction of Jenga, a game that has one starting point, a structured tower of blocks, which over time is broken down until it topples over, spilling and breaking apart. Capturing that disordered state became our mission: the moment of randomness and the expression of disorder.


Final Results and Top 4 images:

The top two images were taken by the smart system; it managed to capture the movement of entropy, the increase of disorder in the structure of the Jenga tower. The smart system reacted to the movement and sounds, capturing the images at the right time according to its programming.

The bottom two images were taken by human reaction time and instinct. These were the best two from the human-interaction category. Although these photos do not quite show the increase in entropy we were looking for, the human element and error I mentioned earlier gave us some images we were not expecting: almost a before-and-after shot in one image. Given that we were using a digital camera, the double-exposure effect was one of the last things Theresa and I thought we were going to see.

Video Documentation of our smart system in action.

As this high-speed photography method is done in the dark, it was hard to capture much detail of the smart system working. This video demonstrates the process we went through, from set-up to trigger and capture.

Our Smart System in action.

I am now under the impression that I cannot move and react as fast as a programmed microcontroller. We managed to get more images of movement from the smart system we had devised. The tricky part is getting the Jenga tower to fall in the anticipated place, in line with the laser and piezo we used to trigger the flash.

This round of images is our formal test of our own reactions to the toppling Jenga.

Pull the string, then time the flash going off. Am I faster than an Arduino that has been programmed to react in these conditions?

Could I detect movement in the dark at the right time? In one sense, we were aware of when the Jenga had been triggered and only had a split second to trigger the flash, but I was still relying on my instincts, and there is always room for human error. With human error and timing, I think we accidentally managed to come up with some results: we captured a few double-exposure shots, and maybe one good shot showing the beginning of entropy, the Jenga starting to fall.

The second component of the smart system is the sound sensor. The Arduino reads the sound waves from the microphone through the amp. The diode protects the Arduino from the negative portion of the AC wave. The sound threshold is adjusted depending on what level we want the sensor to register.
This is the first part of our smart system: the flash trigger. By connecting an optoisolator to the camera flash with a PC sync cable, we can isolate the flash’s voltage from the Arduino’s. With the flash pin set high, pressing the button triggers the flash.
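The threshold comparison at the heart of the sound trigger can be sketched in plain C++. This is a simulation of the logic only, not our actual Arduino sketch; the threshold value is an assumed example, tuned in practice to the microphone and amp:

```cpp
#include <cassert>

// Simulated sound-trigger logic: the Arduino's 10-bit ADC
// returns readings in the range 0-1023, and we fire the flash
// only when the microphone level crosses the threshold.
const int kSoundThreshold = 600;  // example value; tuned per setup

bool shouldFireFlash(int adcReading) {
    return adcReading >= kSoundThreshold;
}

// On real hardware the loop would look roughly like:
//   int level = analogRead(MIC_PIN);
//   if (shouldFireFlash(level)) digitalWrite(FLASH_PIN, HIGH);
```

Raising or lowering `kSoundThreshold` is how the sensitivity described above gets adjusted.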

One option I thought of for triggering the flash was using a PIR sensor, a motion detector. Attached is a simple sketch that would get a reading from the PIR and then trigger the flash.
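The PIR idea can be sketched as follows. This is my reconstruction of the logic rather than the attached sketch itself: a PIR outputs a digital HIGH while motion is detected, so (as an assumption about the desired behaviour) the flash is fired once on the rising edge instead of continuously while the output stays HIGH:

```cpp
#include <cassert>

// Simulated PIR trigger: fire the flash exactly once per motion
// event, on the LOW-to-HIGH transition of the PIR output.
struct PirTrigger {
    bool lastState = false;

    // pirHigh: current digital reading from the PIR output pin.
    // Returns true only on a rising edge.
    bool update(bool pirHigh) {
        bool fire = pirHigh && !lastState;
        lastState = pirHigh;
        return fire;
    }
};

// On real hardware: if (trigger.update(digitalRead(PIR_PIN) == HIGH))
//                       digitalWrite(FLASH_PIN, HIGH);
```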

A flaw in this idea is that, for the camera to capture the image, we are using a dark room with an open shutter, with the image captured by the triggering of the flash, so the motion sensor may not work in the dark. Theresa has come up with alternative methods: one uses a laser and a photocell, and the other is triggered by sound. Again, the correct capture comes down to timing.
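The laser-and-photocell alternative works as a beam-break detector, which does work in the dark: while the laser hits the photocell the reading stays high, and a falling block interrupting the beam makes it drop. A minimal sketch of that logic (the cutoff value is an assumed example, not our measured circuit):

```cpp
#include <cassert>

// Simulated beam-break trigger: the photocell's ADC reading
// (0-1023) stays high while the laser is unbroken; a reading
// below the cutoff means something has crossed the beam.
const int kBeamCutoff = 300;  // example value; calibrated per setup

bool beamBroken(int photocellReading) {
    return photocellReading < kBeamCutoff;
}

// On real hardware: if (beamBroken(analogRead(PHOTOCELL_PIN)))
//                      digitalWrite(FLASH_PIN, HIGH);
```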

Our Arduino and the components we used.

Testing, round 1

Theresa and I went into the chroma key room to do a first test with her patch and circuit. Unfortunately the optoisolator had stopped working, so we are having to order a new one. That said, we decided to test the lighting and the theory behind this method.

Our first few shots were spent testing the right position for the placement of the flash/lighting, and also the right settings for the camera.

This also gave us some time to test one of my ideas: can I manually capture what the smart system has been programmed to do? We set the Jenga tower up, then I had to manually time knocking over the tower while triggering the flash with my other hand. Trying to capture this movement was by pure instinct; as we were in complete darkness, I was pretty much guessing when I would be able to capture the movement/disorder.