I have been getting very technical with the sketches over the last 48 hours. A balloon model was created and coupled to the wrists; facing the camera head-on, the wrists were the only viable joints to track. Three different interactions were designed to explore the movement. Figure 1 shows slow movements that gently keep the balloon floating, with a minor drift along the X-axis added to it, since more drastic balloon movement didn't match the input. The coupling isn't perfect, but it is good enough to explore and get a sense of the movement.
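The core of that coupling, reconstructed as a minimal sketch: I'm assuming p5.js with ml5's poseNet for the wrist tracking here, and the easing and drift numbers are illustrative rather than the exact values I used.

```javascript
// Minimal sketch of the balloon-to-wrist coupling (assumed setup: p5.js + ml5 poseNet).
let video, poseNet, pose;
let balloon = { x: 320, y: 240 };

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.hide();
  poseNet = ml5.poseNet(video);
  poseNet.on('pose', (results) => {
    if (results.length > 0) pose = results[0].pose;
  });
}

function draw() {
  image(video, 0, 0, width, height);
  if (pose) {
    // Target sits between the two wrists; a slow lerp keeps the coupling loose,
    // so gentle movements move the balloon and fast ones are smoothed away.
    const tx = (pose.leftWrist.x + pose.rightWrist.x) / 2;
    const ty = (pose.leftWrist.y + pose.rightWrist.y) / 2 - 80; // hover above the hands
    balloon.x = lerp(balloon.x, tx, 0.05);
    balloon.y = lerp(balloon.y, ty, 0.05);
  }
  balloon.x += sin(frameCount * 0.05) * 0.5; // the minor X-axis drift
  fill(220, 60, 60);
  ellipse(balloon.x, balloon.y, 60, 75);
}
```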

A bit harder to program was the dynamic transition from slow to fast movement. The decoupling between the speed used to intentionally shoo the balloon away and the balloon's reaction didn't feel entirely natural: either my hands move too fast for the webcam, or the software lags behind; either way, this sketch could be improved. A code snippet was added at the end so that, if speed is sustained above a certain level, the balloon eventually flies away. In my sketch I didn't manage to make it fly away gracefully; it simply disappears from the canvas. The GIF makes it look like it works, but in reality the disappearance could be triggered by any burst of speed, so I'm guessing my code isn't up to par.
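If I were to redo that snippet, the sustained-speed check would look something like this. This is a sketch under assumptions: the thresholds, names, and escape animation are mine, not the code described above, and it relies on p5.js helpers like dist().

```javascript
// Assumed implementation of "sustained speed makes the balloon fly away".
let balloon = { x: 320, y: 100 };
let prevWrist = null;
let fastFrames = 0;
let flyingAway = false;
const SPEED_THRESHOLD = 15; // pixels per frame, tuned by eye
const SUSTAIN_FRAMES = 20;  // fast movement must last this many frames

// Called every frame with the tracked wrist position {x, y}.
function updateBalloon(wrist) {
  if (prevWrist) {
    const speed = dist(wrist.x, wrist.y, prevWrist.x, prevWrist.y);
    // Count consecutive fast frames; a single jittery spike from the webcam
    // resets the counter, so only sustained speed triggers the escape.
    fastFrames = speed > SPEED_THRESHOLD ? fastFrames + 1 : 0;
    if (fastFrames > SUSTAIN_FRAMES) flyingAway = true;
  }
  prevWrist = { x: wrist.x, y: wrist.y };

  if (flyingAway) {
    // Drift up and off the top of the canvas instead of vanishing in place.
    balloon.y -= 4;
    balloon.x += sin(frameCount * 0.1) * 2;
  }
}
```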

The last sketch was made to experience the air moving through the hands more freely and carefully. While the balloon didn't disappear exactly when I stopped the movement, I could still sense some connection to it. The way the air moved around my hands felt natural, and the coupling made sense. The difference from the previous sketches is that the movement is calmer and more controlled. Here again a shortcoming of my programming skills shows: the balloon is supposed to fall down when I stop my movement…
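The fall I was aiming for could look roughly like this (again an assumed implementation, not the code in my sketch): once the wrist speed stays near zero for a while, gravity takes over.

```javascript
// Assumed "fall when the hands rest" behaviour for the last sketch.
let balloon = { x: 320, y: 240, vy: 0 };
let stillFrames = 0;
const STILL_SPEED = 2;    // below this, the hands count as resting
const STILL_FRAMES = 30;  // ~half a second of stillness before falling
const GRAVITY = 0.3;

// Called every frame with the current wrist speed in pixels per frame.
function applyGravity(speed) {
  stillFrames = speed < STILL_SPEED ? stillFrames + 1 : 0;
  if (stillFrames > STILL_FRAMES) {
    balloon.vy += GRAVITY;                  // no air from the hands: accelerate down
  } else {
    balloon.vy = lerp(balloon.vy, 0, 0.2);  // moving hands keep it aloft
  }
  balloon.y = min(balloon.y + balloon.vy, height - 30); // rest on the floor
}
```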

Experiencing all of these movements with the wrists as the tracked joints, I came to the conclusion that the software actually tries its best to track points that aren't fully visible all the time. I guess the machine-learning backend comes into play and estimates where those points are. There is a paradox to these sketches: the coding is done with a 2D plane in mind. The X- and Y-axes form four quadrants of constantly measured values, while my body is tracked in 3D space. Making sense of this conversion from real-world space and time to digital values is beyond my understanding, but the experiments done so far let us identify the problematic areas.
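One way to make that estimation visible (a probe I could add, again assuming ml5's poseNet as the tracker): each keypoint comes with a confidence score, so logging the wrists shows positions still being reported even when the score drops because a hand is partly hidden.

```javascript
// Probe: log wrist positions and confidence scores (reuses the poseNet
// object from the first sketch; the output format here is my own choice).
poseNet.on('pose', (results) => {
  if (results.length === 0) return;
  for (const kp of results[0].pose.keypoints) {
    if (kp.part === 'leftWrist' || kp.part === 'rightWrist') {
      // A low score with a position attached: the model is estimating, not seeing.
      console.log(`${kp.part}: (${kp.position.x.toFixed(0)}, ${kp.position.y.toFixed(0)}) score ${kp.score.toFixed(2)}`);
    }
  }
});
```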
All in all, the interaction in these sketches is one-way, mostly because the computer is looking for my movements and reacting accordingly. A more nuanced dimension could be added where I, the user, wait for the computer's move or response. Maybe we could have thrown the balloon to each other and kept it from falling to the ground. Many ways of enhancing the nuance are visible, but programming them is another feat I can't accomplish in a short time or by myself.