M1 and further development

After some coaching and further discussion, we started to tinker with the input interaction. It seemed a bit crude and did not react well to delicate, fine sounds. On my end, I developed the code with an external mic that performed average at best, which is strange, since it is supposed to be a “quality mic for online communication” according to the manufacturer.

As we started to add more interactive substance to the output, we began thinking about its actual point: what it is supposed to do, and whether it can inspire the user to apply skill in a more interesting way. Trying out the triggers as they stand now feels a bit strange. I find it really hard to repeat triggers with, for example, sounds made by the body. Unless I concentrate hard on replicating the exact same sound, I can’t get the triggers to behave consistently, which adds some frustration to the interaction.
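
To illustrate why this happens (a hypothetical sketch of my own, not our actual trigger code): if a trigger simply fires when loudness crosses a fixed threshold, tiny variations in how a body sound is produced will push it just over or just under the line.

```typescript
// Hypothetical illustration (not our actual code): a trigger that fires when
// normalised loudness crosses a fixed threshold. The 0.3 value is a guess.
const THRESHOLD = 0.3;

function isTriggered(rmsLoudness: number): boolean {
  return rmsLoudness > THRESHOLD;
}

// Two attempts at "the same" body sound can easily land on opposite sides
// of the threshold, so one fires and the other silently does not.
console.log(isTriggered(0.28)); // false
console.log(isTriggered(0.33)); // true
```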

So far, attributes like responsiveness, accuracy and consistency are our areas of exploration. The output right now is tuned to be a window that displays a color with a specific lightness and saturation. We have thought about coupling lightness and saturation to frequency and loudness. The point is to craft an experience with the interface where the user can imagine a color and try to produce a sound that they think would correspond to that “imagined” color.
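
A minimal sketch of what that coupling could look like in a browser, assuming the Web Audio API; the element id, the 0–2 kHz frequency range and the scaling factors below are my guesses for illustration, not settled design decisions.

```typescript
// Hypothetical sketch: map the mic's dominant frequency to lightness and
// its loudness (RMS) to saturation, then paint a "window" element with the
// resulting color. Ranges, ids and scaling are assumptions, not project code.
async function start(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 2048;
  ctx.createMediaStreamSource(stream).connect(analyser);

  const freqData = new Uint8Array(analyser.frequencyBinCount);
  const timeData = new Uint8Array(analyser.fftSize);
  // Hypothetical element id for the color window.
  const colorWindow = document.getElementById("color-window") as HTMLElement;

  function frame(): void {
    analyser.getByteFrequencyData(freqData);
    analyser.getByteTimeDomainData(timeData);

    // Loudness: RMS of the time-domain signal, normalised to roughly 0..1.
    let sum = 0;
    for (let i = 0; i < timeData.length; i++) {
      const x = (timeData[i] - 128) / 128;
      sum += x * x;
    }
    const rms = Math.sqrt(sum / timeData.length);

    // Dominant frequency: the FFT bin with the highest magnitude, in Hz.
    let peak = 0;
    for (let i = 1; i < freqData.length; i++) {
      if (freqData[i] > freqData[peak]) peak = i;
    }
    const peakHz = (peak * ctx.sampleRate) / analyser.fftSize;

    // Couple frequency -> lightness and loudness -> saturation (guessed
    // ranges: 0–2 kHz spans the lightness scale; RMS is boosted so quiet
    // sounds still register).
    const lightness = Math.min(100, (peakHz / 2000) * 100);
    const saturation = Math.min(100, rms * 300);
    colorWindow.style.backgroundColor = `hsl(200, ${saturation}%, ${lightness}%)`;

    requestAnimationFrame(frame);
  }
  frame();
}

start().catch(console.error); // browsers may require a user gesture first
```

With a mapping like this, a high-pitched whistle would read as a light color and a loud hum as a heavily saturated one, which is roughly the kind of imagined-color-to-sound matching we want the user to attempt.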
