Moving on with the progress, we started to think about what we could achieve with the frequency threshold, peak and sustain triggers. We haven't settled on the output yet, but we feel that showing something visual as the output is something we could manage with our coding skills. We also touched on the idea of using sound as an output. How could that be meaningful? Usually a conversation works asynchronously: X is the input, A is the result, and there is a time gap between the two. We tested the opposite by talking to each other at the same time. Trying to speak and comprehend what the other person is saying at the same time was difficult, and not something that reads as meaningful in an interaction. Rather, it would be annoying and uninviting.
Sound as an output was dropped, as we believe the effort required to delve deeper into different sound libraries would take time away from actually experiencing the interactivity. Still, with our three functions there is potential to craft a dynamic output, and so far we have learned that the triggers work quite well.
When triggered, each of them highlights a rectangular element (a rough code sketch of the idea follows further down).

The picture above illustrates a quick sketch we did. When trying to set off the triggers I found that I mostly used my voice. My microphone didn't really pick up the sounds I made with my hands or with other materials; I could make it trigger, but only by banging really hard on the table, the wall, etc., which is not a pleasant input interaction.
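To give a rough idea of what we mean by the three triggers driving rectangles, here is a minimal sketch of the concept in TypeScript, using the browser's Web Audio API as the microphone source. This is not our actual code: the threshold numbers, the canvas id and the layout are made-up placeholders, and for brevity it listens to overall loudness rather than a specific frequency band.

```typescript
// Minimal illustration: three microphone-driven triggers (threshold, peak,
// sustain), each lighting up its own rectangle on a <canvas id="sketch">.
// All numbers here are placeholders, not values from our prototype.

const canvas = document.getElementById("sketch") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

async function start(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const audioCtx = new AudioContext();
  const analyser = audioCtx.createAnalyser();
  analyser.fftSize = 1024;
  audioCtx.createMediaStreamSource(stream).connect(analyser);

  const samples = new Float32Array(analyser.fftSize);
  let previousLevel = 0;
  let loudFrames = 0;

  function draw(): void {
    analyser.getFloatTimeDomainData(samples);

    // Rough loudness estimate: RMS of the current audio frame.
    let sum = 0;
    for (let i = 0; i < samples.length; i++) sum += samples[i] * samples[i];
    const level = Math.sqrt(sum / samples.length);

    // Trigger 1: loudness crosses a fixed threshold.
    const threshold = level > 0.1;
    // Trigger 2: a sudden jump compared to the previous frame ("peak").
    const peak = level - previousLevel > 0.08;
    // Trigger 3: loudness stays up for a while ("sustain").
    loudFrames = level > 0.05 ? loudFrames + 1 : 0;
    const sustain = loudFrames > 30; // roughly half a second at 60 fps
    previousLevel = level;

    // One rectangle per trigger; it lights up while its trigger is active.
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    [threshold, peak, sustain].forEach((active, i) => {
      ctx.fillStyle = active ? "white" : "gray";
      ctx.fillRect(20 + i * 110, 20, 100, 100);
    });

    requestAnimationFrame(draw);
  }
  draw();
}

start();
```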
This quick little video illustrates a crude form of a visual interaction. The visual representation is a bit misleading here: the colors are not coupled with the input interaction, so they are more or less useless other than highlighting that the element has been triggered.
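If we did want the colors to carry meaning, one simple idea (just a sketch, building on the assumed trigger flags from the example above, not something we have built) would be to derive the color directly from which trigger fired:

```typescript
// Illustrative only: derive the fill color from which trigger fired, so the
// color itself says something about the input instead of being arbitrary.
function colorFor(threshold: boolean, peak: boolean, sustain: boolean): string {
  if (sustain) return "steelblue"; // loudness held for a while
  if (peak) return "gold";         // sudden jump in loudness
  if (threshold) return "tomato";  // crossed the fixed threshold
  return "gray";                   // nothing triggered
}
```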