Applied to audio signals, our technology lets us convert sound to an image and vice versa. Applied in real time, this can be used to create an animation of evolving sound timbres:

We believe that sound characteristics such as "sharp", "smooth", "warm" or "edgy" can be recognised through their visual counterparts. Do you agree?

We highly recommend listening to this on speakers, not your mobile phone. A mobile speaker will not be able to reproduce the sound properly. Having said that, even with top-quality speakers you will see changes in the visual patterns when you barely perceive a change in the sound timbre. This is simply because humans are better at detecting visual patterns than sound timbres. These visualisations are like audio X-rays - no change in timbre will escape your eye.

As the timbre changes over time, the visual pattern representing the timbre changes accordingly. Note that the pixels you see are the colour-coded values of the soundwave, meaning you do SEE the sound. This is very different to the somewhat random or "interpreted" sound visualisations you may know from media players. Here, what you see IS what you hear. (It's also very different to spectrograms, as our technology works in the time-phase domain, not in the time-frequency domain.)
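For the curious, here is a minimal Python sketch of the basic idea of colour-coding waveform values into pixels. It is not our actual time-phase mapping - the `waveform_to_image` helper, the colormap and the image width are purely illustrative choices:

```python
# Illustrative sketch only: colour-code raw waveform samples into a 2D image,
# so every pixel corresponds to one sample value. Assumes a mono float signal
# in [-1, 1]; the layout and colormap are arbitrary, not the product pipeline.
import numpy as np
import matplotlib.pyplot as plt

def waveform_to_image(samples: np.ndarray, width: int = 256) -> np.ndarray:
    """Map each audio sample to a colour-coded pixel, laid out row by row."""
    n = (len(samples) // width) * width          # trim to a full rectangle
    grid = samples[:n].reshape(-1, width)        # one sample value per pixel
    normalised = (grid + 1.0) / 2.0              # map [-1, 1] to [0, 1]
    return plt.cm.viridis(normalised)[..., :3]   # RGB image, shape (H, W, 3)

# Example: one second of a 440 Hz tone at 44.1 kHz
sr = 44100
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t).astype(np.float32)
plt.imshow(waveform_to_image(tone), aspect="auto")
plt.axis("off")
plt.show()
```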

For the geeks amongst us: to create this animation, we played a single note on a (real) Minimoog synthesiser and slowly "tweaked" its configuration, thereby changing the timbre of the note played. At the same time, we also let our prototype repeatedly calculate the timbre of the past two seconds' worth of sound. These timbres were visualised as frames of the animation and kept in sync with the sound played back (with a framerate that can certainly be improved). It's far from perfect but it works - and it can be made perfect.
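If you want to play with that frame loop yourself, here is a hedged sketch of the general scheme: slide a two-second window over the recording and render one image per step at a chosen frame rate. The `chunk_to_image` placeholder is purely illustrative and does not show our prototype's actual timbre calculation:

```python
# Sketch of the sliding-window frame loop described above. Window length,
# frame rate and the placeholder visualisation are all assumptions.
import numpy as np

def chunk_to_image(chunk: np.ndarray, width: int = 256) -> np.ndarray:
    """Placeholder visualisation: lay samples out as a greyscale grid."""
    n = (len(chunk) // width) * width
    return (chunk[:n].reshape(-1, width) + 1.0) / 2.0   # values in [0, 1]

def render_frames(samples: np.ndarray, sr: int = 44100,
                  window_s: float = 2.0, fps: float = 10.0):
    """Yield one frame per step, each covering the past `window_s` seconds."""
    window = int(window_s * sr)              # samples per analysis window
    hop = int(sr / fps)                      # samples between frames
    for start in range(0, len(samples) - window + 1, hop):
        yield chunk_to_image(samples[start:start + window])

# Example: 10 seconds of a slowly drifting tone -> 81 frames at 10 fps
sr = 44100
t = np.arange(10 * sr) / sr
signal = np.sin(2 * np.pi * (220 + 20 * t) * t).astype(np.float32)
frames = list(render_frames(signal, sr))
```

Keeping the frames in sync with playback is then just a matter of showing each frame at the same rate the windows were sampled.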