aux synesthesia is the outcome of an interdisciplinary project between Kunsthochschule Berlin-Weißensee and Freie Universität Berlin.
The vision of aux synesthesia is to change how we, as visual human beings, use our senses. Combining VR technologies, we created an experience based on auditory signals coupled to an augmented reality. This augmented reality is an abstracted reflection of the physical world, reduced to the details most necessary for orientation in our environment. Those details appear as digital sound waves that react differently to certain materials, moving objects, or changing situations.
aux synesthesia acts as an interface between physical and virtual reality, creating an installation on a performative level. The experience delimits what we need to see and what we can afford to miss in a world full of sensory overload across both digital and real environments.
Peter Sörries
Jan Batelka
Thushan Satkunathan
As visual beings, we are able to use our senses without restriction. It is a matter of course for us to move through our environment. We have become accustomed to this condition – we unconsciously react to diverse impressions and thereby overlook changes. The boundaries between real and artificially created realities blur perceptibly.
Above all, acoustic signals are coupled to every interaction and resonate with the environment. Each texture or material has its own individual echo and can be distinguished and demarcated from dissimilar objects.
aux synesthesia uses this potential and reshapes a cognitive ability we take for granted. The project creates an interface between real and virtual realities: acoustic signals traverse space and are absorbed, reflected, or changed in quality. These auditory dimensions are translated into an extraordinary virtual space based on physical reality, reducing it and reinterpreting our perception into a narrative and intuitive experience.
To enable this augmented exploration, we created a physical and a virtual prototype.
The physical prototype is based on Kinect technology and recognizes the surroundings in real time. The captured depth data is translated into an auditory stereo signal whose frequency increases or decreases as objects come closer or the subject's position changes.
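The project's actual signal chain is not published, so the following is only a minimal sketch of this kind of depth-to-sound mapping; the function names, the frequency range, and the linear mapping curve are illustrative assumptions, not the prototype's values:

```python
import numpy as np

SAMPLE_RATE = 44100

def depth_to_frequency(distance_m, near=0.5, far=4.5,
                       f_near=880.0, f_far=220.0):
    """Map a distance in metres to a tone frequency in Hz.

    Closer objects yield a higher pitch; the linear mapping and the
    frequency range are assumed values for illustration.
    """
    d = np.clip(distance_m, near, far)
    t = (d - near) / (far - near)  # 0.0 = near, 1.0 = far
    return f_near + t * (f_far - f_near)

def stereo_tone(depth_frame, duration=0.05):
    """Turn one Kinect-style depth frame (metres) into a short stereo tone.

    The left/right channels sonify the nearest obstacle in the left and
    right halves of the frame, giving a coarse sense of direction.
    """
    h, w = depth_frame.shape
    left_d = depth_frame[:, : w // 2].min()
    right_d = depth_frame[:, w // 2 :].min()
    t = np.linspace(0.0, duration, int(SAMPLE_RATE * duration), endpoint=False)
    left = np.sin(2 * np.pi * depth_to_frequency(left_d) * t)
    right = np.sin(2 * np.pi * depth_to_frequency(right_d) * t)
    return np.stack([left, right], axis=1)  # shape: (samples, 2)

# Example: a wall 1 m away on the left, open space on the right,
# using the Kinect v2 depth resolution of 512 x 424.
frame = np.full((424, 512), 4.0)
frame[:, :256] = 1.0
chunk = stereo_tone(frame)  # higher pitch on the left channel
```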
The virtual simulation acts as a full representation of the real environment – coded in Unity and ready for the HTC Vive. A sound wave layer is "placed" over the real world and, with every auditory interaction, appears as a fractal visual echo.
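The Unity implementation itself is not published; as a language-agnostic sketch of the underlying logic, such a layer can be modeled as an emitter that spawns expanding, fading wavefronts at each sound event. All names and parameter values below are illustrative assumptions:

```python
import math
from dataclasses import dataclass, field

@dataclass
class Wavefront:
    """One visual echo ring, spawned at a sound event (illustrative model)."""
    origin: tuple     # (x, y, z) position of the sound event
    age: float = 0.0  # seconds since the event

@dataclass
class EchoLayer:
    speed: float = 3.0     # ring expansion speed in m/s (assumed value)
    lifetime: float = 2.0  # seconds before a ring fades out completely
    fronts: list = field(default_factory=list)

    def on_sound_event(self, position):
        """Spawn a new wavefront wherever a sound is emitted."""
        self.fronts.append(Wavefront(origin=position))

    def update(self, dt):
        """Advance all rings; drop those that have fully faded."""
        for f in self.fronts:
            f.age += dt
        self.fronts = [f for f in self.fronts if f.age < self.lifetime]

    def intensity_at(self, point):
        """Visual brightness at a surface point: bright near each
        expanding wavefront, fading as the ring ages."""
        total = 0.0
        for f in self.fronts:
            dist = math.dist(point, f.origin)
            radius = f.age * self.speed
            band = math.exp(-((dist - radius) ** 2) / 0.1)  # ring band
            fade = 1.0 - f.age / self.lifetime
            total += band * fade
        return min(total, 1.0)
```

In a game engine this per-point intensity would typically live in a shader, sampled across the reconstructed surfaces rather than evaluated point by point on the CPU.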