
Platform Project 2

Exploring the world of spatial sound while building intuition for the complexities of algorithms.



In February 2024, the Platform Project of Emergence 2 began work on a new project. Its goal was to use an innovative spatial sound system called 4DSOUND to make complex algorithmic technologies more intuitive: to make them something you can experience. We elaborated on this goal by formulating a concrete problem to address:

When an AI system behaves ‘human-like’ in some specific task, it is easy to assume that it experiences the world in a ‘human’ way, which is to say similarly complete and generalized. This ascribes to it competencies and understanding that it does not actually possess.

The way AI systems interpret our world is unlike the way we interpret it ourselves. Though their results can be accurate, it is important to be aware of this difference.

Research question

From this problem, we formulated a research question, the central question we wanted to answer with our project:

How can we utilize the 4DSOUND system to shed light on the inhuman way AI systems process information?

Within this framework we started experimenting with sound as a medium to tell these stories: stories of algorithmic translation, interpretation, imagination. Below are a few of these explorations.


Behind The Scenes


What we see

[Image: Nick Cave]

What the computer sees

An early experiment entailed translating an audio file into an image, modifying that image in some way, and then translating the image back into an audio file. In this example we used the song ‘Dreams’ by Fleetwood Mac. The effect is striking: the song comes back audibly marked by its detour through the visual domain.
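The principle behind this experiment can be sketched in a few lines. The following is a minimal illustration in Python/NumPy, not the actual tool we used: it maps audio samples in [-1, 1] onto 8-bit grayscale pixels, applies a crude image edit (posterization, a stand-in for whatever modification one might make), and maps the pixels back to audio. The function names and the choice of edit are ours, for illustration only.

```python
import numpy as np

def audio_to_image(samples: np.ndarray, width: int = 256) -> np.ndarray:
    """Map audio samples in [-1, 1] to rows of an 8-bit grayscale image."""
    # Pad with silence so the sample count fills whole image rows.
    pad = (-len(samples)) % width
    padded = np.pad(samples, (0, pad))
    pixels = np.round((padded + 1.0) * 127.5).astype(np.uint8)
    return pixels.reshape(-1, width)

def image_to_audio(image: np.ndarray) -> np.ndarray:
    """Invert the mapping: 8-bit pixels back to samples in [-1, 1]."""
    return image.astype(np.float64).ravel() / 127.5 - 1.0

def posterize(image: np.ndarray, levels: int = 4) -> np.ndarray:
    """A crude 'image edit': quantize the picture to a few grey levels."""
    step = 256 // levels
    return (image // step) * step
```

Translating back without any edit loses almost nothing (only 8-bit rounding), but any change made in the image domain, like the posterization above, comes back as audible distortion in the audio domain.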

In another example we used a text by the French author and theorist Roland Barthes from his book A Lover’s Discourse, which reads as an encyclopedia of human love. This text was then read by a computer-generated voice, a pointed contrast to the deeply human and emotional text.

A similar operation was performed on this audio file: we translated it to an image, printed the image out, photographed the print, and translated the photograph back into an audio file. The result: mostly noise. Data changes as it moves through different media, in this case the printer and the lens and sensor of the camera. Information, we might say, is lost along the way.
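The print-and-photograph step can be thought of as a lossy channel acting on the image. As a rough sketch (our own simplification, not a physical model of any particular printer or camera), it can be imitated as a slight blur plus random sensor noise:

```python
import numpy as np

def simulate_print_and_photograph(image: np.ndarray,
                                  noise_std: float = 8.0,
                                  seed: int = 0) -> np.ndarray:
    """Crudely model the print -> camera chain as blur plus sensor noise."""
    rng = np.random.default_rng(seed)
    img = image.astype(np.float64)
    # A 3-tap horizontal box blur stands in for optical blur.
    blurred = (np.roll(img, -1, axis=1) + img + np.roll(img, 1, axis=1)) / 3.0
    # Additive Gaussian noise stands in for the camera sensor.
    noisy = blurred + rng.normal(0.0, noise_std, img.shape)
    return np.clip(np.round(noisy), 0, 255).astype(np.uint8)
```

Run on the image form of an audio file, every pixel is nudged away from its original value, which is why the audio that comes back out of this chain is mostly noise.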
