
Platform Project 2

Exploring the world of spatial sound while building intuition for the complexities of algorithms.

 Explore our process timeline here ↓ 

CONCEPT RESEARCH

Introduction

In February 2024, the Platform Project of Emergence 2 began working on a new project. The goal of this project was to use an innovative spatial sound system called 4DSOUND to make complex algorithmic technologies more intuitive: to make them something you can experience. We elaborated on this goal by formulating a concrete problem to address:

When an AI system behaves ‘human-like’ in some specific task, it is easy to assume that it experiences the world in a ‘human’ way, which is to say similarly complete and generalized. This ascribes to it competencies and understanding that it does not actually possess.

The way AI systems interpret our world is unlike the way we do ourselves. Though their results can be accurate, being aware of this difference is important.

Research question

From this problem, we formulated the central research question we wanted to answer with our project:

How can we utilize the 4DSOUND system to shed light on the inhuman way AI systems process information?

Within this framework we started experimenting with sound as a medium to tell these stories: stories of algorithmic translation, interpretation, and imagination. Below are a few of these explorations.

1 - SEE WHAT A COMPUTER SEES 

What we see

[Image: low-resolution photo of Nick Cave]

What the computer sees

In this experiment we researched how image recognition algorithms function. Often, these systems interpret image data as a string of values, each representing a single pixel. To get an idea of what this actually looks like, we took a simple, very low-resolution image of the singer Nick Cave and represented it the way a computer might see it: pixel by pixel. Each frame of a 60 fps video represented one pixel. The result is, of course, completely meaningless to us; it would be impossible to reconstruct the image just from watching the video. To a computer, however, this data is completely valid and useful. An interesting contrast.
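The core of this experiment can be sketched in a few lines. This is a minimal, hypothetical illustration: it assumes a tiny grayscale image stored as a nested list of brightness values, whereas the real experiment used an actual photo and rendered each value as a video frame.

```python
def pixels_as_frames(image):
    """Flatten an image into the sequence of single-pixel values a video
    would show one frame at a time, row by row."""
    return [value for row in image for value in row]

# A fabricated 2x2 grayscale "image" (0 = black, 255 = white).
image = [
    [0, 128],
    [255, 64],
]

frames = pixels_as_frames(image)
print(frames)  # [0, 128, 255, 64]

# At 60 frames per second, each pixel occupies 1/60th of a second of video,
# so even this 2x2 image takes 4 frames to "show".
print(len(frames) / 60)
```

Seen this way, the data is trivially machine-readable but, as the experiment demonstrates, perceptually meaningless to a human viewer.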

2 - AUDIO TO IMAGE TO AUDIO

Original version

Dreams - Fleetwood Mac


Blurred version

Dreams - Fleetwood Mac


An early experiment we did entailed translating an audio file into an image, modifying that image in some way (here, blurring it), and then translating the image back into an audio file. In this example we used the song ‘Dreams’ by Fleetwood Mac. The effect is interesting.
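The round trip can be sketched as follows. This is a simplified, hypothetical version under stated assumptions: mono samples in the range 0–255 laid out as pixel rows, and a simple horizontal box blur standing in for whatever image manipulation was actually used; real audio would be read from and written back to a WAV file.

```python
def to_image(samples, width):
    """Lay the 1-D sample stream out as rows of pixels."""
    return [samples[i:i + width] for i in range(0, len(samples), width)]

def box_blur(image):
    """Blur each pixel with its horizontal neighbours (a 1x3 box filter)."""
    blurred = []
    for row in image:
        new_row = []
        for x in range(len(row)):
            window = row[max(0, x - 1):x + 2]
            new_row.append(sum(window) // len(window))
        blurred.append(new_row)
    return blurred

def to_audio(image):
    """Flatten the image back into a sample stream."""
    return [pixel for row in image for pixel in row]

samples = list(range(0, 160, 10))      # 16 fabricated samples
image = to_image(samples, width=4)     # a 4x4 "picture" of the sound
distorted = to_audio(box_blur(image))  # the sound after image-space blurring
print(distorted)
```

Without the blur step, the round trip is lossless (`to_audio(to_image(...))` returns the original samples); any change made in image space therefore maps directly to audible distortion.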

3 - IMAGINATION IN DISTORTION

Original version


Distorted version


A further exploration of the experiment described above concerned another text by Barthes. This time, the text was read aloud both by a human and by a computer. Overlaying these two voices once again resulted in something most would call ‘noise’: unintelligible speech. When we fed this audio into a speech-to-text algorithm, however, it came back with something interesting: a Russian text that, when translated, reads like contemporary poetry. What is interesting is that the distortion caused by overlaying the audio files actually birthed something new. Distortion, it seems, is key to the generation of new ideas.
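The overlay step itself is simple to sketch. This is a hypothetical illustration, assuming two equal-length mono recordings represented as lists of samples in the range −1.0 to 1.0; the real experiment mixed full recordings before passing the result to a speech-to-text system.

```python
def overlay(a, b):
    """Mix two recordings by averaging them sample by sample.
    Averaging (rather than summing) keeps the result within range."""
    return [(x + y) / 2 for x, y in zip(a, b)]

# Fabricated sample values standing in for the two voice recordings.
human = [0.0, 0.5, -0.5, 1.0]
computer = [0.2, -0.5, 0.5, 0.0]

mixed = overlay(human, computer)
print(mixed)  # [0.1, 0.0, 0.0, 0.5]
```

Note how samples of opposite sign cancel toward zero: the overlay does not just add the two voices together but interferes with both, which is what turns two intelligible readings into ‘noise’.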
