“We see a lot of landscape, and think of a lot of thoughts. The thoughts re-construct another scape. People, nature, and city. In the many scenes we see and feel, what does the machine think?”

In 2017, we proposed "NEUROSCAPE," a system for artificial soundscapes based on multi-modal connections between deep neural networks. "NEUROSCAPE" combines the words "neuron" and "landscape," denoting a memory-scape restructured by artificial neural networks.

We developed a system that analyzes an image of natural or urban scenery with artificial intelligence algorithms and then automatically maps it to corresponding sounds and images.

Given an input landscape image of a city or nature, the system first detects word-level elements using a label-detection algorithm. The detected words are then matched against the 527 category keywords of an audio dataset using GloVe word embeddings. Finally, the matched keywords retrieve the most relevant audio files and images from the sound library through a sound-tagging algorithm.
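The keyword-matching step above can be sketched as a nearest-neighbor search in word-embedding space: each detected label is compared to each audio-category keyword by cosine similarity, and the closest category wins. The following is a minimal illustration only; the embedding values and category names are hypothetical stand-ins (a real system would load pretrained GloVe vectors and use the full set of 527 category keywords).

```python
import numpy as np

# Toy GloVe-style embeddings (hypothetical values for illustration;
# a real system would load pretrained GloVe vectors from disk).
embeddings = {
    "ocean":   np.array([0.9, 0.1, 0.0]),
    "wave":    np.array([0.8, 0.2, 0.1]),
    "traffic": np.array([0.1, 0.9, 0.3]),
    "car":     np.array([0.2, 0.8, 0.4]),
    "bird":    np.array([0.3, 0.1, 0.9]),
}

# A few example audio-category keywords (stand-ins for the 527 classes).
audio_categories = ["wave", "car", "bird"]

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def nearest_category(label, categories=audio_categories):
    """Return the audio category whose embedding is closest to the label's."""
    vec = embeddings[label]
    return max(categories, key=lambda c: cosine(vec, embeddings[c]))

print(nearest_category("ocean"))    # → wave
print(nearest_category("traffic"))  # → car
```

The retrieved category keyword would then be used to query the sound library for matching audio files.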

Through this system, we have presented several cross-media artworks. These artworks aim to raise a fundamental question about the "coexistence of humans and technology."

Seungsoon Park, Composer & Media Artist (Neutune)
Jongpil Lee, Algorithm Developer (Neutune)
Juhan Nam, Advisor (KAIST & Neutune)