NEUROSCAPE V1
2017
MIXED-MEDIA INSTALLATION
VIDEO
SOUND
WEBCAM
CCD CAMERA
DEEP LEARNING ALGORITHM
SEUNGSOON PARK
JONGPIL LEE
EXHIBITED AT
Art Factory, Daegu, Korea, 2018
KAIST Vision Hall, Daejeon, Korea, 2018
Platform-L, Seoul, Korea, 2017
SEMA Gallery, Seoul, Korea, 2017
KINTEX, Ilsan, Korea, 2017
Supported by
Ministry of Science and ICT
Korea Foundation for the Advancement of Science & Creativity
“We see many landscapes and think many thoughts. The thoughts reconstruct another scape: people, nature, and the city. In the many scenes we see and feel, what does the machine think?”
In 2017, we proposed “NEUROSCAPE,” a system for artificial soundscapes based on multi-modal connections between deep neural networks. “NEUROSCAPE” combines the words “neuron” and “landscape,” and refers to a memory-scape restructured by artificial neural networks. We developed a system that analyzes an image of natural or urban scenery with artificial intelligence algorithms and automatically maps it to corresponding sounds and images.
Given a landscape image of a city or nature, the system detects word-level elements through a label-detection algorithm. The detected words are compared, using GloVe word embeddings, against the 527 category keywords of an audio dataset. The best-matching keywords then retrieve the most relevant audio files and images from the sound library through a sound-tagging algorithm.
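The keyword-matching step can be sketched as follows. This is a minimal illustration under stated assumptions, not the installation's actual code: it assumes a plain-text GloVe vector file and a Python list holding the 527 audio-category keywords; the file path, list name, and functions are illustrative placeholders.

```python
# Minimal sketch of the label-to-sound-category matching step (illustrative,
# not the artwork's code). Assumes a plain-text GloVe file such as
# glove.6B.100d.txt and a list of 527 audio-category keyword strings.

import numpy as np

def load_glove(path):
    """Load GloVe word vectors from a whitespace-separated text file."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def rank_audio_categories(detected_labels, audio_categories, glove, top_k=6):
    """Rank audio-category keywords by embedding similarity to the labels
    detected in a landscape image; return the top_k category names."""
    labels = [w.lower() for w in detected_labels if w.lower() in glove]
    scores = []
    for category in audio_categories:
        words = [w for w in category.lower().split() if w in glove]
        if not words or not labels:
            continue
        cat_vec = np.mean([glove[w] for w in words], axis=0)  # average multi-word keywords
        sim = max(cosine(glove[w], cat_vec) for w in labels)  # best match to any detected label
        scores.append((sim, category))
    return [c for _, c in sorted(scores, reverse=True)[:top_k]]

# Example usage (labels would come from the image label-detection step):
# glove = load_glove("glove.6B.100d.txt")
# top6 = rank_audio_categories(["skyscraper", "traffic", "crowd"],
#                              audio_category_keywords, glove)
```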
With this system, we produced a mixed-media installation that allows visitors to capture a desired landscape scene and control the resulting sounds. The system responds with the six images and sounds most relevant to the captured image, generating a kind of audio/visual collage in real time. The artwork aims to raise a fundamental question about the “coexistence of humans and technology.”
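A rough sketch of the interactive loop might look like the following. It assumes OpenCV for webcam capture; analyze_frame() and play_collage() are placeholder stubs standing in for the label-detection, keyword-matching, retrieval, and playback stages described above.

```python
# Rough sketch of the interactive capture loop (illustrative assumptions only).
# Requires OpenCV (cv2) and a connected webcam.

import cv2

def analyze_frame(frame, top_k=6):
    """Placeholder: run label detection + keyword matching and return the
    top_k most relevant (sound file, image file) pairs."""
    return []

def play_collage(assets):
    """Placeholder: start the audio/visual collage for the retrieved assets."""
    print("playing collage:", assets)

def run_installation(camera_index=0):
    cap = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            cv2.imshow("NEUROSCAPE", frame)
            key = cv2.waitKey(1) & 0xFF
            if key == ord(" "):                 # visitor captures the current scene
                play_collage(analyze_frame(frame, top_k=6))
            elif key == ord("q"):               # quit
                break
    finally:
        cap.release()
        cv2.destroyAllWindows()

if __name__ == "__main__":
    run_installation()
```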