My name is 김수현 (金琇賢, Soohyun Kim).
Here is my CV: (link)
Contact: soohyun@ccrma.stanford.edu
Soohyun Kim is a PhD student at CCRMA, Stanford University, advised by Prof. Chris Chafe, Prof. Julius Smith, and Prof. Ge Wang.
He also completed his master's degree at CCRMA, advised by Prof. Julius Smith.
His primary research interests lie in (1) creative and human-playable sound synthesis through neural networks (Differentiable DSP/Physical Modeling) and (2) human-AI interaction design for new music performance.
He holds a bachelor's degree in electrical engineering with a minor in physics from the Korea Advanced Institute of Science and Technology (KAIST) (GPA: 4.09/4.3, major GPA: 4.16/4.3, Summa Cum Laude), where he did research internships at the following labs:
- Music and Audio Computing Lab (KAIST), Prof. Juhan Nam
- Statistical Speech & Sound Computing Lab (KAIST), Prof. Hoi-rin Kim
- Smart Sound Systems Lab (KAIST), Prof. Jungwoo Choi
Beyond his academic work, Soohyun is a music producer and recording/mixing engineer trained in South Korea, with experience in multiple popular music production projects. As a musician, he is a guitarist and singer.
Soohyun's research objective is to shape and guide the future of AI music performance and composition, steering it towards a more artistic, aesthetic, and playful paradigm. His ambition is to elevate AI from an automatic audio tool to an intrinsic artistic medium for music performance and composition. He aims to explore and demonstrate how musicians can expand their artistic expression through AI, and how AI can offer enjoyable and playful musical experiences for both amateur musicians and the general public. Two questions guide this work:
- What unique qualities does AI inherently possess as an artistic medium, particularly in the realms of music performance and composition?
- What new musical expressions are achievable only through the unique abilities of AI?
Soohyun completed an Audio AI/DSP internship at Samsung Research America's Audio Lab, where he worked on research projects developing new machine learning and DSP-based technologies for audio, electroacoustics, and multimedia, including nonlinearity control of loudspeakers.
My Research Interests:
1) Human-AI interaction design for new music performance
- By utilizing AI, I aim to expand the realm of music performance through new interactions that were previously difficult to realize with traditional sensor devices alone. This approach doesn't merely map the coordinates of body movements directly to sound; instead, it uses AI's ability to understand the semantic meaning of gestures or postures, allowing mappings with a stronger emotional connection and a more expressive playing style (a minimal sketch of this kind of mapping appears after this list). While maintaining this research ethos, my research focuses on expanding the scope to include other instruments, different elements of sound expression, and other types of non-verbal interaction. Ultimately, my goal is to create and showcase new and unique music performance pieces.
2) Creative and unconventional sound design through neural networks
- For AI sound synthesis, what I pursue is not just another generative model that simply mimics the timbres of existing instruments. Instead, I aim to create unprecedented timbres, sound controls, and interactions that people have never heard or experienced before; fundamentally, I want to create a new synthesizer instrument that offers unique musical expressions. My research focuses on how to actually utilize a neural network's latent space for unique timbres and musical expressions, and further, on what the most suitable human interaction and interface designs are for such methods (see the second sketch after this list).
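To give a concrete flavor of interest (1), here is a minimal, hypothetical sketch in PyTorch: rather than wiring raw joint coordinates straight to sound, a small network first summarizes a gesture into a semantic embedding, and a second mapping turns that embedding into synthesis controls. The module names, layer sizes, and the three control parameters are illustrative placeholders, not an implementation of any specific system of mine.

```python
# Illustrative sketch only: a gesture is summarized into an embedding,
# and the embedding (not the raw coordinates) drives synthesis controls.
import torch
import torch.nn as nn

class GestureEncoder(nn.Module):
    """Summarizes a sequence of body-joint coordinates into a gesture embedding."""
    def __init__(self, n_joints=17, coord_dim=3, embed_dim=16):
        super().__init__()
        self.rnn = nn.GRU(n_joints * coord_dim, 64, batch_first=True)
        self.proj = nn.Linear(64, embed_dim)

    def forward(self, pose_seq):           # pose_seq: (batch, time, joints*coords)
        _, h = self.rnn(pose_seq)          # h: (1, batch, 64) final hidden state
        return self.proj(h[-1])            # (batch, embed_dim) gesture embedding

class EmbeddingToSynthParams(nn.Module):
    """Maps a gesture embedding to a few synthesizer controls (pitch, brightness, gain)."""
    def __init__(self, embed_dim=16, n_params=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(embed_dim, 32), nn.Tanh(),
                                 nn.Linear(32, n_params))

    def forward(self, z):
        return torch.sigmoid(self.net(z))  # normalized control values in [0, 1]

# Toy usage: a 2-second pose sequence at 30 fps for one performer.
pose_seq = torch.randn(1, 60, 17 * 3)
controls = EmbeddingToSynthParams()(GestureEncoder()(pose_seq))
print(controls)  # three control values, e.g. pitch, brightness, gain
```

In a real system the encoder would of course be trained on gesture data; the point of the sketch is only the indirection through a learned, semantically meaningful embedding.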
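And a second minimal sketch for interest (2): treating the latent space of a decoder as a timbre space. Here a small, randomly initialized (untrained) decoder maps a latent vector to harmonic amplitudes, an additive synthesizer renders them, and moving through the latent space becomes a way of "playing" timbre. The decoder architecture and all sizes are hypothetical, not a specific published model.

```python
# Illustrative sketch only: latent vector -> harmonic amplitudes -> additive synthesis.
import numpy as np
import torch
import torch.nn as nn

SR = 16000           # sample rate (Hz)
N_HARMONICS = 32     # number of sinusoidal partials

decoder = nn.Sequential(            # latent vector -> harmonic amplitude distribution
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, N_HARMONICS), nn.Softmax(dim=-1),
)

def render(z, f0=220.0, dur=1.0):
    """Additively synthesize one note whose timbre is decoded from latent vector z."""
    amps = decoder(z).detach().numpy()                     # (N_HARMONICS,)
    t = np.arange(int(SR * dur)) / SR
    harmonics = np.arange(1, N_HARMONICS + 1)[:, None]     # partial numbers 1..N
    audio = (amps[:, None] * np.sin(2 * np.pi * f0 * harmonics * t)).sum(axis=0)
    return audio / np.max(np.abs(audio))                   # normalize

# "Play" the latent space: interpolate between two random timbre points.
z_a, z_b = torch.randn(8), torch.randn(8)
notes = [render((1 - a) * z_a + a * z_b) for a in np.linspace(0, 1, 5)]
print([n.shape for n in notes])   # five 1-second notes with gradually morphing timbre
```

The interesting research questions for me start where this sketch stops: what structure the latent space should have, and what interface lets a performer navigate it musically.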
I am a professional recording/mixing engineer and music producer trained in South Korea.
As a musician, I am a singer and guitarist (acoustic/electric).
My Guitar Playing:
My Singing Voice: