About Me

My name is 김수현 (金琇賢, Soohyun Kim).

 

Here is my CV: (link)

Contact: soohyun@ccrma.stanford.edu

 

* Starting in September 2024, I am continuing on to pursue my PhD in Computer-Based Music Theory and Acoustics at CCRMA.

My advisor is Prof. Chris Chafe.

 

I am currently a master's student in Music, Science and Technology (MST) at CCRMA (Center for Computer Research in Music and Acoustics), Stanford University. 

 

My primary advisor is Prof. Julius Smith (for my AI music/audio signal processing research),

and another mentor of mine is Prof. Ge Wang (for my interactive-AI-based music performance and instrument design projects).

 

At CCRMA, I am a member of the AI TeamDSP research group, as well as Prof. Ge Wang's VR Design Lab and the Stanford Laptop Orchestra (SLOrk).

 

I completed an Audio AI/DSP internship at SAMSUNG Research America's Audio Lab, where I worked on research projects developing new machine learning and DSP-based technologies for audio, electroacoustics, and multimedia (including non-linearity control of loudspeakers).

 

For my BS degree, I majored in Electrical Engineering and minored in Physics at the Korea Advanced Institute of Science and Technology (KAIST) (GPA: 4.09/4.3; Major: 4.16/4.3; Summa Cum Laude), where I did research internships at the following labs:

Music and Audio Computing Lab (KAIST), Prof. Juhan Nam

Statistical Speech & Sound Computing Lab (KAIST), Prof. Hoi-rin Kim

Smart Sound Systems Lab (KAIST), Prof. Jungwoo Choi

 

I am also a recording/mixing engineer trained in South Korea; I completed a professional sound engineer course at Record Factory (affiliated with Full Sail University, an Avid professional learning partner). In addition, I am an Avid certified user of Pro Tools.

 

 

My Research Interests:

 

1) Human-AI interaction design for new music performance

- By utilizing AI, I aim to expand the realm of music performance through new interactions that were previously difficult to realize with traditional sensor devices alone. This approach does not merely map the coordinates of body movements directly to sound; instead, it utilizes AI's ability to understand the semantic meaning of gestures and postures, allowing for more emotionally bonded mappings and a more expressive playing style. While maintaining this research ethos, my research focuses on expanding the scope to include other instruments, different elements of sound expression, and other types of non-verbal interaction. Ultimately, my goal is to create and showcase new and unique music performance pieces.
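
As a rough, purely illustrative sketch (not my actual system or code), the hypothetical PyTorch module below first encodes raw pose keypoints into a compact gesture embedding and only then maps that embedding to synthesizer parameters, instead of wiring joint coordinates to sound directly. The module name, keypoint count, layer sizes, and number of synthesis parameters are all assumptions made up for this example.

```python
# Illustrative sketch: semantic gesture embedding -> synthesis parameters.
# (Hypothetical names and dimensions; not an actual performance system.)
import torch
import torch.nn as nn

class GestureToSound(nn.Module):
    def __init__(self, n_keypoints=17, embed_dim=16, n_synth_params=4):
        super().__init__()
        # Encoder: raw (x, y) keypoints -> compact gesture embedding
        self.encoder = nn.Sequential(
            nn.Linear(n_keypoints * 2, 64),
            nn.ReLU(),
            nn.Linear(64, embed_dim),
        )
        # Head: gesture embedding -> synthesis parameters (e.g. pitch, brightness)
        self.to_params = nn.Sequential(
            nn.Linear(embed_dim, n_synth_params),
            nn.Sigmoid(),  # keep control values in [0, 1]
        )

    def forward(self, keypoints):
        z = self.encoder(keypoints.flatten(start_dim=1))
        return self.to_params(z)

if __name__ == "__main__":
    model = GestureToSound()
    pose = torch.rand(1, 17, 2)   # one frame of 17 (x, y) keypoints
    params = model(pose)          # -> tensor of shape (1, 4)
    print("synth parameters:", params)
```

The point of the intermediate embedding is that mappings can be defined in terms of what a gesture means rather than where the joints happen to be.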

 

2) Creative and unconventional sound design through neural networks

- For AI sound synthesis, what I pursue is not just another generative model that simply mimics the timbres of existing instruments. Instead, I aim to create unprecedented timbres, sound controls, and interactions that people have never heard or experienced before. Fundamentally, I want to create a new synthesizer instrument that offers unique musical expression. My research focuses on designing how to actually utilize a neural network's latent space for unique timbres and musical expression and, further, on what the most suitable human interaction and interface designs are for such methods.
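
As another purely illustrative sketch (again, not my actual synthesizer), the hypothetical decoder below maps a latent vector to harmonic amplitudes that are rendered with simple additive synthesis; interpolating between two latent vectors then produces timbres that lie "between" learned points. The class name, dimensions, and the additive-synthesis rendering are assumptions chosen only to show the general idea of treating a latent space as a playable control surface.

```python
# Illustrative sketch: a latent space as the control surface of a synthesizer.
# (Hypothetical decoder; not an actual trained model.)
import numpy as np
import torch
import torch.nn as nn

class TimbreDecoder(nn.Module):
    def __init__(self, latent_dim=8, n_harmonics=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 64),
            nn.ReLU(),
            nn.Linear(64, n_harmonics),
            nn.Softplus(),  # non-negative harmonic amplitudes
        )

    def forward(self, z):
        return self.net(z)

def render(amps, f0=220.0, sr=16000, dur=1.0):
    """Additive synthesis: sum of harmonics weighted by decoded amplitudes."""
    t = np.arange(int(sr * dur)) / sr
    harmonics = np.arange(1, len(amps) + 1)
    return (amps[:, None] * np.sin(2 * np.pi * f0 * harmonics[:, None] * t)).sum(0)

if __name__ == "__main__":
    decoder = TimbreDecoder()
    z_a, z_b = torch.randn(8), torch.randn(8)
    for alpha in (0.0, 0.5, 1.0):  # walk the latent space between two timbres
        z = (1 - alpha) * z_a + alpha * z_b
        amps = decoder(z).detach().numpy()
        audio = render(amps)
        print(f"alpha={alpha}: rendered {audio.shape[0]} samples")
```

The interaction-design question is then how a performer should navigate such a latent space in real time, which is exactly where interface design enters the research.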

 

Me recording a rock band

 

As a musician, I am a singer and guitarist (acoustic/electric).

 

Me with my beloved guitar, Gibson ES-355

 

 

Blue Bossa Guitar Improvisation

 

 

 

My Singing Voice