Real-time guitar effects control via facial expression and gesture detection

The video above demonstrates the full system in action.

 

'Guitar faces' are the characteristic facial expressions guitarists make while playing, reflecting the emotion of the performance. The objective of this project was to reverse that interaction: instead of the music shaping the performer's face, the performer's face shapes the sound. To accomplish this, I developed AI tools that let guitarists manipulate the guitar sound through audio effects that respond to their facial expressions and gestures.

 

FaceOSC tracks the performer's face, Wekinator maps the tracked features and gestures to effect parameters, and the output stage applies those parameters to the guitar signal, so guitarists can control their sound effects with their facial expressions and gestures.
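
To make the data flow concrete, below is a minimal ChucK sketch of the detection hop, assuming FaceOSC's default output port (8338), Wekinator's default input port (6448), and Wekinator's default input message (/wek/inputs). The choice of mouth height and left-eyebrow position as the two input features is my own, purely for illustration; the actual feature set may differ.

```
// Minimal FaceOSC -> Wekinator bridge (illustrative sketch).
// Assumes FaceOSC's default output port (8338) and Wekinator's
// default input port (6448) with a two-float /wek/inputs message.

OscIn oin;
OscMsg msg;
8338 => oin.port;                            // listen where FaceOSC sends
oin.addAddress("/gesture/mouth/height, f");  // mouth openness
oin.addAddress("/gesture/eyebrow/left, f");  // left eyebrow height

OscOut xmit;
xmit.dest("localhost", 6448);                // Wekinator's input port

0.0 => float mouthHeight;
0.0 => float eyebrowLeft;

while (true)
{
    oin => now;                              // wait for incoming OSC
    while (oin.recv(msg))
    {
        if (msg.address == "/gesture/mouth/height")
            msg.getFloat(0) => mouthHeight;
        else if (msg.address == "/gesture/eyebrow/left")
            msg.getFloat(0) => eyebrowLeft;
    }
    // forward the current feature vector to Wekinator
    xmit.start("/wek/inputs");
    mouthHeight => xmit.add;
    eyebrowLeft => xmit.add;
    xmit.send();
}
```

Wekinator's listening port and input message name are configurable in its GUI, so this bridge is just one possible wiring; what matters is the shape of the data, a fixed-length float vector per frame.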

 

Traditionally, guitarists have relied on foot pedals to control their sound effects. However, I found that facial expression and gesture detection can provide more intuitive and expressive control, I think because facial expressions and gestures correlate more directly with the emotion of a performance than pressing a pedal does. The approach is not limited to the guitar: it could be extended to other instruments or to synthesized computer music.

 

 

How it works

Face detection: FaceOSC (https://github.com/kylemcdonald/ofxFaceTracker/releases)

Neural network (face detection data -> guitar effects parameters): Wekinator (http://www.wekinator.org/)

Audio signal processing: ChucK (https://chuck.stanford.edu/); a sketch of this stage appears below.
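
To make the audio stage concrete as well, here is a minimal ChucK sketch, assuming Wekinator is trained to produce two continuous outputs in the range 0 to 1, sent to its default output port (12000) as /wek/outputs. The low-pass filter plus echo chain and the parameter ranges are illustrative choices, not necessarily the exact effects heard in the video.

```
// Guitar in -> low-pass filter -> echo -> out, with both effect
// parameters driven by Wekinator (illustrative sketch; the effect
// chain and ranges are assumptions, not the project's exact setup).

adc => LPF filt => Echo echo => dac;
1000.0 => filt.freq;                  // initial cutoff (Hz)
2.0 => filt.Q;
1::second => echo.max;                // allow delays up to 1 s
250::ms => echo.delay;
0.0 => echo.mix;                      // start fully dry

OscIn oin;
OscMsg msg;
12000 => oin.port;                    // Wekinator's default output port
oin.addAddress("/wek/outputs, f f");  // two continuous outputs in [0,1]

while (true)
{
    oin => now;                       // wait for new parameter values
    while (oin.recv(msg))
    {
        msg.getFloat(0) => float p0;  // e.g., trained on mouth height
        msg.getFloat(1) => float p1;  // e.g., trained on eyebrow height

        // scale the [0,1] outputs into musically useful ranges
        (200.0 + p0 * 4800.0) => filt.freq;            // cutoff: 200-5000 Hz
        Math.max(0.0, Math.min(p1, 1.0)) => echo.mix;  // wet/dry: 0-1
    }
}
```

With this mapping, opening the mouth sweeps the filter open and raising an eyebrow brings in the echo; retraining Wekinator changes the behavior without touching the ChucK code.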