For my final project, I want to bring my passion for video and sound into ICM. I encountered Xavier Cha a little over a year ago when she gave a talk at my undergraduate studio art department, and I saw her video “abduct,” which has stuck in my mind ever since.
(image taken from Xavier Cha’s “abduct”)
It was a video of different actors’ changes in expression. The changes were sometimes subtle and other times unexpected.
Her other video “feedback” shows a group of actors performing audience reactions without a subject. The group is positioned in front of the camera, which creates the effect that the viewer on the other side of the camera is being watched. I was intrigued by the idea of observing someone as well as being observed. Taking that as an inspiration, I thought I could incorporate sound visualization into my videos.
I plan on making videos of my friend acting out different emotions (laughing, shock, anger, etc.) and dividing them into short clips. Each clip will be assigned to a different amplitude range. Interaction will take place through the user’s microphone input, picking up both the user and his/her surroundings. The actress’ laughing expression will change based on the audio level (from a giggle to big laughter). A finger snap will trigger a clapping action, and the sound of clapping will trigger tears. I also want to add a surprise reaction: if the user says a certain keyword, the actress will read a poem.
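A minimal sketch of how the amplitude-to-clip mapping might work. The clip names and threshold values are placeholders I made up for illustration; in the actual p5.js sketch, the level would come from p5.sound’s amplitude analysis rather than being passed in directly:

```javascript
// Map a microphone amplitude level (0.0–1.0) to an expression clip.
// Clip filenames and thresholds below are hypothetical placeholders.
function pickLaughClip(level) {
  if (level < 0.1) return "neutral.mp4";  // quiet room: resting face
  if (level < 0.3) return "giggle.mp4";   // soft sound: small giggle
  if (level < 0.6) return "laugh.mp4";    // louder: open laughter
  return "bigLaugh.mp4";                  // very loud: big laughter
}

// In p5.js, the level might be read roughly like this:
//   mic = new p5.AudioIn();
//   mic.start();
//   amp = new p5.Amplitude();
//   amp.setInput(mic);
//   let level = amp.getLevel();  // 0.0–1.0, polled each frame in draw()
```

Calling the sketch each frame with the current level and swapping the playing clip only when the returned name changes would avoid restarting the same video over and over.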
The poetry inspiration comes from another artist, Pascual Sisto, with whom I had the pleasure of doing a studio visit a year ago.
p5.Speech and p5.sound will be my main references. However, I also need a tool for recognizing non-speech sounds. From my research, people seem to suggest CMU Sphinx. I haven’t had the chance to look into their GitHub yet, but I will do that this week. Another question I have is whether I should create short video clips of the different expressions, as stated above, or use image sequences. Hope I can get some feedback in class tomorrow!
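For the keyword surprise, the check itself could be a small helper like the one below. The keyword "poem" and the `playPoemClip` hook are placeholders of mine; the p5.Speech wiring in the comment is a rough sketch of how I understand its recognition callback works, not tested code:

```javascript
// Check a recognized speech transcript for a trigger keyword,
// ignoring case. "poem" is a placeholder keyword for illustration.
function containsKeyword(transcript, keyword) {
  return transcript.toLowerCase().includes(keyword.toLowerCase());
}

// With p5.Speech, this might be wired up roughly as:
//   let rec = new p5.SpeechRec('en-US', () => {
//     if (containsKeyword(rec.resultString, 'poem')) {
//       playPoemClip();  // hypothetical function that starts the poem video
//     }
//   });
//   rec.continuous = true;  // keep listening instead of stopping after one phrase
//   rec.start();
```

Since recognition results arrive as full phrases, substring matching means “read me a poem” would trigger it just as well as the bare keyword.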