How I got a girlfriend from 1 Chuck program

Matthew Reed
4 min read · Feb 28, 2023


Just kidding, that’s just my friend, but I promise it’ll get me one soon.

auto_rizz.ck

My favorite kind of thing to make is a useless one, so this assignment was right up my alley.

auto_rizz.ck is a program that (in conjunction with auto_rizz_relay.ck, FaceOSC, and Wekinator) can make your sexy face just that much sexier.

I was inspired by a bunch of memes I’ve seen recently of people playing this song and making that face simultaneously to make a move on a girl. I thought, why not take out all the unnecessary steps, like having to press play, and simply have the camera detect when you’re making the face and play the song for you?

So, here is my first project with Wekinator. It’s given my friends and me so many laughs, so I think it ended up being more valuable than I anticipated.
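The general idea is simple: when the Wekinator classifier decides you’re making the face, start the song. A rough sketch of the trigger in ChucK, assuming Wekinator sends its class label as a single float on its default output (port 12000, address /wek/outputs), and with “the_song.wav” as a stand-in for the actual file:

```chuck
// Sketch: fire the song when Wekinator says "the face" is happening.
// Assumes Wekinator's default output port/address; filename is hypothetical.
OscIn oin;
OscMsg msg;
12000 => oin.port;
oin.addAddress( "/wek/outputs" );

SndBuf song => dac;
me.dir() + "the_song.wav" => song.read;   // hypothetical file
song.samples() => song.pos;               // park the playhead so nothing plays yet

0 => int playing;
while( true )
{
    oin => now;                           // wait for an OSC message
    while( oin.recv( msg ) )
    {
        msg.getFloat( 0 ) => float label;
        if( label > 0.5 && !playing )     // class 1 == the face
        {
            0 => song.pos;                // restart from the top
            1 => playing;
        }
        else if( label <= 0.5 ) 0 => playing;
    }
}
```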

synth.ck

This actually sounds terrible, but I need to go to bed.

I originally intended to connect FaceOSC to PhasePlant, which is a really cool synth plugin for any DAW. But after hours of trying to figure out how Logic handles OSC, I realized there is nothing logical about Logic except its name. So instead, I built a keyboard interface from scratch within ChucK for some reason.

Boom. Now I have a virtual keyboard.
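The skeleton of a virtual keyboard in ChucK looks something like this. This is a minimal sketch, assuming the row “ASDFGHJK” maps to one octave of white keys starting at middle C; the actual key layout in synth.ck may well differ:

```chuck
// Minimal virtual keyboard sketch: computer keys -> notes.
// Key-to-pitch mapping here is illustrative, not necessarily synth.ck's.
Hid kb;
HidMsg kmsg;
if( !kb.openKeyboard( 0 ) ) me.exit();

// triangle wave -> low pass filter -> envelope -> speakers
TriOsc tri => LPF lpf => ADSR env => dac;
env.set( 10::ms, 8::ms, .6, 120::ms );
2000 => lpf.freq;

"ASDFGHJK" => string keys;                   // one octave of white keys
[0, 2, 4, 5, 7, 9, 11, 12] @=> int steps[];  // major scale offsets

while( true )
{
    kb => now;
    while( kb.recv( kmsg ) )
    {
        if( kmsg.isButtonDown() )
        {
            // look up which white key (if any) was pressed
            for( 0 => int i; i < keys.length(); i++ )
            {
                if( keys.charAt( i ) == kmsg.ascii )
                {
                    Std.mtof( 60 + steps[i] ) => tri.freq;
                    env.keyOn();
                }
            }
        }
        else if( kmsg.isButtonUp() ) env.keyOff();
    }
}
```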

And I used the outputs of FaceOSC to change some of the parameters of the keyboard, including the attack time, release time, low pass filter frequency, and triangle wave width.
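The mapping itself can be sketched like so, assuming Wekinator’s four continuous outputs arrive as floats in [0, 1] at its default /wek/outputs address on port 12000. The parameter ranges below are illustrative guesses, not the ones synth.ck actually used:

```chuck
// Sketch: map four [0,1] Wekinator outputs onto synth parameters.
// Ranges are illustrative; Wekinator's default output port/address assumed.
OscIn oin;
OscMsg msg;
12000 => oin.port;
oin.addAddress( "/wek/outputs" );

TriOsc tri => LPF lpf => ADSR env => dac;

while( true )
{
    oin => now;
    while( oin.recv( msg ) )
    {
        // attack: 5..500 ms, release: 20..1000 ms
        ( 5 + msg.getFloat( 0 ) * 495 )::ms => dur attack;
        env.set( attack, 10::ms, .6, ( 20 + msg.getFloat( 1 ) * 980 )::ms );
        // low pass cutoff: 100..8000 Hz
        100 + msg.getFloat( 2 ) * 7900 => lpf.freq;
        // triangle width, clamped away from the degenerate edges
        Math.max( .01, Math.min( .99, msg.getFloat( 3 ) ) ) => tri.width;
    }
}
```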

I’m lowkey pretty disappointed with the results because what I wanted was a super expressive synth that adjusts its sound based on your face. What I got is a synth that can’t really make that many unique sounds and doesn’t even really sound good in the first place.

Maybe tomorrow I’ll figure out how to hook it up to PhasePlant, and this will be revolutionary.

dance_stem_splitter.ck

My final Wekinator project was a program that plays different stems of a song depending on how hard you’re dancing. Yeah, I know I already made two other FaceOSC projects, but it’s just quite entertaining.

For dance_stem_splitter.ck, I approximate the translational and rotational speed of my face by differencing adjacent values of the position (x, y) and orientation (x, y, z), then I use those to train a Wekinator model that plays different tracks based on how fast you’re dancing. Even though there are five stems, since I only input two variables, the expressivity is not actually that great. You can only really tell the difference between just the guitar stem and all the stems.
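The speed features can be sketched like this, assuming FaceOSC’s documented defaults (port 8338, /pose/position and /pose/orientation) and Wekinator’s default input (port 6448, /wek/inputs); dance_stem_splitter.ck may compute the differences slightly differently:

```chuck
// Sketch: difference adjacent FaceOSC pose samples to approximate speed
// and rotational speed, then forward both numbers to Wekinator as inputs.
OscIn fin;
OscMsg msg;
8338 => fin.port;     // FaceOSC's default port
fin.listenAll();

OscOut wek;
wek.dest( "localhost", 6448 );  // Wekinator's default input port

float px, py;         // previous position
float ox, oy, oz;     // previous orientation
0 => float speed;
0 => float rspeed;

while( true )
{
    fin => now;
    while( fin.recv( msg ) )
    {
        if( msg.address == "/pose/position" )
        {
            msg.getFloat( 0 ) => float x;
            msg.getFloat( 1 ) => float y;
            Math.sqrt( (x-px)*(x-px) + (y-py)*(y-py) ) => speed;
            x => px; y => py;
        }
        else if( msg.address == "/pose/orientation" )
        {
            msg.getFloat( 0 ) => float rx;
            msg.getFloat( 1 ) => float ry;
            msg.getFloat( 2 ) => float rz;
            Math.sqrt( (rx-ox)*(rx-ox) + (ry-oy)*(ry-oy) + (rz-oz)*(rz-oz) ) => rspeed;
            rx => ox; ry => oy; rz => oz;
        }
        // two features in, ready for Wekinator to train on
        wek.start( "/wek/inputs" );
        speed => wek.add;
        rspeed => wek.add;
        wek.send();
    }
}
```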

FaceOSC is also really bad at tracking a face, so it usually loses your face when you’re dancing too hard. It doesn’t really seem to matter that much, though, because it will just continue playing whatever stems were playing when it lost you, which is usually all of the stems.

Btw, these are stems from a song I’m working on.

The lights in the background are sound-sensitive, so they go off whenever the program is playing enough stems.

Reflection.ck

I think there is some real potential here. Honestly.

The first and last projects I did were kind of silly and pointless, but I think facial input for a synth could actually be really cool. Our faces are literally the most expressive body parts we have, so we should really be harnessing them to make some dope sounds. Also, this would lower the barrier to entry for playing cool sounds on a synth because facial expressions are far more intuitive than knobs and sliders. Another added benefit is that you can play with both hands while changing the synth’s knobs with your face.

I also realized that I didn’t really use the full power of machine learning here; I didn’t even get much of a taste. I only really used it as a quick way to convert k weird FaceOSC values into n variables between 0 and 1. It was useful, though, for quickly training on new people without having to guess and check interpolation values by hand. But that is all I really used Wekinator for: as a kind of linear interpolator between different gestures and facial expressions (even though it was technically a non-linear neural network).

My projects were definitely lacking in expressiveness for a few reasons. Firstly, I don’t know the ins and outs of ChucK, so I’m not quite sure how to synthesize the coolest sounds yet. Secondly, I found myself having to limit the number of input features in order to have the programs behave predictably and remain learnable. When I added too many features, it was difficult to juggle your eyebrows, mouth shape, speed of head, etc., all at the same time. Thirdly, I was limited by the technology. FaceOSC doesn’t work that well (at least on my face), and I found myself having to find the perfect lighting in order for it to accurately detect my mouth.

Where would I go from here if I had more time? I think I would definitely try to hook up FaceOSC to Logic because I already know how to make really dope sounds in Logic, so being able to control them with my face would be incredible.

I really enjoyed this assignment. I think it’s easy to adopt this toxic capitalistic mindset that everything you do or make has to be productive or “for” something. But when we follow that dogma, we forget to make beautiful things just because they’re beautiful, do funny things just because they’re funny, or make stupid projects just because they’re stupid. There’s so much beauty in doing things just because. It was nice to take a pause on life and make something just for the sake of making it (and for a grade, but that’s more of an afterthought).
