For this week's assignment I experimented with several of the available machine learning models. At first I wanted to make a machine that sings when users open their mouths, but I had difficulty achieving this with Google's speech tools, and I was too curious not to explore interaction based on hand position instead.
I was inspired by the idea of finger puppets -
I wanted to create four characters that speak out loud when they are near the microphone. For each collision between the microphone and a figure, I aim to add the ability to draw their words while they speak. (Right now the characters sound like a flute, but I'm thinking of changing that to distinct voices, or giving each of them a different flute.)
https://editor.p5js.org/mbs9316/sketches/WAF2-wcBT
But right now I couldn't get the drawing effect to work. :((((( I created a class that stores each collision between my fingers (figure and microphone) as an x, y point and pushes that point into an array. I can see the array growing successfully, but I couldn't read the x, y values back correctly to draw lines (at first I just tried to get a single line) on top of the video.
// A stored contact point between a figure fingertip and the mic
class Point {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }

  show() {
    point(this.x, this.y);
  }
}

// inside draw(): blue rect when the two fingers collide
if (floor(fD / 2) < 20) {
  fill(0, 0, 255);
  rect(floor(Tposx + fD / 2), floor(Tposy - fD / 2), 20);
  let a = new Point(floor(Tposx + fD / 2), floor(Tposy - fD / 2));
  wave.freq(notess[0]);
  env.play();
  speak.push(a); // the trail array keeps growing
}

// draw every stored point — speak[i] is a Point object,
// so read .x and .y, not [0] and [1]
for (let i = 0; i < speak.length; i++) {
  let keypoint = speak[i];
  keypoint.show();
  console.log(keypoint.x);
}
Another attempt that didn't work: