Live Web Wk 11-13: Final

Final: https://el3015.itp.io:3128/play.html

Final code: here

About:

Sound of Color is a web and mobile app that visualizes the relationship between color and sound frequency.

Process:

Final project proposal

Process

Steps for this week:

a.) fix the line animation on the “draw” page / finally get that to work properly
b.) style the page: resize the pixelate canvas to be full width
c.) animate the playhead
d.) fix the mouse move sizing issue
e.) get the iPad camera to face out, not toward the user
f.) Billy to work on converting the sound frequency I have into MIDI notes
g.) fix the playback speed getting faster every time you play and pause in “playback” mode

a.) fix the line animation on the “draw” page/finally get that to work properly

It now animates! Many thanks to Shawn’s logic advice! The drawing needed to go in the main draw loop, and the selected sine lines needed to be pushed into an array that the loop redraws each frame.
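
A rough sketch of that structure (placeholder names, not the actual project code):

let selectedLines = []; // every sine line picked so far
let phase = 0;

function drawLoop() {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    // redraw every selected sine line each frame so they all keep animating
    for (let line of selectedLines) {
        drawSineLine(line, phase); // hypothetical helper that draws one sine wave
    }
    phase += 0.05;
    requestAnimationFrame(drawLoop);
}
requestAnimationFrame(drawLoop);

// when a line comes in over the socket, push it into the array
// (assumes the server re-broadcasts the same event name the client emits)
socket.on('sendRedData', (p) => {
    selectedLines.push({ color: 'red', x: p.x, y: p.y });
});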

But other issues emerged. For example, it draws the full history of the lines selected. Meh! This took way too long to figure out, but perhaps this is the solution: https://stackoverflow.com/questions/23092624/socket-io-removing-specific-listener

I tried a bunch of things with “socket.off” and “socket.removeEventListener”, but it still wasn’t working. Need to ask Shawn/Mimi.


Great, I asked and it turned out to be a simple solve! canvas.removeEventListener was the right code to use. It turns out I was only removing the line that draws the sine wave, when I needed to remove the whole handler function. Below is the winning script that works (this seriously took way too long to get)!

    redSelect.addEventListener('click', () => {

        evl = drawRedSine;

        // named handler so it can be removed after a single click
        let removeRed = (e) => {

            console.log('evl for red: ' + evl);
            console.log(e);

            let p = {
                x: e.clientX,
                y: e.clientY
            };

            socket.emit('sendRedData', p);
            console.log("EMITTED RED");

            // remove the whole handler, not just the line that draws the sine wave
            canvas.removeEventListener('click', removeRed);
        };

        canvas.addEventListener('click', removeRed);
    });

b.) Resizing context + canvas to be full width (see d for answer)

Question: when I resize the canvas to be full width, the mouse pick still samples from the original context size. It is not picking the right pixel; the context didn’t seem to scale with the canvas.

c.) Animating the shape in playback mode

How do you return a value from setInterval()? I was having issues with the rectangle drawing across the screen: when it is drawn inside setInterval(), it disappears too quickly because it is only redrawn every 500 milliseconds.

Never mind, fixed! All I had to do was draw the shape in the main loop function.
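
In sketch form, the fix is just about where the drawing call lives (playheadX and loop() are placeholder names, not my exact code):

let playheadX = 0;

// only update the playhead position on the timer...
setInterval(() => {
    playheadX += 10;
}, 500);

// ...but draw it inside the main animation loop so it persists every frame
function loop() {
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    // ...redraw the pixelated image here...
    ctx.fillRect(playheadX, 0, 4, canvas.height);
    requestAnimationFrame(loop);
}
requestAnimationFrame(loop);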

d.) The mouse move was not picking from the canvas

The issue was that the canvas width was set to 100vh in CSS, so the actual video being drawn into the canvas was not the full window width. I figured out that you have to set the canvas width and height in the JavaScript.

The key code:

    ctx.canvas.width = window.innerWidth;
    ctx.canvas.height = window.innerHeight;

    // set the video to full width as well
    video.width = window.innerWidth;
    video.height = window.innerHeight;

e.) To get the camera to face out (away from the user), I had to set facingMode: "environment" in the constraints

  let constraints = {
    audio: false,
    video: {
      facingMode: "environment"
    }
  };

f.) Converting frequencies to notes. Thank you Billy for converting the frequencies into actual MIDI notes! Woo!
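
I’m not posting Billy’s code here, but for reference, the standard conversion from a pitch frequency in Hz to a MIDI note number looks roughly like this:

// Standard frequency (Hz) -> MIDI note number, with A4 = 440 Hz = note 69
function freqToMidi(freq) {
    return Math.round(69 + 12 * Math.log2(freq / 440));
}

console.log(freqToMidi(440));   // 69 (A4)
console.log(freqToMidi(261.6)); // 60 (middle C)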

g.) Fixing the speed increase! This took so long, but thank you Craig for helping troubleshoot this one! The issue was that enableFakePick() already attaches an event listener, so every time I added another listener in the drop-down selection code, the handlers kept stacking up. The fix was simply to get rid of the extra event listener in the drop-down part. Below is the winning code!

if (i === 1) {

    let playButtonDiv = document.getElementById('bottom-button-play-div');
    let pauseButtonDiv = document.getElementById('bottom-button-pause-div');

    playButtonDiv.style.display = 'inline-block';

    evl = i;
    console.log("evl: " + evl);

    if (evl != null) {
        let playButtonDiv = document.getElementById('bottom-button-play-div');
        playButtonDiv.style.pointerEvents = 'auto';
        console.log('playback');
        enableFakePick();
        // playButtonDiv.addEventListener('click', enableFakePick); // enable play button
    }

    // remove mouse pick function
    document.removeEventListener('mousemove', enableMouseMove);

} else if (i === 2) {

    // document.removeEventListener('click', enableFakePick);
    evl = i;

    if (evl != null) {
        // show color with pick function
        console.log('mouse move');
        document.addEventListener('mousemove', enableMouseMove);
    }

    let playButtonDiv = document.getElementById('bottom-button-play-div');
    playButtonDiv.style.pointerEvents = 'none'; // disable play button
}

Helpful Resources:

Credits!

Biggest thank you to Professor Shawn for constantly solving the technical issues I ran into. I so appreciate all the help and the steering of this project in a better direction. I learned so much plain JavaScript from doing this project and from being in Live Web – very grateful I took this course.

Shout out to sound expert Billy Bennett for converting the frequencies into MIDI notes! It sounds infinitely better!!

Also, thank you so much for the last-minute troubleshooting help, Dan Oved and Professor Mimi.

Data Art Wk 12-14: Data Critique

Final site: here

Final code: here

Eva and I wanted to continue thinking through the topic of garbage. We had started this project thinking we would continue building out our original designs from the previous Data & Publics assignment. But after hearing Genevieve’s lecture and seeing the examples of how people use their projects to critique an aspect of data culture, we decided to rethink our next steps.

From the lecture, I especially liked Giorgia Lupi’s notion of ‘data is people.’ At the same time, I’ve also been thinking about Dr. Robin Nagle’s book ‘Picking Up’ and her experience working alongside the sanitation workers of NYC. From these two sources, we decided to use this opportunity to move beyond a general data visualization. Instead, we hoped to focus on the people: who are the sanitation workers, and what are their experiences?


At one point we wanted to interview sanitation workers, but after talking to Robin, we realized how tricky that could get. After doing some more research, we found that sanitation workers have shorter lifespans and more injuries due to the back-breaking and dangerous nature of the work. We then got data from the New York City Department of Sanitation (DSNY) that shows the different types of injuries by year and borough.

We were also interested in using 311 complaint data from NYC’s Open Data API to show the complaints made about sanitation. By placing the two perspectives next to each other – from those who complain to those who clean up – we hope to show a larger picture of sanitation in the city. We hope that this piece serves as a reminder of the labor that often goes unnoticed but is crucial to making our daily lives run.

a.) Finding our datasets. Using these two datasets – the 311 complaints (the most recent ones) and the DSNY sanitation worker injuries (for the five boroughs, Jan–Nov 2019) – we brainstormed possible forms this could take. We split up the tasks: Eva worked on the 311 complaint data using text analysis, and I worked on the sanitation worker injury data.
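
For reference, pulling the 311 complaints from NYC Open Data looks roughly like this (the resource id and the agency filter below are assumptions for illustration, not our exact query):

// Rough sketch: fetch recent 311 complaints from NYC Open Data (Socrata API).
// The resource id and the agency filter are assumptions for illustration only.
const url = 'https://data.cityofnewyork.us/resource/erm2-nwe9.json' +
    '?$limit=1000&agency=DSNY';

fetch(url)
    .then((res) => res.json())
    .then((rows) => {
        // each row has fields like complaint_type, descriptor, borough
        console.log(rows.length + ' complaints loaded');
    })
    .catch((err) => console.error(err));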

b.) Designing it. For designing the injury side, we wanted to capture the physicality and humanness of injuries. So, the boroughs would have textural marks that each indicate a different type of injury. We were inspired by the textural quality of this website: https://canners.nyc/

First was designing rough layouts of the pages.

Next was illustrating the different types of injuries as marks and placing the marks within the shape of each borough.


The last part of the design process was combining the layout with the illustrations.


c.) Coding it. The idea is that when you mouse over a textural mark, the number of that type of injury shows. This was a lot of div work! It required a separate div for every single injury type in every borough. A lot of the code was about showing and hiding divs when you selected a specific borough.
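
The core of that show/hide pattern looked something like this (a simplified sketch with made-up ids and class names):

// Simplified sketch of the show/hide pattern; '.borough' and the ids are made up.
function showBorough(boroughId) {
    // hide every borough's injury divs first
    document.querySelectorAll('.borough').forEach((el) => {
        el.style.display = 'none';
    });
    // then show only the selected borough
    document.getElementById(boroughId).style.display = 'block';
}

// e.g. called when a borough is selected from the menu
// showBorough('brooklyn');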

d.) Final touches. Next was designing and coding all the other parts of the website – an intro page and a menu.


Eva’s steps with the 311 complaints and more can be read here.

Thank you!

Biggest thank you to Eva for always being stellar to work with! Also, big thank you to Rashida for brainstorming with us and giving us the best ideas for this! Excited to continue collaborating! As always, thank you Genevieve for inspiring us to think more critically about how data is collected, visualized and used. The readings and conversations have helped us be more self-aware when working in this space.

Understanding Networks Wk 10-13: Co-Draw

Xiaotong and I worked together on this project! The goal was to create a collaborative live drawing app using ITP’s Axi-Draw Machine and RESTful APIs.

The functions we aimed for: 

  • Co-Draw allows people to contribute to a drawing using a web interface
  • Control the Axidraw’s position based on the user’s mouse x and y position on the website.
  • Use socket.io to allow many people to draw at the same time.

When coding this project, we came across many challenges and we ended up with 2 versions:

  • version 1 (backup): tweaked the AxiDraw example code and added sockets to it
  • version 2 (final): uses our own RESTful API that sends commands from client → server → serially to the machine

Final code without sockets



Final Version


This version uses our own RESTful API, which was built with a lot of help from Professor Tom Igoe. We were able to get as far as sending commands from the client side → server side → (serial communication) → AxiDraw machine.

Systems Diagram




Server side: 

a.) GET and POST endpoints

// here are all your endpoints. The pattern is:
// GET the current value, or
// POST the new value as a request param:

server.get('/mouse', getMouseState);

server.get('/command/:command', runRemoteCommand);

server.post('/mouse/:mouse_x/:mouse_y', moveMouse);

server.post('/mouse/:mouse_y', handlePostRequest);

server.get('/state', handleGetRequest);

server.post('/state/:state', handlePostRequest);

b.) moveMouse() function that takes the request from the client and builds a command string to send over serial

function moveMouse(request, response) {

    // request is /mouse/:mouse_x/:mouse_y
    // get the position from the request
    // example command: SM,1000,-250,766\r
    let result = 'SM,' + '1000,' + request.params.mouse_x + "," + request.params.mouse_y;

    // send it to the serial port as a command
    sendSerialData(result);

    // wait for confirmation from the serial port
    // send a response back to the user
    response.send(result); // send the command string back

}

c.) Send the command from moveMouse() to the serial port

function sendSerialData(command) {

   console.log(command+'\r');

   myPort.write(command+'\r');

   console.log("Sending something out the serial port");

}

Client Side

a.) setMouse() sends a POST request to the server using p5.js’s httpDo() function

function setMouse(x, y) {

    var path = '/mouse/' + x + '/' + y; // assemble the full URL

    var content = '';

    console.log('path: ' + path);

    //httpDo( path, 'GET', content, 'text', getResponse); //HTTP PUT the change

    httpDo(path, 'POST', content, 'text', responseHandler); // HTTP POST the change

}

b.) setMouse() is called from the buttons in the index page

<!-- top left -->
<button onclick="setMouse('-550', '0'); moveTopLeft()">top left</button>

<!-- top -->
<button onclick="setMouse('-1000', '550'); moveTop()">top</button>

<!-- top right -->
<button onclick="setMouse('0', '550'); moveTopRight()">top right</button>

<!-- bottom left -->
<button onclick="setMouse('0', '-550'); moveButtomLeft()">bottom left</button>

<!-- bottom -->
<button onclick="setMouse('550', '-550'); moveButtom()">bottom</button>

<!-- bottom right -->
<button onclick="setMouse('1000', '500'); moveButtomRight()">bottom right</button>

Socket Code

a.) sendmouse() function for emitting data from the client to the server
// Function for sending to the socket
function sendmouse(xpos, ypos) {

    // We are sending!
    console.log("sendmouse: " + xpos + " " + ypos);

    // Make a little object with x and y
    var data = {
        x: xpos,
        y: ypos
    };

    // Send that object to the socket
    socket.emit('mouse', data);
}

b.) Socket code on the server side that listens for data, then re-emits it to the other clients

// When this user emits, client side: socket.emit('otherevent', somedata);
socket.on('mouse',
    function (data) {
        // Data comes in as whatever was sent, including objects
        console.log("Received: 'mouse' " + data.x + " " + data.y);

        // Send it to all other clients
        socket.broadcast.emit('mouse', data);

        // This is a way to send to everyone including the sender
        // io.sockets.emit('message', "this goes to everyone");
    }
);

 

Version 1 (backup: sockets added to the AxiDraw example code)

For this version, we used the cncserver code as our example: https://github.com/techninja/cncserver

Run cncserver.js

install Node.js
install npm
node cncserver.js

 


 

References we used: 

 


 

Process:

a.) Our backup plan of using the AxiDraw example code and just adding sockets to it (for when our own code wasn’t working).

 

b.) Practicing sockets to disconnect a user after 30 seconds. 
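
A minimal sketch of that 30-second timeout on the server side (socket.io), not our exact code:

// Give each connected user 30 seconds of control, then force-disconnect them.
io.on('connection', (socket) => {
    console.log('user connected: ' + socket.id);

    const timer = setTimeout(() => {
        socket.emit('timeUp');   // 'timeUp' is a made-up event name for the client to handle
        socket.disconnect(true); // close the connection
    }, 30000);

    socket.on('disconnect', () => {
        clearTimeout(timer);
        console.log('user disconnected: ' + socket.id);
    });
});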

 

c.) Initial start: simply communicating with the AxiDraw using serialport

 



So Many Thanks:

We bugged way too many people and professors for this project. I’m so grateful to everyone who took the time to help and listen. The biggest thank you to Professor Tom Igoe for being patient and spending so much time to help us understand how to write RESTful APIs and how to communicate with the AxiDraw using serial.

Thank you so much to Professor Shawn for providing guidance and troubleshooting help. Thank you Professor Mimi for helping us last minute with the socket pairing.

Thank you Dana and Noah for asking critical questions and giving good feedback during the start/ideation of this project. Thank you Jackie and Andrew for guiding us through the logic.

Live Web Wk 10: Finals Progress (v1)

Steps for Week v1 (11/14/19 – 11/21/19) 

a.) Pixelate live stream + get RGB values of each pixel

First, I pixelated the live stream. This was the reference code for pixelating the live stream: pixelate effect (using this one!). Then I tried to find the RGB value of each pixel. I used the MDN example for that: getting pixel data from a context.
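
Put together, the basic idea of those two steps looks roughly like this (a sketch, not my exact code; pixelSize is a placeholder and the canvas is assumed to fill the window):

const pixelSize = 16; // placeholder block size

function drawPixelated() {
    // draw the video tiny, then scale it back up with smoothing off for blocky pixels
    ctx.imageSmoothingEnabled = false;
    const w = canvas.width / pixelSize;
    const h = canvas.height / pixelSize;
    ctx.drawImage(video, 0, 0, w, h);
    ctx.drawImage(canvas, 0, 0, w, h, 0, 0, canvas.width, canvas.height);
    requestAnimationFrame(drawPixelated);
}
requestAnimationFrame(drawPixelated);

// read the RGB value under the mouse (assumes the canvas fills the window)
canvas.addEventListener('mousemove', (e) => {
    const data = ctx.getImageData(e.clientX, e.clientY, 1, 1).data;
    console.log(data[0], data[1], data[2]); // r, g, b (data[3] is alpha)
});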


—–

b.) Converting RGB to Frequency

  • RGB to HSL 
    • After pixelating the live stream, I needed to convert RGB → HSL (hue, saturation, lightness), then hue → wavelength (nm), then wavelength (nm) → frequency (THz). This is to ensure that the conversion from color to sound is scientifically accurate.
    • First things first: I converted the RGB to HSL in order to access the hue value, which I then needed to convert to a wavelength (next step). A sketch of the hue extraction is below. Helpful resource for converting RGB to HSL: here.
    • Here’s my test code for pixelation with the RGB and HSL values printed in the innerHTML: here
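
Here is the hue part of that conversion in sketch form (standard RGB → HSL math, not my exact code):

// Get the hue (0-360 degrees) from r, g, b values in the 0-255 range.
function rgbToHue(r, g, b) {
    r /= 255; g /= 255; b /= 255;
    const max = Math.max(r, g, b);
    const min = Math.min(r, g, b);
    if (max === min) return 0; // grey: no hue
    const d = max - min;
    let h;
    if (max === r) h = ((g - b) / d) % 6;
    else if (max === g) h = (b - r) / d + 2;
    else h = (r - g) / d + 4;
    return (h * 60 + 360) % 360;
}

console.log(rgbToHue(0, 0, 255)); // 240 (blue)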


—–

c.) Convert Hue to Wavelength (nm)

Helpful visualizer for converting hue to wavelength: here.
stackoverflow – convert hue to wavelength
convert hue to wavelength

    //convert hue to wavelength
    // Estimating that the usable part of the visible spectrum is 450-620nm, 
    // with wavelength (in nm) and hue value (in degrees), you can improvise this:

    let wavelength;
    wavelength = Math.ceil(620 - 170 / 270 * h);

I checked it using this site that converts color to wavelength (nm). For example, I tested to make sure the blue is around 400 nm.

—–

d.) Convert Wavelength (nm) to Frequency (THz)

Helpful site for understanding and converting wavelength to frequency: here

Based on this equation:
Wavelength (λ) = Wave Velocity (v) / Frequency (f), which rearranges to Frequency = Wave Velocity / Wavelength

The code for converting from wavelength (nm) to frequency (THz) is this (3 × 10^5 is the speed of light expressed in nm·THz):

frequency = 3 * (Math.pow(10, 5)) / wl;

The above code also works for converting! I checked it just by making sure this blue is around 600 – 668 THz.


—–

e.) Using frequency (THz) to make sound

I was able to reuse the functions I wrote for midterms to get a sound that maps the color frequency (400–789 THz) to a pitch frequency range (20–3000 Hz).
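
That mapping is just a linear remap between the two ranges, roughly:

// Linearly map a color frequency (THz) to an audible pitch (Hz),
// using the 400-789 THz and 20-3000 Hz ranges mentioned above.
function colorFreqToPitch(thz) {
    const t = (thz - 400) / (789 - 400); // normalize to 0..1
    return 20 + t * (3000 - 20);         // scale into the audible range
}

console.log(colorFreqToPitch(400)); // 20 Hz
console.log(colorFreqToPitch(789)); // 3000 Hz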

Previously, the sketch was giving me an interesting, feedback-y type of sound. I wasn’t sure what it was, but Shawn added in a snippet of code that helped make each pixel have a distinct sound.

Next step is to figure out how to get the sound to only play once when on mouse over.

—–

f.) Design for Play Modes

Example sketch layout for introduction to the Play app. This is where people can select which mode they want to try out.


Example layout for the 2 different modes. They don’t differ that much in visuals, just in how the user can interact. Live mode has a play head so it’s more like a soundtrack. Interact mode allows you to mouse over each color pixel.

 

Joy + Games Wk 10: Fruit Bowling

(video doesn’t have the audio)

For this week, I made a new game that simulates bowling with fruit! I came up with this idea after watching a very helpful bowling-in-Unity tutorial. Right now, the instructions are: 1.) space bar to start bowling / move the watermelon forward, 2.) right arrow to move right, 3.) left arrow to move left, 4.) R to refresh.

There are many features that I still wanted to include – an intro page, a scoring system, changing which fruit to bowl with. I am also interested in putting this on itch.io! Will aim to do that this week!

References: 

Live Web Wk 10: Final Project Proposal

Sound + Color + Environment! 

Objective: When people take a photo of their surroundings using the app, the photo will turn into colored pixels. Each of these “pixels” will have a sound that matches the color. The goal is to create a tool that helps one experience the world in pure sounds and colors. I also intend for this to be an extension of my previous “Sound + Color” project. By having the more educational “color frequency” page from that project,  it will be easier to explain how I chose the sounds of the colors for this “Environment Page”.

191113_liveweb_final_sketch_1.jpg

Functions + Features:

  • Environment Page (mobile):
    • when user takes a photo –> photo pixelates into color blocks
    • user can click over one color block at a time to hear that sound
    • or user can click on the play button (at the bottom of the page) to hear the full soundtrack of all the colors
    • a sound wave that matches with the frequency of the color + sound
  • Environment Page (desktop):
    • same as above but only difference is that the photo captured will be from the computer’s camera
  • Environment Page will need to be both for mobile and web
  • Recreate the previous web version (with the Intro, Learn, Draw pages) so that it is mobile friendly
  • Add an about page that explains the science a little more and has some of my research references

Here is an example of the pixelation effect that I am hoping to achieve: