Understanding Networks Wk 10-13: Co-Draw

Xiaotong and I worked together on this project! The goal was to create a collaborative live drawing app using ITP’s Axi-Draw Machine and RESTful APIs.

The functions we aimed for: 

  • Co-Draw allows people to contribute to a drawing through a web interface.
  • The AxiDraw’s position is controlled by the user’s mouse x and y position on the website.
  • socket.io allows many people to draw at the same time.

When coding this project, we came across many challenges and ended up with 2 versions:

  • version 1: tweaked the AxiDraw example code and added sockets to it (as a backup)
  • version 2 (final): uses our own RESTful API that sends commands from client → server → serially to the machine

Final code without sockets

Final Version


This version uses our own RESTful API, built with a lot of help from Professor Tom Igoe. We were able to get as far as sending commands from the client side –> server side –> (serial communication) –> AxiDraw machine.

Systems Diagram


Server side: 

a.) GET and POST endpoints

    // Here are all the endpoints. The pattern is:
    // GET the current value, or
    // POST the new value as request params:
    server.get('/mouse', getMouseState);
    server.get('/command/:command', runRemoteCommand);
    server.post('/mouse/:mouse_x/:mouse_y', moveMouse);
    server.post('/mouse/:mouse_y', handlePostRequest);
    server.get('/state', handleGetRequest);
    server.post('/state/:state', handlePostRequest);
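The `:param` segments in these routes are how Express hands path values to the handlers as `request.params`. As a rough illustration of that pattern — a plain-JS stand-in, not the actual Express internals — matching a posted path against a route template looks like this:

```javascript
// Minimal stand-in for Express-style ':param' routes, showing how
// a path like POST /mouse/300/400 becomes request.params.
// (Illustration only -- the real app uses Express.)
function matchRoute(pattern, path) {
  const patParts = pattern.split('/');
  const pathParts = path.split('/');
  if (patParts.length !== pathParts.length) return null;
  const params = {};
  for (let i = 0; i < patParts.length; i++) {
    if (patParts[i].startsWith(':')) {
      // a ':name' segment captures the corresponding path segment
      params[patParts[i].slice(1)] = pathParts[i];
    } else if (patParts[i] !== pathParts[i]) {
      return null; // literal segments must match exactly
    }
  }
  return params;
}

console.log(matchRoute('/mouse/:mouse_x/:mouse_y', '/mouse/300/400'));
// → { mouse_x: '300', mouse_y: '400' }
```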

b.) moveMouse() gets the position from the client’s request and formats it as a command string for the serial port

    function moveMouse(request, response) {
        // the route is /mouse/:mouse_x/:mouse_y
        // get the position from the request params
        let result = 'SM,1000,' + request.params.mouse_x + ',' + request.params.mouse_y;
        // send it to the serial port as a command
        sendSerialData(result);
        // wait for confirmation from the serial port, then
        // send a response back to the user
        response.send(result); // send the command string
    }
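The string that moveMouse() assembles follows the AxiDraw’s EBB (EiBotBoard) `SM` (stepper move) syntax: `SM,duration,axis1,axis2`, with the duration in milliseconds. Pulling that into a small helper makes it easy to check in isolation (`buildMoveCommand` is a hypothetical name, not part of our code):

```javascript
// Build an EBB-style stepper-move command string, as moveMouse() does.
// 'SM,duration,axis1,axis2' -- duration in ms, axis values in steps.
// buildMoveCommand is an illustrative helper, not from the original code.
function buildMoveCommand(mouseX, mouseY, duration = 1000) {
  return 'SM,' + duration + ',' + mouseX + ',' + mouseY;
}

console.log(buildMoveCommand(300, 400)); // → SM,1000,300,400
```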


c.) sendSerialData() sends the command from moveMouse() to the serial port

    function sendSerialData(command) {
        console.log("Sending something out the serial port");
        // myPort is assumed to be a serialport instance opened elsewhere
        myPort.write(command + '\r');
    }

Client Side

a.) setMouse() sends a POST request to the server using the httpDo() function of p5.js

    function setMouse(x, y) {
        var path = '/mouse/' + x + '/' + y; // assemble the full URL
        var content = '';
        console.log('path: ' + path);
        //httpDo(path, 'GET', content, 'text', getResponse);
        httpDo(path, 'POST', content, 'text', responseHandler); // HTTP POST the change
    }

b.) setMouse() is called from the buttons in index.html

    <!--top left-->
    <button onclick="setMouse('-550', '0'); moveTopLeft()">top left</button>

    <!--top-->
    <button onclick="setMouse('-1000', '550'); moveTop()">top</button>

    <!--top right-->
    <button onclick="setMouse('0', '550'); moveTopRight()">top right</button>

    <!--bottom left-->
    <button onclick="setMouse('0', '-550'); moveButtomLeft()">bottom left</button>

    <!--bottom-->
    <button onclick="setMouse('550', '-550'); moveButtom()">bottom</button>

    <!--bottom right-->
    <button onclick="setMouse('1000', '500'); moveButtomRight()">bottom right</button>

Socket Code

a.) sendmouse() function for emitting data from the client to the server

    // Function for sending to the socket
    function sendmouse(xpos, ypos) {
        // We are sending!
        console.log('sendmouse: ' + xpos + ' ' + ypos);
        // Make a little object with x and y
        var data = {
            x: xpos,
            y: ypos
        };
        // Send that object to the socket
        socket.emit('mouse', data);
    }


b.) Socket code on the server side: listen for data, then broadcast it

    // When this user emits, client side: socket.emit('mouse', data);
    socket.on('mouse', function (data) {
        // Data comes in as whatever was sent, including objects
        console.log("Received: 'mouse' " + data.x + ' ' + data.y);
        // Send it to all other clients
        socket.broadcast.emit('mouse', data);
        // This is a way to send to everyone including the sender:
        // io.sockets.emit('message', 'this goes to everyone');
    });




Version 1 (sockets added to AxiDraw example code)

For this version, we used the cncserver code as our example: https://github.com/techninja/cncserver

Run cncserver.js:

    install Node.js
    install npm
    node cncserver.js



References we used: 




a.) Our backup plan of using the AxiDraw example code and just adding sockets to it (for when our own code wasn’t working).

b.) Practicing sockets to disconnect a user after 30 seconds.

c.) Initial start: simply communicating with the AxiDraw using serialport.


So Many Thanks:

We bugged way too many people and professors for this project. I’m so grateful to everyone who took the time to help and listen. The biggest thank you to Professor Tom Igoe for being patient and spending so much time to help us understand how to write RESTful APIs and how to communicate with the AxiDraw using serial.

Thank you so much to Professor Shawn for providing guidance and troubleshooting help. Thank you Professor Mimi for helping us last minute with the socket pairing.

Thank you Dana and Noah for asking critical questions and giving good feedback during the start/ideation of this project. Thank you Jackie and Andrew for guiding us through the logic.

Live Web Wk 10: Finals Progress (v1)

Steps for Week v1 (11/14/19 – 11/21/19) 

a.) Pixelate live stream + get RGB values of each pixel

First, I pixelated the live stream; this was the reference code: pixelate effect (using this one!). Then I found the RGB value of each pixel, using the MDN example: getting pixel data from context.
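Conceptually, the pixelation step averages the RGBA values inside each block of the frame. That per-block math can be sketched independently of the canvas, assuming a flat `Uint8ClampedArray` in the layout `getImageData()` returns (the helper name is mine, not from the reference code):

```javascript
// Average the RGB values of one block of a flat RGBA pixel array
// (the layout getImageData().data uses: 4 bytes per pixel, row-major).
// averageBlock is an illustrative helper, not from the reference code.
function averageBlock(data, imgWidth, x0, y0, blockSize) {
  let r = 0, g = 0, b = 0, count = 0;
  for (let y = y0; y < y0 + blockSize; y++) {
    for (let x = x0; x < x0 + blockSize; x++) {
      const i = (y * imgWidth + x) * 4; // index of this pixel's red byte
      r += data[i];
      g += data[i + 1];
      b += data[i + 2];
      count++;
    }
  }
  return [Math.round(r / count), Math.round(g / count), Math.round(b / count)];
}

// 2x2 image: two red pixels on top, two blue pixels below
const px = new Uint8ClampedArray([
  255, 0, 0, 255,  255, 0, 0, 255,
  0, 0, 255, 255,  0, 0, 255, 255,
]);
console.log(averageBlock(px, 2, 0, 0, 2)); // → [ 128, 0, 128 ]
```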



b.) Converting RGB to Frequency

  • RGB to HSL 
    • After pixelating the live stream, I needed to convert RGB –> HSL (hue | saturation | lightness), then hue –> wavelength (nm), then wavelength (nm) –> frequency (THz). This is to ensure that the conversion from color to sound is scientifically accurate.
    • First things first: I converted the RGB to HSL in order to access the number for hue. Helpful resource for converting RGB to HSL: here.
    • Here’s my test code for pixelation with the RGB and HSL values printed in the innerHTML: here
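For this pipeline only the hue part of the RGB –> HSL conversion is needed, and it can be computed with the standard formula. A sketch (not the exact code from the linked resource):

```javascript
// Convert RGB (0-255) to hue in degrees, using the standard RGB->HSL
// hue formula. A sketch of the conversion step, not the exact code
// from the linked resource.
function rgbToHue(r, g, b) {
  r /= 255; g /= 255; b /= 255;
  const max = Math.max(r, g, b);
  const min = Math.min(r, g, b);
  const d = max - min;
  if (d === 0) return 0; // achromatic (gray): hue is undefined, use 0
  let h;
  if (max === r) h = ((g - b) / d) % 6;
  else if (max === g) h = (b - r) / d + 2;
  else h = (r - g) / d + 4;
  h *= 60; // convert from sextant to degrees
  return h < 0 ? h + 360 : h;
}

console.log(rgbToHue(255, 0, 0)); // → 0 (red)
console.log(rgbToHue(0, 0, 255)); // → 240 (blue)
```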



c.) Convert Hue to Wavelength (nm)

Helpful visualizer for converting hue to wavelength: here.
stackoverflow – convert hue to wavelength
convert hue to wavelength

    //convert hue to wavelength
    // Estimating that the usable part of the visible spectrum is 450-620nm, 
    // with wavelength (in nm) and hue value (in degrees), you can improvise this:

    let wavelength;
    wavelength = Math.ceil(620 - 170 / 270 * h);

I checked using this site that converts color to wavelength (nm). For example, I tested to make sure the blue is around 400.


d.) Convert Wavelength (nm) to Frequency (THz)

Helpful site for understanding and converting wavelength to frequency: here

Based on this equation:
Wavelength (Lambda) = Wave Velocity (v) / Frequency (f)

The code for converting from wavelength (nm) to frequency (THz) is this:

frequency = 3 * (Math.pow(10, 5)) / wl;

The above code also works for converting! I checked it just by making sure this blue is around 600 – 668 THz.
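Putting the two conversions together — hue –> wavelength (with the same 450–620 nm estimate from part c) –> frequency via f = c / λ, where c = 3 × 10⁸ m/s = 3 × 10⁵ nm·THz — the blue check works out numerically:

```javascript
// Hue (degrees) -> approximate wavelength (nm), using the same
// 450-620 nm estimate as the snippet above, then wavelength ->
// frequency (THz) via f = c / lambda, with c = 3e5 nm*THz.
function hueToWavelength(h) {
  return Math.ceil(620 - 170 / 270 * h);
}

function wavelengthToTHz(wl) {
  return 3 * Math.pow(10, 5) / wl;
}

const blueHue = 240;
const wl = hueToWavelength(blueHue); // 469 nm
const f = wavelengthToTHz(wl);       // ~640 THz, inside the 600-668 range
console.log(wl, f);
```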



e.) Using frequency (THz) to make sound

I was able to take the functions I wrote from midterms to get a sound that mapped the color frequency (400-789 THz) to a pitch frequency range (20-3000 Hz).
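That color-frequency-to-pitch mapping is a linear remap, like p5.js’s map(). A sketch with the ranges mentioned above (the function name is mine, not from the midterm code):

```javascript
// Linearly remap a color frequency (400-789 THz, visible light)
// to an audible pitch (20-3000 Hz), like p5.js's map() function.
// thzToHz is an illustrative helper, not from the midterm code.
function thzToHz(thz) {
  const inMin = 400, inMax = 789;   // visible-light range, THz
  const outMin = 20, outMax = 3000; // pitch range, Hz
  return outMin + (thz - inMin) * (outMax - outMin) / (inMax - inMin);
}

console.log(thzToHz(400)); // → 20 (lowest color frequency, lowest pitch)
console.log(thzToHz(789)); // → 3000 (highest color frequency, highest pitch)
```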

Previously, the sketch was giving me an interesting feedback-y type of sound. I wasn’t sure what it was, but Shawn added in this snippet of code, which helped make each pixel have a distinct sound.

Next step is to figure out how to get the sound to only play once when on mouse over.


f.) Design for Play Modes

Example sketch layout for introduction to the Play app. This is where people can select which mode they want to try out.


Example layout for the 2 different modes. They don’t differ much visually, just in how the user can interact. Live mode has a playhead, so it’s more like a soundtrack; Interact mode lets you mouse over each color pixel.


Joy + Games Wk 10: Fruit Bowling

(video doesn’t have the audio)

For this week, I made a new game that simulates bowling with fruits! I came up with this idea after watching a very helpful bowling in unity tutorial. Right now, the instructions are: 1.) space bar to start bowling/move watermelon forward, 2.) right arrow to move right, 3.) left arrow for left, 4.) R to refresh.

There are many features that I still wanted to include – an intro page, a scoring system, changing which fruit to bowl with. I am also interested in putting this on itch.io! Will aim to do that this week!


Live Web Wk 10: Final Project Proposal

Sound + Color + Environment! 

Objective: When people take a photo of their surroundings using the app, the photo will turn into colored pixels. Each of these “pixels” will have a sound that matches the color. The goal is to create a tool that helps one experience the world in pure sounds and colors. I also intend for this to be an extension of my previous “Sound + Color” project. By having the more educational “color frequency” page from that project,  it will be easier to explain how I chose the sounds of the colors for this “Environment Page”.


Functions + Features:

  • Environment Page (mobile):
    • when user takes a photo –> photo pixelates into color blocks
    • user can click over one color block at a time to hear that sound
    • or user can click on the play button (at the bottom of the page) to hear the full soundtrack of all the colors
    • a sound wave that matches with the frequency of the color + sound
  • Environment Page (desktop):
    • same as above but only difference is that the photo captured will be from the computer’s camera
  • Environment Page will need to be both for mobile and web
  • Recreate the previous web version (with the Intro, Learn, and Draw pages) so that it is mobile friendly
  • Add an about page that explains the science a little more and has some of my research references

Here is an example of the pixelation effect that I am hoping to achieve:

Data Art Wk 8-10: Data & Publics

Website so far (map only works locally)
Code here

Eva and I worked together to better understand where our garbage goes. We were interested in finding out where our different types of trash end up – does it stay in New York, end up in a different state, or even get sent out of the country? First things first, we researched how trash is handled in New York. I wish I had more time to dig deeper into this topic, as there is a lot to dig into, but we managed to gather some facts at least.

a.) Research! Here are some notes captured through the research process: 

Notes on Guardian Article:
  • NYC generated over 3 million tons of household waste in 2015
  • types of garbage
    • mixed solid waste –> curb –> waste transfer station –> landfill or waste-to-energy plant
    • paper recyclables –> curb –> handling and recovery facility –> domestic or international paper mills
    • metals, glass and plastic –> handling and recovery facility –> domestic/international recyclers
    • * compost is not listed in the diagram below but is an important type to consider.

Where New York City Garbage Goes

  • NYC relies on a complex waste-management ecosystem encompassing 2 city agencies, 3 modes of transport (trucks, trains, and barges), 248 private waste-hauling companies, and temporary and permanent facilities
  • History of NYC waste management:
    • for most of its history, until the mid-1900s, the city’s primary method for disposing of waste was to dump it into the ocean
    • at one point, 80% of the garbage went into the sea
    • the City used some of its garbage (ash, rubble, debris) to create artificial land –> increased its own size. Much of the city’s land today, including some of its priciest neighborhoods, is built on garbage. For example, a 1660s map shows how much of the city is made of rubble + debris. *Is this not amazing?!

A map of 1660s Manhattan overlaid on modern New York shows how much of the city’s land is manmade.

  • 2 waste systems: 1 public, 1 private
    • 3/4 of NY’s garbage is generated by commercial businesses; most of it is rubble + debris from construction projects
    • the garbage-hauling industry has ties to organized crime
  • 12,000 tons of garbage each day
  • 2,230 collection trucks
  • garbage is moved to transfer facilities –> carted off to landfills located in various surrounding states – which are now nearly all at capacity
  • NY spends almost $1 billion per year on trash and recyclables collection

b.) After brainstorming, we realized that we had many functions we wanted this website to have. Some of the functions we thought of and designed up:

  • An intro animation page: that gave you a general overview of some NYC garbage facts
  • A journey section: when you type in your zip code, it will show you the exact journey your trash takes – from curb to transfer station to landfill. Ideally, it will show you an image of what each step looks like.
  • A map section: a way to see how much trash each neighborhood generates. We were also hoping to be able to filter by time, income and type.
  • A take action section: some action items that we can do to be better trash citizens
  • A resource page: we used many datasets and read some articles. It will be good and transparent to have a bibliography page.
  • A QR code + sticker campaign: we thought of having stickers placed on trash bins with QR codes. The QR code would show the route that the bin would take from school/home/work to landfill. We were hoping this would be a simple way to bring the data closer to people.

c.) We next came up with design and sketches! Eva worked on some very thorough UX layouts based on what we had brainstormed.

I made some visual designs based on her great UX layouts.



d.) Next, we worked on coding this to life! Exciting!

e.) Conceptual mockups for our civic approach.
We still need to tweak the facts, make the QR code, and create the page that links the QR code to the journey section of the website.


f.) To be continued…
Since we couldn’t get the data for the journey map within this time frame, we will continue working on this. We also want to get the messaging right for the stickers with a stronger focus on the compost program!

We are excited to keep going!!!


In the meantime we have made this click through invision link to show what we are imagining for this: https://emily511438.invisionapp.com/public/share/UKWU4NCMF

Code References:

Understanding Networks Wk 10: Restful API Drawing Machine (Updated)

Live Drawing Show!


  • User will draw on the website’s draw page. The AxiDraw will update based on the user’s drawing. There will be a camera recording; the recording will be sent to the website’s live stream page so others can see a live stream of the drawing.



  • Co-Draw allows people to contribute to a drawing using a web interface
  • Control the AxiDraw’s position based on the user’s mouse x and y position on the website.
  • Include a camera that video records the Live Drawing show. The live stream will be posted on a separate page on the website.
  • Ideally: use socket.io to allow 2 people to draw at the same time.


1.) Identify Axi Draw Machine

  • Address: /identify_axi
  • Method: ?

2.) Find the current position of the user’s mouse x and mouse y on the website

Address: /pen_position
Method: POST

Request body:

    {
        "mouse_x": 2344,
        "mouse_y": 281,
        "state": 1  // pen state is from 0 to 1 (down/on)
    }

Success response:

    {
        "coordinates": [{
            "type": "200",  // ok
            "mouse_x": "2344",
            "mouse_y": "281",
            "state": "1"
        }]
    }

Error response:

    {
        "type": "400",  // error
        "details": "invalid inputs"
    }

3.) Set to the original position when the user presses the Reset button

Address: /original
Method: POST

Request body:

    {
        "mouse_x": 0,
        "mouse_y": 0,
        "state": 0  // off
    }

Success response:

    {
        "mouse_x": 0,
        "mouse_y": 0,
        "state": 0  // off
    }

Error response:

    {
        "type": "400",
        "details": "invalid inputs"
    }

Video for live stream

4.) Get channel

Address: /channel
Method: GET

Success response:

    {
        "recording": "on",  // turn channel on or off
        "channel_id": {channel_id},  // integer
        "width": "1920px",
        "height": "1080px"
    }

Error response:

    {
        "type": "400",
        "details": "invalid inputs"
    }

5.) Post channel data to client

Address: /channel
Method: POST

Request body:

    {
        "recording": "on",  // turn channel on or off
        "channel_id": {channel_id},  // integer
        "width": "1920px",
        "height": "1080px"
    }


Live Web Wk 9: WebRTC Data Channels

Try it out!
Code here

For this week, I made a very slow type of chat messaging (calling it snail msg for now?). The idea is that you can only send one letter at a time. I guess I am making this as a reminder to slow down and to be ok with taking my time. Sometimes I expect things to arrive immediately- a thesis idea, great project concepts, someone’s email/text response,  skills, knowledge. It’s helpful, especially right now, to remind myself to not rush. This is a super simple chat messaging system (a lot of the code was tweaked from the class example), but I enjoy the fact that it forces me to pay attention to every letter I type.

Notes & Questions:

  • WebRTC has three main JavaScript APIs: 1. MediaStream, 2. RTCPeerConnection, 3. RTCDataChannel.
  • Why do I keep getting “Resource interpreted as Stylesheet but transferred with MIME type text/html” and am not able to link my stylesheet? I have usually been able to do so in the past – what is different this time? I searched throughout Stack Overflow but couldn’t find a solution. In the meantime, I just added the style to the HTML page.
  • I had issues with creating a new div for each piece of data sent over the socket. I realized the issue was that I was creating a new textNode to append to the div, when I should have been appending to the div’s innerHTML instead. That was the solution!
  • Add a class to a newly appended div: “newDiv.className = ‘name of class'”
  • I’m also getting errors when using my own fonts (an OTS parsing error): https://stackoverflow.com/questions/34133808/webpack-ots-parsing-error-loading-fonts


Lines to Remember:

npm install nedb