Live Web Wk 10: Finals Progress (v1)

Steps for Week v1 (11/14/19 – 11/21/19) 

a.) Pixelate live stream + get RGB values of each pixel

First, I pixelated the live stream. This was the reference code I used: pixelate effect (using this one!). Then I found the RGB value of each pixel, following the MDN example: getting pixel data from context. A rough sketch of the combined approach is below.
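Roughly, the two pieces fit together like this (a minimal sketch with my own names and block size, not the reference code itself):

    // Pixelate by drawing the video tiny, then scaling it back up
    // with smoothing off, so each "pixel" becomes a visible block.
    const video = document.querySelector('video');
    const canvas = document.querySelector('canvas');
    const ctx = canvas.getContext('2d');
    const BLOCK = 10; // size of each pixel block, my own choice

    function drawFrame() {
      const w = canvas.width / BLOCK;
      const h = canvas.height / BLOCK;
      ctx.imageSmoothingEnabled = false;
      ctx.drawImage(video, 0, 0, w, h); // shrink the frame
      ctx.drawImage(canvas, 0, 0, w, h, 0, 0, canvas.width, canvas.height); // blow it back up
      requestAnimationFrame(drawFrame);
    }

    // RGB of the block at (col, row): sample one point inside it,
    // as in the MDN getImageData example.
    function blockRGB(col, row) {
      const [r, g, b] = ctx.getImageData(col * BLOCK, row * BLOCK, 1, 1).data;
      return { r, g, b };
    }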

Screenshot 2019-11-20 01.06.54

—–

b.) Converting RGB to Frequency

  • RGB to HSL 
    • After pixelating the live stream, I needed to convert RGB –> HSL (hue, saturation, lightness), then hue –> wavelength (nm), then wavelength (nm) –> frequency (THz). This keeps the conversion from color to sound scientifically grounded.
    • First things first: I converted the RGB to HSL in order to access the hue. With the hue in hand, I could move on to the wavelength conversion. Helpful resource for converting RGB to HSL: here. (A sketch of the conversion follows right after this list.)
    • Here’s my test code for pixelation with the rgb and hsl values printed in the innerHTML: here
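For reference, the standard RGB –> HSL formula looks like this (a minimal sketch; only the hue is needed downstream, and the function name is mine):

    // Standard RGB -> HSL conversion. r, g, b are in [0, 255];
    // returns h in degrees (0-360) and s, l in [0, 1].
    function rgbToHsl(r, g, b) {
      r /= 255; g /= 255; b /= 255;
      const max = Math.max(r, g, b), min = Math.min(r, g, b);
      const l = (max + min) / 2;
      if (max === min) return { h: 0, s: 0, l }; // achromatic: no hue
      const d = max - min;
      const s = l > 0.5 ? d / (2 - max - min) : d / (max + min);
      let h;
      switch (max) {
        case r: h = (g - b) / d + (g < b ? 6 : 0); break;
        case g: h = (b - r) / d + 2; break;
        default: h = (r - g) / d + 4;
      }
      return { h: h * 60, s, l };
    }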

Screenshot 2019-11-20 11.00.05

—–

c.) Convert Hue to Wavelength (nm)

Helpful visualizer for converting hue to wavelength: here.
stackoverflow – convert hue to wavelength
convert hue to wavelength

    // Convert hue to wavelength.
    // Estimating that the usable part of the visible spectrum is 450-620nm,
    // with wavelength (in nm) and hue value (in degrees, 0-270), you can improvise this:

    let wavelength = Math.ceil(620 - 170 / 270 * h);
    // e.g. h = 240 (blue) gives 620 - 151 = 469 nm

Checked using this site that converts color to wavelength (nm). For example, I tested to make sure blue came out around 450–470 nm (the formula above bottoms out at 450).

—–

d.) Convert Wavelength (nm) to Frequency (THz)

Helpful site for understanding and converting wavelength to frequency: here

Based on this equation:
wavelength (λ) = wave velocity (v) / frequency (f)

The code for converting from wavelength (nm) to frequency (THz) is this:

    // c = 3 × 10^8 m/s; with wavelength in nm, 3 × 10^5 / wl yields THz directly
    frequency = 3 * Math.pow(10, 5) / wl;

The above code works for converting! I checked it by making sure this blue lands around 600–668 THz (e.g., 450 nm gives 3 × 10^5 / 450 ≈ 667 THz).

Screenshot 2019-11-21 02.31.36

—–

e.) Using frequency (THz) to make sound

I was able to take the functions I wrote from midterms to get a sound that mapped the color frequency (400-789 THz) to a pitch frequency range (20-3000 Hz).
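Roughly, the mapping plus playback looks like this (a minimal sketch, not the exact midterm code – the function names and the WebAudio setup are mine):

    // Map a color frequency (THz) into an audible pitch (Hz) with a linear map.
    // Ranges from the post: 400-789 THz -> 20-3000 Hz.
    function thzToHz(thz) {
      const t = (thz - 400) / (789 - 400); // normalize to 0..1
      return 20 + t * (3000 - 20);         // scale into 20..3000 Hz
    }

    // Play the pitch for one pixel with a WebAudio oscillator.
    const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    function playPixelTone(thz, seconds = 0.2) {
      const osc = audioCtx.createOscillator();
      osc.frequency.value = thzToHz(thz);
      osc.connect(audioCtx.destination);
      osc.start();
      osc.stop(audioCtx.currentTime + seconds);
    }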

Previously, the sketch was giving me an interesting feedback-y type of sound. I wasn't sure what caused it, but Shawn added in this snippet of code, which helped give each pixel a distinct sound.

Next step is to figure out how to get the sound to play only once on mouseover.

—–

f.) Design for Play Modes

Example sketch layout for introduction to the Play app. This is where people can select which mode they want to try out.

Screenshot 2019-11-24 17.06.14.png

Example layout for the 2 different modes. They don't differ much visually, just in how the user can interact: Live mode has a playhead, so it's more like a soundtrack, while Interact mode lets you mouse over each color pixel.

 

Joy + Games Wk 10: Fruit Bowling

(video doesn’t have the audio)

For this week, I made a new game that simulates bowling with fruit! I came up with the idea after watching a very helpful bowling-in-Unity tutorial. Right now, the controls are: 1.) space bar to start bowling/move the watermelon forward, 2.) right arrow to move right, 3.) left arrow to move left, 4.) R to refresh.

There are many features that I still wanted to include – an intro page, a scoring system, changing which fruit to bowl with. I am also interested in putting this on itch.io! Will aim to do that this week!


Live Web Wk 10: Final Project Proposal

Sound + Color + Environment! 

Objective: When people take a photo of their surroundings using the app, the photo will turn into colored pixels. Each of these “pixels” will have a sound that matches the color. The goal is to create a tool that helps one experience the world in pure sounds and colors. I also intend for this to be an extension of my previous “Sound + Color” project. By having the more educational “color frequency” page from that project,  it will be easier to explain how I chose the sounds of the colors for this “Environment Page”.

191113_liveweb_final_sketch_1.jpg

Functions + Features:

  • Environment Page (mobile):
    • when user takes a photo –> photo pixelates into color blocks
    • user can click over one color block at a time to hear that sound
    • or user can click on the play button (at the bottom of the page) to hear the full soundtrack of all the colors
    • a sound wave that matches with the frequency of the color + sound
  • Environment Page (desktop):
    • same as above; the only difference is that the photo will be captured from the computer's camera
  • Environment Page will need to be both for mobile and web
  • Recreate the previous web version (with the Intro, Learn, and Draw pages) so that it is mobile friendly
  • Add an about page that explains the science a little more and has some of my research references

Here is an example of the pixelation effect that I am hoping to achieve:

Data Art Wk 8-10: Data & Publics

Website so far (map only works locally)
Code here

Eva and I worked together to better understand where our garbage goes. We were interested in finding out where our different types of trash end up – does it stay in New York, end up in a different state, or even get sent out of the country? First things first, we researched how trash is handled in New York. I wish I had more time to dig deeper into this topic, as there is a lot to explore. We managed to gather some facts at least.

a.) Research! Here are some notes captured through the research process: 

Notes on Guardian Article:
  • NYC generated over 3 million tons of household waste in 2015
  • types of garbage
    • mixed solid waste –> curb –> waste transfer station –> landfill or waste-to-energy plant
    • paper recyclables –> curb –> handling and recovery facility –> domestic or international paper mills
    • metals, glass and plastic –> handling and recovery facility –> domestic/international recyclers
    • * compost is not listed in the diagram below but is an important type to consider.

Where New York City Garbage Goes

  • NYC relies on a complex waste-management ecosystem encompassing 2 city agencies, 3 modes of transport (trucks, trains, and barges), 248 private waste-hauling companies, and temporary and permanent facilities
  • History of NYC waste management:
    • for most of its history, until the mid-1900s, the city's primary method of disposing of waste was to dump it into the ocean
    • at one point, 80% of the garbage went into the sea
    • the city used some of its garbage (ash, rubble, debris) to create artificial land –> increased its own size. Much of the city's land today, including some of its priciest neighborhoods, is built on garbage. E.g., a 1660s map shows how much of the city is made of rubble + debris. *Is this not amazing?!

A map of 1660s Manhattan overlaid on modern New York shows how much of the city’s land is manmade.

  • 2 waste systems: 1 public, 1 private
    • 3/4 of NY's garbage is generated by commercial businesses; most of it is rubble + debris from construction projects
    • the garbage-hauling industry has ties to organized crime
  • 12,000 tons of garbage each day
  • 2,230 collection trucks
  • moved to transfer facilities –> carted off to landfills located in various surrounding states – which are now nearly all at capacity
  • NY spends almost $1 billion per year on trash and recyclables collection

b.) After brainstorming, we realized that we had many functions we wanted this website to have. Some of the functions we thought of and designed up:

  • An intro animation page: gives you a general overview of some NYC garbage facts
  • A journey section: when you type in your zip code, it shows you the exact journey your trash takes – from curb to transfer station to landfill. Ideally, it will show you an image of exactly what each step looks like.
  • A map section: a way to see how much trash each neighborhood generates. We were also hoping to be able to filter by time, income and type.
  • A take action section: some action items that we can do to be better trash citizens
  • A resource page: we used many datasets and read some articles. It will be good and transparent to have a bibliography page.
  • A QR code + sticker campaign: we thought of having stickers placed on trash bins with QR codes. The QR code would show the route that the bin would take from school/home/work to landfill. We were hoping this would be a simple way to bring the data closer to people.

c.) We next came up with design and sketches! Eva worked on some very thorough UX layouts based on what we had brainstormed.

I made some visual designs based on her great UX layouts.

 


d.) Next we worked on coding this to life! Exciting!

e.) Conceptual mockups for our civic approach.
Still need to tweak the facts, make the QR code, and create the page that links the QR to the journey section of the website.

mockup_qr_stickers_1.jpg

f.) To be continued…
Since we couldn’t get the data for the journey map within this time frame, we will continue working on this. We also want to get the messaging right for the stickers with a stronger focus on the compost program!

We are excited to keep going!!!

 

In the meantime, we have made this click-through InVision prototype to show what we are imagining: https://emily511438.invisionapp.com/public/share/UKWU4NCMF


Understanding Networks Wk 10: Restful API Drawing Machine (Updated)

Live Drawing Show!

Goal:

  • The user will draw on the website's draw page, and the AxiDraw will update based on the user's drawing. There will be a camera recording the plotter; the recording will be sent to the website's live stream page so others can watch the drawing live.

    IMG_7722.jpg

Functionality:

  • Co-Draw allows people to contribute to a drawing using a web interface
  • Control the Axidraw’s position based on the user’s mouse x and y position on the website.
  • Include a camera that video records the Live Drawing show. The live stream will be posted on a separate page on the website.
  • Ideally: use socket.io to allow 2 people to draw at the same time.

REST API:

1.) Identify Axi Draw Machine

  • Address: /identify_axi
  • Method: ?


2.) Find the current position of the user's mouse x and mouse y on the website

Address: /pen_position
Method: POST

Body:

{
   "mouse_x": 2344,
   "mouse_y": 281,
   "state": 1  // pen state is 0 or 1 (up/down)
}

Response:

{
    "coordinates": [
        {
        "success":
            {
             "type": "200", // ok
             "mouse_x": "2344",
             "mouse_y": "281",
             "state": "1"
            },
         "error":
             {
               "type": "400", // error
               "details": "invalid inputs"
             }
         }
     ]
}
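For my own reference, here's roughly what the /pen_position route could look like on the server (a minimal Express sketch; the validation, variable names, and port are my own, and forwarding to the AxiDraw is stubbed out):

    const express = require('express');
    const app = express();
    app.use(express.json());

    // Last known pen position/state.
    let pen = { mouse_x: 0, mouse_y: 0, state: 0 };

    app.post('/pen_position', (req, res) => {
      const { mouse_x, mouse_y, state } = req.body;
      // Validate the inputs before moving the plotter.
      if (![mouse_x, mouse_y].every(Number.isFinite) || ![0, 1].includes(state)) {
        return res.status(400).json({ type: '400', details: 'invalid inputs' });
      }
      pen = { mouse_x, mouse_y, state };
      // ...forward `pen` to the AxiDraw here...
      res.status(200).json({ type: '200', ...pen });
    });

    app.listen(8080);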

3.) Set to the original position when the user presses the Reset button

Address: /original
Method: POST
Body:

{
   "mouse_x": 0,
   "mouse_y": 0,
   "state": 0 // off
}

Response: 

{
    "original_position": [
        {
        "success":
        {
            "mouse_x": 0,
            "mouse_y": 0,
            "state": 0 // off
         },
        "error":
        {
             "type": "400",
             "details": "invalid inputs"
         }
        }
    ]
}

Video for live stream

4.) Get channel 

Address: /channel
Method: GET
Body: 

Response: 

{
    "channel": [
        {
        "success":
        {
            "recording": "on", // turn channel on or off
            "channel_id": {channel_id}, // integer
            "width": 1920,  // px
            "height": 1080  // px
         },
        "error":
        {
             "type": "400",
             "details": "invalid inputs"
         }
        }
    ]
}

5.) Post channel data to client

Address: /channel
Method: POST

{
    "channel":
        {
            "recording": "on", // turn channel on or off
            "channel_id": {channel_id}, // integer
            "width": 1920,  // px
            "height": 1080  // px
        }
}


Live Web Wk 9: WebRTC Data Channels

Try it out!
Code here

For this week, I made a very slow type of chat messaging (calling it snail msg for now?). The idea is that you can only send one letter at a time. I guess I am making this as a reminder to slow down and to be ok with taking my time. Sometimes I expect things to arrive immediately – a thesis idea, great project concepts, someone's email/text response, skills, knowledge. It's helpful, especially right now, to remind myself not to rush. This is a super simple chat messaging system (a lot of the code was tweaked from the class example), but I enjoy the fact that it forces me to pay attention to every letter I type. The heart of it is sketched below.
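The core is just a data channel that ships one character per keypress (a minimal sketch; the channel label is my own, and the signaling/negotiation, omitted here, follows the class example):

    // Signaling/negotiation omitted - assume `pc` gets connected as in the class example.
    const pc = new RTCPeerConnection();
    const channel = pc.createDataChannel('snailmsg'); // label is my own choice

    document.addEventListener('keypress', (e) => {
      if (channel.readyState === 'open' && e.key.length === 1) {
        channel.send(e.key); // one letter at a time, on purpose
      }
    });

    channel.onmessage = (e) => {
      console.log('received letter:', e.data);
    };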

Notes & Questions:

  • WebRTC has three main JavaScript APIs: 1. MediaStream, 2. RTCPeerConnection, 3. RTCDataChannel.
  • Why do I keep getting "Resource interpreted as Stylesheet but transferred with MIME type text/html", leaving me unable to link my stylesheet? I have usually been able to do so in the past. What is different this time? I searched Stack Overflow but couldn't find a solution; in the meantime, I just added the style to the HTML page.
    Screenshot 2019-11-06 00.21.52
  • Had issues with creating a new div for each piece of data sent over the socket. Realized I was appending a new text node to the div when I should have been setting its innerHTML! That was the solution (sketched below).
  • Add a class to a newly appended div: newDiv.className = 'name of class'
  • Also getting errors when using my own fonts (an OTS parsing error); this thread seems to cover it: https://stackoverflow.com/questions/34133808/webpack-ots-parsing-error-loading-fonts
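What the div fix boils down to (a sketch; the socket event name, class name, and container id are my own):

    // For each message that arrives over the socket, make a fresh div,
    // set its innerHTML (not a text node), and give it a class.
    socket.on('message', (data) => {
      const newDiv = document.createElement('div');
      newDiv.innerHTML = data;  // this was the fix
      newDiv.className = 'msg'; // hypothetical class name
      document.getElementById('chat').appendChild(newDiv);
    });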


Lines to Remember:

npm install nedb

Understanding Networks Wk 9: Firewall

Assignment:
"Run a Linux host for several days, with a firewall in place and a public IP address. Make a table of all the IP addresses that attempt to connect to your host. Find out where they are located. What organizations are they associated with? What service providers are providing their IP addresses?"

a.) Step 1: Using my DigitalOcean Linux host, I installed and configured the ufw firewall and enabled TCP connections on the ports below. I did this by following this super helpful tutorial from the ITP Network site.

screen-shot-2019-11-04-at-11.14.18-pm.png

b.) Step 2: I collected a table of all the IP addresses that tried to connect to my host on Nov. 2, Nov. 3, and Nov. 4. I was planning to combine all 3 days' worth of data into 1 set, but that was just too large and hard to manage in Excel.

I decided to just focus on the Nov. 4 data, which was more than enough to work with (5,640 total hits). The image below shows a section of the full dataset.

screen-shot-2019-11-04-at-11.02.59-pm-e1572927798745.png

c.) Step 3: Using Excel, I was able to find the total hits from all sources, the number of unique sources, and the top 10 most frequent sources (including their IP addresses). The image below shows the results.

Screen Shot 2019-11-04 at 11.05.17 PM.png

Other Data (Nov 4): 

  • Total Hits from All the Sources: 5,640
  • Number of Unique Sources: 1,111

d.) Step 4: Using ipinfo.io, I tried to find where each of these 10 IP addresses comes from and the organizations they are associated with.

  • #1 hits: 185.156.73.52
    • Moscow, RU
    • OOO Patent-Media

screen-shot-2019-11-04-at-11.46.49-pm-e1572929309844.png

  • #2 hits: 185.176.27.254
    • Moscow, RU
    • Balkan Internet Exchange Ltd

185.176.27.254.png

  • #3 hits: 185.176.27.162
    • Moscow, RU
    • Balkan Internet Exchange Ltd

185.176.27.162.png

  • #4 hits: 80.82.64.73
    • Amsterdam, NL
    • Incrediserve.net

80.82.64.73.png

Side note: Incrediserve.net is an IP trading company! This is a dumb question, but do they mean IP addresses, or IP as in intellectual property?

Screen Shot 2019-11-05 at 12.03.51 AM

  • #5 hits: 96.250.119.232
    • New York City, NY
    • MCI Communications Services, Inc. Verizon Business

96.250.119.232.png

e.) Step 5: Reflection.

UM, why is someone from Russia constantly hitting my IP address?! And so many times in 1 day! Is this maybe tied to the hacking of our elections? I tried to look into this a little and found this forum. Some people there say it is "normal" to get probes from many countries like Russia, China, Vietnam, etc. Basically, I shouldn't let my biases get in the way, even though my top 3 sources are from Russia.

Thank You: 

  • So much gratitude to Rashida for answering my many questions about this!
  • Thank you for the straightforward guide to setting up the firewall and IP tables, Professor Tom! Having initially tried to do this assignment without the guide, I realized how confusing setting up a firewall and configuring the IP tables could have been. The internet has so many resources, but not all are structured so well!

Helpful resources for understanding ufw (initially, I forgot about the ITP tutorial above, so I found some forums/links online while trying to figure out what to do):

Lines to remember:

sudo dmesg | grep '\[UFW'   # kernel messages logged by ufw
grep UFW /var/log/syslog    # ufw entries in the syslog
/var/log                    # the directory where the logs live
sudo tail ufw.log           # last lines of the ufw log

remember (it's scp, not csp):
scp root@.....:/var/log/ufw.log .   # copy the ufw log down to my machine

Joy & Games Wk 8: Simulation

Assignment: make a simulation in Unity. I decided to simulate a calm, fall-inspired boat ride. This was chosen partly because of skill level, partly because of time, and partly because of all the beautiful yellow leaves I'm seeing! It isn't the most exciting or goofy interaction, but I've been feeling the need to calm down lately. Perhaps that is why I've ended up with something that is simply about staring at nature? I really should just go for a hike instead of simulating nature on a screen lol.


Live Web Wk 8: Pixel Manipulation

Live: https://el3015.itp.io:8103/index.html
Code

Excited for pixel manipulation! I didn't have much of a strong concept behind this assignment; I just wanted to play around and implement some features. A couple of functions I wanted:
a.) sliders to control different aspects of the color and pixels
b.) a way for the image and video to be downloaded
c.) a record button that doesn't have a preset time but is controlled by the user clicking start and stop

I was able to tackle the above features. Test code here. One compromise: I couldn't figure out how to download the video locally. Currently, it just saves to the server, and the captured image opens in a new window. Something to improve upon. Also, this is way too laggy! A sketch of the record/download flow I was going for is below.
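The user-controlled start/stop recording, with the local download I'm still after, would look roughly like this (a sketch, not my working code; the MediaRecorder-plus-Blob-URL approach is the standard pattern, and the element ids are mine):

    // Record the canvas between user-controlled start/stop clicks (feature c),
    // then save the result locally via a Blob URL instead of posting to the server.
    const canvas = document.querySelector('canvas');
    const startBtn = document.getElementById('start'); // hypothetical ids
    const stopBtn = document.getElementById('stop');

    const stream = canvas.captureStream(30); // 30 fps
    const recorder = new MediaRecorder(stream);
    const chunks = [];

    recorder.ondataavailable = (e) => chunks.push(e.data);
    recorder.onstop = () => {
      const blob = new Blob(chunks, { type: 'video/webm' });
      const a = document.createElement('a');
      a.href = URL.createObjectURL(blob);
      a.download = 'capture.webm';
      a.click(); // triggers a local download
    };

    startBtn.onclick = () => { chunks.length = 0; recorder.start(); };
    stopBtn.onclick = () => recorder.stop();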

Originally, I actually wanted to make a slit-scan video capture because it's such a cool (even if cliché) effect. I did some research and tried a few things, but ended up not figuring it out. It's possible that looking at all the JavaScript example code for slit-scans psyched me out. Hoping to try again at some point… so keeping these reference links for my future self.

References for slit-scan:


Notes to remember:

class*="col-"   /* attribute selector: matches any class containing "col-" */
em is relative to the font-size of its direct or nearest parent; rem is relative only to the html (root) font-size.

Questions:

  • Why do none of my apps show up when I run "forever list" in the terminal? The apps are all still running, though.
  • The window is very laggy; I'm not exactly sure why. My guess is the threshold equation. I wonder if there is a way to minimize the lag.
  • How can I download the video blob locally when the user stops recording?

Data Art Wk 5-7: Text Archive

View live: https://emilylin-itp.github.io/data-art/wk5-7-textarchive/language_dep_final/

Code here: https://github.com/emilylin-itp/data-art/tree/gh-pages/wk5-7-textarchive/language_dep_final

***Not very mobile friendly! Will really aim to get better at responsiveness!

Objective:

I am interested in the connection between language and emotional well-being. Is there a way to spot the signs of depression, anxiety, or suicidal tendencies based on the words we use and how we use them? In verbal communication, I think people often don't say exactly how they feel, for the sake of keeping it together. But is there a subconscious way people who are depressed use words that subtly indicates their mental state? Is there a way to spot who is struggling even if they don't explicitly say they are? I just want to know if there is a way to read between the lines for depression.

My hypothesis: there is a language to depression. By looking into the work of writers who have killed themselves, I am hoping to test if this theory rings true. Could be wrong, who knows… but curious to see what the text analysis will show.

Credits: 

Many thanks to Genevieve for the conceptual feedback and technical resources! The brainstorming session and references were so helpful.

Steps:
a.) Research and reading:

I've been looking into different articles about the connection between words and depression. Many findings suggest that pronouns, absolutist words, and auxiliary words are important indicators of depression. There is less of a link between specific words and suicide, but crisistrend.org identified the top 35 words used when people called/texted about a specific mental health issue. Based on these findings, I chose key words to use for filtering the poets' work.

Screenshot 2019-10-17 17.21.28 Screen Shot 2019-10-20 at 10.17.40 AM

b.) Deciding writers + text:

Then I chose my 3 writers. I decided to stick with poets only, because the word count between novelists and poets is just too different. I wanted to include Hemingway and Ingrid Chang, but using one of their books created such a skew in the word count. I eventually chose 3 Confessionalist poets (Sexton, Plath, Berryman) who died by suicide, and kept it to American poets because who knows what gets lost in the translation of poems. Very thankful to https://www.poetryfoundation.org and https://www.poets.org for providing a database for us all.

c.) Concept and design:

There was a lot of information I wanted to put on this site that didn't make it into the coded version. I had hoped to show the correlation between what was happening in these poets' personal lives and the content of their poems. This required keeping a timeline of both their lives and work. The tricky part is that these poems don't have great time stamps. Unlike novels, poems often get published in collected-poems books and don't have exact years of when they were written. Some poems were even published after the poet's death. In the end, I decided to omit years as an extra data point (though I really wanted to show the correlation) and just include a biography timeline. Not sure if this is working, though.

d.) Coding it!

I followed along with Shiffman's A-to-Z tutorials on concordance and sentence histograms to get a better understanding of how to work with text. Here are some of the test codes.

I compiled all the poems I wanted to use into txt files. Using the RiTa.js documentation, I was able to find key words in context. kwic() splits the sentence into 2 halves at the word: the word is its own variable, and the phrase before the word gets pushed into one array while the phrase after gets pushed into another. A plain-JS sketch of the idea is below.
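This is the mechanic as I understand it, sketched in plain JS (not RiTa's actual implementation; the function and parameter names are mine):

    // Split the text at each occurrence of the keyword: the words before it
    // go into one array, the words after it into another.
    function kwic(text, word, span = 5) {
      const tokens = text.split(/\s+/);
      const before = [], after = [];
      tokens.forEach((t, i) => {
        if (t.toLowerCase().replace(/[^a-z]/g, '') === word.toLowerCase()) {
          before.push(tokens.slice(Math.max(0, i - span), i).join(' '));
          after.push(tokens.slice(i + 1, i + 1 + span).join(' '));
        }
      });
      return { before, after };
    }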

Questions (No Answers Yet):

  • When using RiTa.js's kwic, it gives me a duplicate of the array. Also, some weird "undefined" values show up when there are no special characters in the text.
    • I managed to hack around it by putting the duplicate data into an empty string so it doesn't print twice, but I would like to know what is really going on here. Why is it printing a duplicate version of the array?
      Screen Shot 2019-10-20 at 11.03.08 PM.png

 
