Hand recognition

  • FINALLY working…
  • Works well with closed-fist :) recognition, but very finicky with open-hand :( recognition


  • Won’t be used in my project. Just procrastinated like crazy.
  • Issues to solve
    • Transform position from 2d to 3d
    • Recognition stability



This week I did several small tests, mainly tech stuff. And I found out that it’s really not easy to come up with a good scenario.

Wireless Wooo_v2

Realized that only the laptop needs to be on itpsandbox to act as the server; all the phones can connect to NYU wifi and reach the laptop server by going to the laptop’s IP address. The whole world can access my laptop if they know my IP address! So crazy!! Mind blown!!!

Virtual Arm

  • Trying to figure out how to get 360° rotation from the accelerometer and magnetometer of the Bluetooth TI SensorTag.
  • So far I get pitch and roll from the accelerometer and yaw from the magnetometer, but it works kind of weird when using all three at the same time. Works fine if I just use pitch and yaw.
  • Bluetooth data transfer is not as free as serial communication. Might have to switch to XBee/radio?  :/
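For future-me: a minimal sketch of the usual tilt-compensation math in plain Javascript. The axis conventions and signs here are my assumption; the SensorTag’s actual axes may need flipping.

```javascript
// Pitch/roll from the accelerometer (device at rest, gravity only),
// and tilt-compensated yaw from the magnetometer. Without the tilt
// compensation, yaw goes weird as soon as the device pitches or rolls.
function orientationFromSensors(ax, ay, az, mx, my, mz) {
  const roll  = Math.atan2(ay, az);
  const pitch = Math.atan2(-ax, Math.sqrt(ay * ay + az * az));

  // Rotate the magnetometer reading back to the horizontal plane.
  const mxH = mx * Math.cos(pitch) + mz * Math.sin(pitch);
  const myH = mx * Math.sin(roll) * Math.sin(pitch) +
              my * Math.cos(roll) -
              mz * Math.sin(roll) * Math.cos(pitch);
  const yaw = Math.atan2(-myH, mxH);

  return { pitch, roll, yaw };
}
```

Angles come out in radians; flat and pointing “north” should give roughly (0, 0, 0).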

W/ projector!

  • I bought a SlimPort micro-USB to HDMI adapter cable and tested it with projectors. It didn’t work with the Samsung Pico projector (HDMI to VGA to 15-pin VGA), which is very small. But it worked with the medium-size ones (HDMI to HDMI)!

Control Room 2.0

Following the new strategy (get one done in detail and change it based on feedback, while experimenting freely with the others), I chose Control Room as my target mask, since it already had a physical form.

Things planned to change:

  1. Daily use
    • How can it integrate into daily life? Use it in front of laptop, use it outside home?
      • –> Only one window is remote view, others all local view
      • –> Portable home: always connected with your home. So you can see/talk with family members all the time, and your pets too
      • –> Open field option of relaxing escape for daily use
    • Based on feedback from user testing, people want to see the real world more (= see more windows, e.g. be able to see their hands if they want)
      • –> More windows to see more clearly
  2. More comfortable (suggestion from teacher Despina)
    • Smell good –> To do
    • Easy to put on, pillow in the back? –> To do
    • Aim for longer wearing time, since it represents Home
  3. Interactions
    • Self: based on where your head turns, things in the house react to it
    • Others
      • Open the door –> explode the house


  • Scenario
    • T_T so difficult
  • Tech
    • Quaternion & Euler
      • “Ahhhhhhhhhhhhhhhhhhhh!!!!  TvT” –> famous Fxck & Yeah moment
      • Spent almost three hours trying to rotate the body with a quaternion while only changing rotation.y, and then found out that Three.js’s Euler has a function that can just convert a quaternion to Euler angles for you. You just need the source quaternion and the order of axes.
        • setFromQuaternion ( q, order )
          • q — Quaternion quaternion must be normalized
          • order — string Order of axes, defaults to ‘XYZ’
    • Head sync
    • Exchange views
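For the record, `setFromQuaternion` is roughly doing something like this under the hood; here’s a hand-rolled plain-JS sketch of the ‘XYZ’ case, assuming (like the docs say) that the quaternion is normalized:

```javascript
// Build the rotation matrix from the quaternion (w, x, y, z), then
// read the XYZ Euler angles back out of it.
function quaternionToEulerXYZ(w, x, y, z) {
  const m11 = 1 - 2 * (y * y + z * z);
  const m12 = 2 * (x * y - w * z);
  const m13 = 2 * (x * z + w * y);
  const m22 = 1 - 2 * (x * x + z * z);
  const m23 = 2 * (y * z - w * x);
  const m32 = 2 * (y * z + w * x);
  const m33 = 1 - 2 * (x * x + y * y);

  const ey = Math.asin(Math.min(1, Math.max(-1, m13)));
  let ex, ez;
  if (Math.abs(m13) < 0.9999999) {
    ex = Math.atan2(-m23, m33);
    ez = Math.atan2(-m12, m11);
  } else {
    // Gimbal lock: pick ez = 0 and fold everything into ex.
    ex = Math.atan2(m32, m22);
    ez = 0;
  }
  return [ex, ey, ez];
}
```

In practice, just let Three.js do it: `new THREE.Euler().setFromQuaternion(q, 'XYZ')`.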

As One

Idea Sketch



Computer vision

  • Talked with Kyle McDonald
    • My project reminded him of anonymous hugging
    • Suggested testing different ways to maximize performance: set a benchmark and see which part costs the most
  • Face detection
    • Use a small canvas for executing face detection, and display the image on a bigger canvas.
    • From the test pics below you can see it’s much better to analyze with the smaller canvas, but there’s no big difference between displaying on a big or small canvas, so for better resolution it seems OK to display on the bigger canvas.
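The trick, roughly sketched (function names here are mine, not a library’s): run detection on the tiny canvas, then scale the hits back up for the big display canvas.

```javascript
// Scale a detection rectangle from small-canvas coordinates up to
// the big display canvas.
function scaleRect(rect, smallW, smallH, bigW, bigH) {
  const sx = bigW / smallW;
  const sy = bigH / smallH;
  return { x: rect.x * sx, y: rect.y * sy, w: rect.w * sx, h: rect.h * sy };
}

// Browser side (not runnable here): draw each video frame twice,
// detect on the small one, display the big one.
function drawFrames(video, smallCtx, bigCtx) {
  smallCtx.drawImage(video, 0, 0, smallCtx.canvas.width, smallCtx.canvas.height);
  bigCtx.drawImage(video, 0, 0, bigCtx.canvas.width, bigCtx.canvas.height);
  // ...run face detection on smallCtx's pixels, then scaleRect() the hits.
}
```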






Greyscale + Blur

Multiple-face detection works, yet it’s not stable due to changing brightness.

  • Collect personal moments by taking photos –> they accompany you all the time, comforting you as you confront the unknown
  • Affecting virtual world


Wireless Wooo


Thanks to Andy‘s advice on hooking up my laptop’s localhost through the itpsandbox wifi, now I can run code on a mobile phone using the laptop as the server! (P.S. It also works at home; it’s just that at NYU, because of the security issue, ITPSANDBOX is needed.)

*Note* It’s not advised to run the server elsewhere (e.g. Heroku, DigitalOcean) because it takes more time to transfer the data back and forth. Localhost on the laptop is the best option for a proof of concept!


Paper Mache

–> Decided to do it after finalizing the virtual content.


Mask Iteration

– doodle of mask ideas


– 3D models built in Maya


– Pattern unfold by Pepakura


– Prototyping with paper


– Prototyping with fabric


– Mask iteration display –> Mask Gang


From left to right, quick-and-dirty designs based on the different functions:

  • Spikes – being inhuman, scary and aggressive
  • ChickenCow – ???
  • Box – attached to body, so the mask is hard to take off
  • Ears – the sound is magnified by funnels sticking out near the ears, so the user becomes more aware of the surrounding environment

Virtual World Construction

– doodle of world: 1) Closed, 2) Open


– Coding coding coding….



Camera Feed

For my first experiment, in order to create a strong nonsense moment, I want to connect the virtual world with reality, so as to play with perception and create conflict. One way to do this is importing the camera feed from the mobile phone. This way, the user can not only see the real world, but also experience illogical, contradictory events in the virtual world, triggered by the real world.

Possible scenario

scenario_01a –> Contradicts reality
scenario_01b –> Think everyone is a monkey
scenario_01c –> Encourages saying Hi
scenario_01d –> Focus enhancement


Real + Virtual Mashup


After talking with professor Shawn Van Every, I decided to use the browser instead of an app as the platform first, and test the limitations of the browser and Javascript. With flexible HTML5 and the Chrome browser, I can get the camera feed from a mobile phone just like from a laptop’s webcam, and the getUserMedia and MediaStreamTrack functions of the WebRTC API allow users to choose a camera and set up constraints as they want. Below is the gist of it:

  • Get the user/rear camera into a video element
  • Put it into a canvas; use the canvas as the texture for a plane geometry
  • Translate the position of the geometry but keep the center of the plane mesh (geometry + material = mesh) at the center
  • Rotate the mesh as the camera rotates (== user’s head rotation)
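The steps above, sketched roughly in code. The FOV, distance, and function names are my own assumptions, not the actual project code, and the Three.js wiring is browser-only.

```javascript
// Pure helper: size a plane so it fills the camera's view at a given
// distance, for a vertical field of view in degrees.
function planeSizeAtDistance(fovDeg, aspect, dist) {
  const height = 2 * dist * Math.tan((fovDeg * Math.PI / 180) / 2);
  return { width: height * aspect, height };
}

// Browser-only wiring (not runnable here): camera feed -> canvas -> texture.
function makeCameraPlane(THREE, video, canvas) {
  const texture = new THREE.Texture(canvas);
  const { width, height } =
    planeSizeAtDistance(75, video.videoWidth / video.videoHeight, 5);
  const mesh = new THREE.Mesh(
    new THREE.PlaneGeometry(width, height),
    new THREE.MeshBasicMaterial({ map: texture })
  );
  // Each frame: draw the <video> into the canvas, then flag the texture:
  //   canvas.getContext('2d').drawImage(video, 0, 0);
  //   texture.needsUpdate = true;
  return mesh;
}
```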


Computer Vision on Phone


Nonsense is built on sense, so in order to make nonsense in the virtual world, the V World needs to know what’s going on in the real world. My first attempt is using computer vision to analyze the image captured from the camera. Below are the computer vision JS libraries I found:

  • https://github.com/inspirit/jsfeat
  • http://trackingjs.com/
  • https://github.com/auduno/clmtrackr (face)
  • https://github.com/auduno/headtrackr (good for face)
  • https://github.com/sightmachine/simplecv-js
  • https://github.com/peterbraden/node-opencv
  • https://cloudcv.io/

Issue #1 – Currently I use jsfeat to grayscale the footage first, and then find the bright areas pixel by pixel. It’s obviously slow. The next step will be trying the combination of OpenCV and Node.js (thanks to Pedro), to see whether “perform CPU-intense image processing routines in the cloud, let the Node.js server handle client requests and call the C++ back-end” will optimize the performance or not.
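For reference, that per-pixel pass boils down to something like this sketch (the threshold value and function name are arbitrary, mine):

```javascript
// Grayscale each RGBA pixel with the standard luma weights, then
// collect the (x, y) positions above a brightness threshold.
// Walking every pixel in JS like this is exactly the slow part.
function findBrightPixels(rgba, width, threshold) {
  const hits = [];
  for (let i = 0; i < rgba.length; i += 4) {
    const gray = 0.299 * rgba[i] + 0.587 * rgba[i + 1] + 0.114 * rgba[i + 2];
    if (gray >= threshold) {
      const p = i / 4;
      hits.push({ x: p % width, y: Math.floor(p / width) });
    }
  }
  return hits;
}
```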

Issue #2 – Have to figure out how to translate a pixel location from the canvas to the 3D world, since letting eyeScreen rotate with the camera (head) makes everything complicated.
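One possible approach, sketched under the assumption that the pixel should land on the eyeScreen plane itself: map the canvas pixel into the plane’s *local* space, and let the plane’s own world transform handle the head rotation instead of doing it by hand.

```javascript
// Canvas origin is top-left with y going down; a plane's local origin
// is its center with y going up. Map between the two.
function canvasToPlaneLocal(px, py, canvasW, canvasH, planeW, planeH) {
  const x = (px / canvasW - 0.5) * planeW;
  const y = (0.5 - py / canvasH) * planeH;
  return { x, y, z: 0 };
}
// Then, in Three.js, something like
//   mesh.localToWorld(new THREE.Vector3(x, y, z))
// would give the world position even as the mesh rotates with the head.
```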