For my first experiment, I want to create a strong sense of nonsense by connecting the virtual world with reality, playing with perception and creating conflict. One way to do this is to import the camera feed from the mobile phone. This way, the user can not only see the real world, but also experience illogical, contradictory events in the virtual world that are triggered by the real world.
Real + Virtual Mashup
- Get the user-facing/rear camera feed into a video element
- Draw the video into a canvas; use the canvas as a texture for a plane geometry
- Translate the geometry so that the center of the plane mesh (geometry + material = mesh) stays at the center of the view
- Rotate the mesh together with the camera (i.e., with the user's head rotation)
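The steps above can be sketched roughly as follows. This is browser-only code and assumes Three.js is loaded with a `scene`, `camera`, and render loop already set up; the canvas size and mesh name `eyeScreen` are my assumptions, not the original implementation.

```javascript
// 1. Get the rear camera feed into a video element.
const video = document.createElement('video');
video.autoplay = true;
video.playsInline = true;
navigator.mediaDevices
  .getUserMedia({ video: { facingMode: 'environment' } })
  .then((stream) => { video.srcObject = stream; });

// 2. Draw the video into a canvas and use the canvas as a texture.
const canvas = document.createElement('canvas');
canvas.width = 640;
canvas.height = 480;
const ctx = canvas.getContext('2d');
const texture = new THREE.CanvasTexture(canvas);

// 3. Plane geometry + material = mesh.
const eyeScreen = new THREE.Mesh(
  new THREE.PlaneGeometry(4, 3),
  new THREE.MeshBasicMaterial({ map: texture })
);

// 4. Parent the mesh to the camera: its center stays centered in view,
//    and it rotates with the camera (i.e., with the user's head).
eyeScreen.position.set(0, 0, -5);
camera.add(eyeScreen);
scene.add(camera);

// Per frame: copy the latest video frame into the canvas texture.
function updateEyeScreen() {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  texture.needsUpdate = true;
}
```

Parenting the plane to the camera is one way to satisfy both of the last two steps at once, since a child mesh inherits the camera's rotation automatically.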
Computer Vision on Phone
Nonsense is built on sense, so to create nonsense in the virtual world, the V World needs to know what's going on in the real world. My first attempt uses computer vision to analyze the image captured from the camera. Below are the computer vision JS libraries I found:
- https://github.com/auduno/clmtrackr (face)
- https://github.com/auduno/headtrackr (good for face)
Issue #1 – Currently I use jsfeat to grayscale the footage first, and then find the bright areas pixel by pixel. It's obviously slow. The next step will be to try combining OpenCV and Node.js (thanks to Pedro), to see whether "performing CPU-intense image processing routines in the cloud, letting the Node.js server handle client requests and call the C++ back-end" improves performance or not.
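For reference, the current slow approach amounts to something like the sketch below: grayscale an RGBA frame (as returned by `ctx.getImageData`), then scan it pixel by pixel for values above a brightness threshold. The function names and threshold are mine, not from the actual code.

```javascript
// Convert an RGBA pixel buffer to a grayscale buffer using luma weights.
function toGrayscale(rgba) {
  const gray = new Uint8ClampedArray(rgba.length / 4);
  for (let i = 0; i < gray.length; i++) {
    const r = rgba[i * 4];
    const g = rgba[i * 4 + 1];
    const b = rgba[i * 4 + 2];
    gray[i] = 0.299 * r + 0.587 * g + 0.114 * b; // standard luma weights
  }
  return gray;
}

// Scan every pixel and collect {x, y} locations above the threshold.
// This per-pixel loop on the main thread is what makes it slow.
function findBrightPixels(gray, width, threshold) {
  const hits = [];
  for (let i = 0; i < gray.length; i++) {
    if (gray[i] >= threshold) {
      hits.push({ x: i % width, y: Math.floor(i / width) });
    }
  }
  return hits;
}
```

At 640×480 this is ~300k iterations per frame in JavaScript, which is why offloading to a native OpenCV back-end is appealing.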
Issue #2 – I have to figure out how to translate a pixel location from the canvas into the 3D world, since letting eyeScreen rotate with the camera (head) makes everything more complicated.
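One possible first step for this translation, sketched under my own assumptions: convert the canvas pixel location to normalized device coordinates (NDC, -1 to 1), which is the form Three.js expects for unprojection.

```javascript
// Map a pixel (px, py) on the video canvas to NDC.
// Canvas y grows downward, NDC y grows upward, hence the flip.
function canvasToNDC(px, py, width, height) {
  return {
    x: (px / width) * 2 - 1,
    y: -((py / height) * 2 - 1),
  };
}
```

From there, a `THREE.Raycaster` could take over: `raycaster.setFromCamera(ndc, camera)` followed by `raycaster.intersectObject(eyeScreen)` works in the mesh's world space, so the fact that eyeScreen rotates with the head would be handled by the intersection test rather than by hand-written trigonometry. Whether that matches the intended mapping here is still an open question.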