Project Description
2663 is a virtual reality experience that experiments with binaural audio and live haptics, in which the visitor is a cyborg receiving their monthly system update. The performance starts as soon as the visitor enters the room. They are led by a doctor, played by an actor, who helps the visitor put on the headset and headphones. The doctor tells the visitor to press the play button on the video to begin their system update, and explains that he will leave the room once they do so. In the virtual reality experience, the visitor, now a cyborg, runs through a series of diagnostic tests -- audio, calibration, and visuals (brightness, saturation, and warm and cool colors) -- until the system overheats, cools down, and completes its update. During these tests, backstage performers emerge from behind the curtain and sync the haptics with the audio cues streaming live from the headset. Once the update is finished, the doctor re-enters the room, removes the visitor's headset, welcomes them back into reality, and guides them to the exit, saying that he hopes to see them again.
Perspective and Context
Our group project was inspired by the reading “VR / AR Fundamentals — 3) Other Senses (Touch, Smell, Taste, Mind)”, which discusses the capabilities beyond visual and audio input that contribute to an immersive experience. In the Haptics section, we came across many different ideas for building up an immersive experience, e.g., delivering information haptically through “skin hearing”. For the core of our project, where real-time haptics are in sync with the video playing in the headset, we drew on the existence of “complete VR haptic suits”, most of which use body-part-specific vibration motors that operate in sync with a VR game. The “Rubber Hand Illusion” gave our project the psychological background for creating such a haptics-focused VR piece. We believe that by syncing haptics with the video, we can fool visitors into believing that what they see, who they are within the VR experience, and what they feel are real.

Initial Storyboard Drafting

Our project takes the form of a 2-minute performance experience, inspired by Draw Me Close, another live performance project staged at the National Theatre in London. We recognized the power of VR not just as a tool for creating gamified experiences, but also as a new media platform for storytelling. Because VR engages multiple senses at once, visitors are more likely to attach themselves to the piece and to carry their reflections on the performance back into real life.

We were also inspired by the television show Black Mirror, specifically the episode “White Christmas”, in which one of the characters becomes an operating system and can be controlled by a person in real life. We were interested in the idea of being inside an electronic system and in how a person on the outside can control the interface. We wanted to explore how a visitor would feel from the point of view of an electronic system and what kinds of sensations they would experience. In particular, we explored the feeling of disconnection and unreality familiar from sci-fi shows, and the literal feeling of being a machine -- feeling a finger touching the screen, or the fan running in your computer when it is overworked and needs to cool down.
Development & Technical Implementation
Our project went through three iterations. In the first iteration, we solidified an idea, a draft of the script, and a storyboard, under the assumption that our VR experience would be room-dependent. We were inspired by the white interior of the room, which reminded us of the Black Mirror episode “White Christmas.” We combined Zane’s idea of going to a doctor’s appointment with Robin’s idea of being inside an electronic system, and decided we wanted an experience in which the visitor sits inside an interface, looking out at the person using the system -- who is, in fact, themselves. We all collaborated on building the idea. During the filming process, Robin recorded the video with the Mirage, Zane acted, and Miki did the system voice-over. During post-production, Robin and Miki synced the audio in Adobe Premiere Pro.

After using the Mirage VR camera to shoot test footage, our project went through another iteration, polishing all components: idea, script, and storyboard. This time we reached the consensus that the visitor would be a cyborg receiving a system update. Because the plot took place in a virtual space and was no longer room-dependent, we also chose to shoot in front of a green screen for the first time, on the 4th floor. Zane acted, Robin recorded the video and binaural audio, and Miki did the voice-over. Shooting in front of the green screen gave us practice editing green-screen footage ahead of the final iteration.

In our final iteration, we shot again in front of a better green screen on the 9th floor. Zane acted, Robin recorded the video, and Miki did the narration to guide us during the editing process. The voice-over and the binaural audio were recorded separately. Once Brian had synced the audio in Premiere Pro, Miki and Robin rotoscoped and masked the footage in After Effects. Robin also added audio effects to make the experience richer and more immersive, and to provide audio cues for the haptics. She also added the images, text, and transitions with some help from Miki. After seeing the footage for the first time, we decided to make additional changes, shrinking the text and images. After all of these fixes, however, we noticed that the stereoscopy and disparity were noticeably off. Robin and Miki attempted to fix these errors. With no method other than trial and error, we asked Dave to help us understand how to resize and position text and images in VR videos in After Effects, and we rendered four versions with different stereoscopic angles to find the best combination for the final render. Zane kept track of the final rendering.
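For readers unfamiliar with editing 360 footage, the sketch below illustrates the kind of mapping we had to reason about when positioning text and images: a viewing direction (yaw and pitch) corresponds to a pixel location in the equirectangular frame, and the right-eye copy is shifted slightly to control disparity. This is a minimal Python illustration, not our actual After Effects workflow; the frame size, over/under layout, and disparity value are assumptions chosen only for the example.

```python
# Minimal sketch of overlay placement in a stereoscopic (over/under) equirectangular frame.
# Frame size, layout, and disparity below are illustrative assumptions, not our project settings.

FRAME_W, FRAME_H = 3840, 3840   # assumed over/under stereoscopic frame
EYE_H = FRAME_H // 2            # each eye occupies half the frame height

def equirect_position(yaw_deg, pitch_deg, eye="left", disparity_px=0):
    """Map a viewing direction to pixel coordinates in the equirectangular frame.

    yaw_deg:      0 = straight ahead, +90 = to the viewer's right
    pitch_deg:    0 = horizon, +90 = straight up
    disparity_px: horizontal shift applied to the right-eye copy so the
                  overlay converges at a finite distance instead of at infinity
    """
    x = (yaw_deg / 360.0 + 0.5) * FRAME_W
    y = (0.5 - pitch_deg / 180.0) * EYE_H
    if eye == "right":
        x += disparity_px
        y += EYE_H              # right eye lives in the bottom half of the frame
    return round(x) % FRAME_W, round(y)

# Example: a caption placed slightly below the horizon, straight ahead.
print(equirect_position(0, -10, "left"))
print(equirect_position(0, -10, "right", disparity_px=12))
```

The same yaw, pitch, and disparity reasoning applies when placing elements visually through the VR tools in After Effects, which is roughly what "positioning in a sphere video" came to mean for us.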
During the performance, all group members participated. Brian recorded video for documentation, Robin and Miki wrote the written documentation, and Brian, Miki, and Robin worked on the final video documentation.
Conclusion
The overall project experience pushed us to think more deeply about the concept of VR. In making such an experimental piece, we researched not only a number of innovative VR projects but also the psychological and technical theories related to the field. In the process, we thought about further possible connections between haptics and VR, and how the two together build a more immersive experience. Through this creative initiative, all the team members gained more insight into the field and explored its potential for future development.

One area in which we did not succeed was perfecting the final render. There were some glitches, and the stereoscopy and disparity were still off, largely because we lacked the time to test the best positioning and to re-render. To create a more immersive experience, the video should also have had better visual effects and richer audio to support the feeling of being inside a machine. In addition, it would be interesting to sync some haptics with the display-settings portion of the experience, and to do some motion tracking for a more polished final cut. Another small detail that would deepen the immersion is the illusion of a screen: letting the visitor see a finger pressed “flat” against the surface of their vision, reinforcing the sense that they are a cyborg whose sight sits behind a screen.

However, from a technical and user-experience perspective, we learned a lot from this project, from both our failures and our successes. We learned how to edit virtual reality videos shot on green screen, and how to composite text and images into virtual reality videos in After Effects. We discovered that in order to fix stereoscopy and disparity in virtual reality videos, one needs to imagine how positioning works in a spherical video. To orchestrate a performance of live haptics with virtual reality, we learned the importance of thoroughly planning out the user experience, starting before the visitor puts on the headset and ending after they take it off. All of these lessons helped us create an illusion of the genuine and blur the line between the real and the unreal.
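To make the stereoscopy lesson concrete, here is a rough back-of-envelope example (the numbers are assumptions for illustration, not measurements from our renders): under a simple small-angle model, the distance at which a stereoscopic overlay appears to float is roughly the interpupillary distance divided by the angular disparity between the two eyes' copies.

```python
import math

# Back-of-envelope model: apparent distance ~= IPD / angular disparity (small-angle approximation).
# The IPD value and disparity angles below are assumptions for illustration only.

IPD = 0.064  # assumed average interpupillary distance, in metres

def apparent_distance(disparity_deg):
    """Approximate distance (in metres) at which a stereoscopic overlay appears to sit."""
    disparity_rad = math.radians(disparity_deg)
    return float("inf") if disparity_rad <= 0 else IPD / disparity_rad

for deg in (0.1, 0.5, 1.0, 2.0):
    print(f"{deg:>4} degrees of disparity -> overlay reads as roughly {apparent_distance(deg):.1f} m away")
```

Under this simplified model, a fraction of a degree of extra disparity shifts an overlay by many metres of perceived depth, which is consistent with how sensitive our renders felt to small adjustments.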