Who's in Charge of the Augmented City?
I’m sitting in a building at New York University’s Tandon School of Engineering in Brooklyn, wires sprawling across my body, while I stare at a 98-inch TV screen. Glasses with inward-facing cameras are tracking my pupils; a plaster wrapped around my finger is measuring my perspiration; cheek pads are monitoring whether I’m smiling; another device tracks my heart rate; and an electroencephalogram (EEG) headset—a contraption with wet, squidgy nodes on the ends of prongs, similar to a head massager—monitors my brain activity. All the while, my face is being filmed.
The researchers want to quantifiably measure my responses and stress levels in two slightly different virtual environments. On the screen I am looking at a basic scene created in Google SketchUp. The mock-ups of an existing NYU building are identical except for a couple of minor differences: one has wider windows and lighter wall colors. Within each environment I have to navigate my way up a set of stairs onto a mezzanine level, open a door, switch on a thermostat, and return.
Hardly thrilling (that’s the point), but Semiha Ergan, an assistant professor at NYU, is keen to see whether the architectural theory behind such rudimentary design elements holds true, and to quantitatively measure how much of a difference they make. Though the researchers couldn’t disclose the architecture firms actively interested in this research, I was told that a few well-known “three-letter” firms were keeping a close eye on things.
As you might have guessed, Ergan’s hypothesis was right. In the supposedly less stressful environment with the wider windows, I perspired less, my heart rate was lower, and my facial expressions along with the EEG readings also indicated lower stress levels. The ramifications of this research pertain not only to virtual reality (VR) but also to augmented reality (AR) services, in which computer-generated visuals are superimposed on a tech user’s view of the real world.