A man sits outside a café in North Carolina. His wife, in Paris, calls him and tells him she would like to have lunch. He puts on a pair of eyeglasses and presses a button. Then, suddenly, she is sitting across the table from him. He gets up and sits next to her, and she turns to her left to face him, as if she were really there.
This is not some science fiction story, according to Henry Fuchs, a computer science professor at UNC-Chapel Hill. This is the augmented reality dream: a pair of light, comfortable glasses that easily project an image into the wearer’s environment. And researchers in UNC’s computer science department are working on this kind of technology right now. Although still in the very early stages and not ready for public consumption, this tool could change how we interact with each other.
“You want to be with people. Right now we’re limited in that way,” Fuchs says. “How lame is our social network now? It’s like telegraphs. There’s no subtlety.”
Fuchs sees augmented reality devices becoming part of an immersive social network. Facebook has already put $2 billion into researching augmented reality headsets, and Fuchs speculates that the company sees augmented reality as a way to gain an edge in the social networking market.
40 years of research
Fuchs started work on augmented reality technology in 1970 at the University of Utah. He studied with Professor Ivan Sutherland, who designed and built the first head-mounted display that could place simple line drawings into the user’s view.
“People would lie on the floor and we would do laser scans of them. You could sort of see that maybe there was a human there,” Fuchs says with a laugh.
But that was 45 years ago, and virtual reality technology has advanced at an incredible pace since then. UNC's current headset uses tiny light sources to produce an image. Andrew Maimone, a UNC computer science PhD graduate, calls the design a pinlight display, a model he proposed and worked on while at UNC. Unlike pinholes, which collect light to make an image, the pinlights on this device emit light to create an image. Worn on the viewer's head with a screen in front of the eyes, the display combines digital images with the real world, creating the illusion that those images are present in the user's environment.
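The pinlight optics themselves are beyond the scope of this article, but the basic see-through idea, adding a rendered image on top of the light already arriving from the room, can be sketched in a few lines. The toy simulation below is purely illustrative (it is not UNC's code, and the arrays simply stand in for real optics): it combines a real scene and a virtual overlay additively, which is why this kind of display can brighten the world but never truly darken it.

```python
import numpy as np

def see_through_composite(real_scene, virtual_image, brightness=1.0):
    """Toy model of an optical see-through display.

    real_scene    -- light already reaching the eye from the room (HxWx3, values in [0, 1])
    virtual_image -- light emitted by the display, same shape
    brightness    -- relative brightness of the display

    The display can only ADD light to what passes through it, so the
    combination is a clamped sum; it cannot render true black over a
    bright background.
    """
    combined = real_scene + brightness * virtual_image
    return np.clip(combined, 0.0, 1.0)

# Example: a bright white wall with a small virtual icon overlaid on it.
wall = np.full((4, 4, 3), 0.8)       # bright real-world background
icon = np.zeros((4, 4, 3))
icon[1:3, 1:3] = 0.3                 # small virtual patch
print(see_through_composite(wall, icon)[2, 2])   # patch pixels get brighter, never darker
```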
Head-mounted displays face two main challenges, according to Fuchs: design and field of view. The design should be comfortable yet functional, and the display should project the image over most of the user's field of vision. Otherwise, the image is cut off when the viewer looks too far in one direction.
“In a lot of displays, getting the wide field of view is a really big problem, but other things are easy,” Maimone says. “In this design, getting the wide field of view was quite easy, but some of the other things like resolution are a challenge.”
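A rough sense of why field of view is so central comes from simple geometry: the angle a screen subtends grows as it moves closer to the eye. The numbers below are illustrative only, not specifications of the UNC prototype.

```python
import math

def horizontal_fov_deg(screen_width_mm, eye_distance_mm):
    """Angle subtended by a flat screen centered in front of the eye."""
    return 2 * math.degrees(math.atan((screen_width_mm / 2) / eye_distance_mm))

# A phone-sized screen held at arm's length covers only a narrow angle...
print(round(horizontal_fov_deg(70, 400)))   # about 10 degrees
# ...while the same screen a couple of centimeters from the eye fills most of the view.
print(round(horizontal_fov_deg(70, 20)))    # roughly 120 degrees
```

The catch is that an unaided eye cannot focus on a screen a couple of centimeters away, which is the kind of constraint near-eye designs such as the pinlight display have to work around.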
While the current model could be used for simple notifications, its resolution is too low to make realistic recreations of people, Maimone says. It could show icons indicating a call or email, but a person would look slightly distorted.
Currently, the subject must stay in a room outfitted with the cameras and scanning components needed to record the three-dimensional image, but Fuchs wants to make these devices much more accessible.
“The dream is to do this kind of thing in normal rooms,” Fuchs says. “Then, the display is just something built into your glasses.” Imagine a smartphone with 3-D capabilities: with the click of a button, the image would appear before you. This is the way Fuchs dreams of these headsets working, but making that dream a reality is a challenge that has already consumed decades of work.
Increasing communication in health care
Doctors at UNC have already used these headsets for a few applications, according to Fuchs. Etta Pisano, the chief of Breast Imaging at UNC from 1989 to 2005, used a different model to guide tumor-removal procedures with ultrasound. The headset merged real and virtual objects, placing an image of the ultrasound “inside” the patient and directly in front of her. Pisano then used the image as a guide for her needle during the procedure.
The headset worked by combining the two-dimensional ultrasound image with a live video of what the physician could see. “The camera view had to be at the same location as her eyes so that hand-eye coordination could be maintained,” Fuchs says.
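The article does not spell out the math, but the registration Fuchs describes boils down to a chain of coordinate transforms: each ultrasound pixel is mapped into the probe's frame, from there into the tracked operating space, and finally into the head-mounted camera whose viewpoint matches the physician's eyes. The sketch below is a simplified illustration with placeholder names and matrices, not the actual UNC software.

```python
import numpy as np

def project_ultrasound_pixel(px, py, ultrasound_to_probe, probe_to_world,
                             world_to_camera, camera_intrinsics):
    """Map one ultrasound image pixel into the head-mounted camera's image.

    The *_to_* arguments are 4x4 homogeneous transforms supplied by a tracking
    system (ultrasound_to_probe also folds in the pixel-to-millimeter scale);
    camera_intrinsics is a 3x3 pinhole camera matrix. All of them are
    placeholders for whatever the real system provides.
    """
    # Ultrasound pixels lie on a plane in the probe's coordinate frame.
    point_h = np.array([px, py, 0.0, 1.0])
    world_point = probe_to_world @ ultrasound_to_probe @ point_h
    camera_point = world_to_camera @ world_point
    # Perspective projection into the camera image.
    u, v, w = camera_intrinsics @ camera_point[:3]
    return u / w, v / w
```

Because the camera sits at (or is optically folded to) the physician's eye position, the projected slice, the needle, and the hands all appear in the same visual frame, which is the hand-eye coordination Fuchs mentions.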
This application could also get by with a narrow field of view, because the ultrasound image only needed to cover a few cubic inches. “She could see her assistants, she could see the rest of the patient, and that made the procedure much easier to do,” Fuchs says.
A new kind of meeting space
Another technology researchers in the Department of Computer Science continue to develop is a telepresence room: a three-dimensional projection of a remote room, displayed on a wall in the viewer's environment, that the viewer can observe from different perspectives.
Maimone and Fuchs built the telepresence room with Microsoft's Kinect technology, the webcam-style motion sensors used with Xbox games. About 10 sensors can scan a small room and produce a model of it in real time. When that model is sent to the display, a viewer sees the room from his or her own perspective.
“It looks like there’s a shared hole that’s cut in the wall,” Maimone says. “In Skype, if there was something behind you that your head was obstructing, there’s nothing I could do to see it. But in this type of display, I could just move my head and see around you, like I could do in real life.”
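The “shared hole in the wall” effect comes from re-rendering the captured 3-D model for wherever the viewer's head happens to be. The stripped-down sketch below reprojects a single depth image to a new viewpoint; the real system fuses roughly ten Kinect streams and does this in real time, and the function and variable names here are placeholders rather than the project's actual code.

```python
import numpy as np

def reproject_depth_image(depth, intrinsics, capture_to_viewer):
    """Reproject one sensor's depth image into the viewer's current viewpoint.

    depth             -- HxW array of depths in meters from a single sensor
    intrinsics        -- 3x3 camera matrix (shared here by sensor and virtual
                         view, a simplification of the real setup)
    capture_to_viewer -- 4x4 transform from the sensor's frame to the viewer's
                         head pose, e.g. from head tracking
    """
    h, w = depth.shape
    fx, fy = intrinsics[0, 0], intrinsics[1, 1]
    cx, cy = intrinsics[0, 2], intrinsics[1, 2]

    # Unproject every pixel into a 3-D point in the sensor's coordinate frame.
    us, vs = np.meshgrid(np.arange(w), np.arange(h))
    xs = (us - cx) * depth / fx
    ys = (vs - cy) * depth / fy
    points = np.stack([xs, ys, depth, np.ones_like(depth)], axis=-1).reshape(-1, 4)

    # Move the points into the viewer's frame and project them back to pixels.
    viewer_points = points @ capture_to_viewer.T
    x, y, z = viewer_points[:, 0], viewer_points[:, 1], viewer_points[:, 2]
    u_new = fx * x / z + cx
    v_new = fy * y / z + cy
    return u_new, v_new, z  # pixel coordinates and depths in the new view
```

As the viewer moves, capture_to_viewer changes, and points that were hidden behind someone's head project to different pixels, which is the look-around effect Maimone describes.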
The viewer does not have to wear special glasses to see the images, which means that multiple people can view the display at once. The tradeoff is that the remote participants remain stuck on the other side of the wall.
“It’s no better than a glass window into the adjacent room,” Fuchs says.
A bright future
Major players in the tech world continue to invest in augmented reality development. Google invested $500 million in the company Magic Leap, whose headset display produces clear synthetic images. Microsoft stepped into the game when it announced the HoloLens earlier this year. For good reason, many in the virtual reality field are optimistic.
“The sensors and displays have gotten a lot better performance and a lot cheaper, so the industry is getting excited about it,” Maimone says. “Certainly, in the last year or two, it’s just exploded.”
Work on these devices “is very much going on” at UNC today, Fuchs says. The IEEE Virtual Reality Conference, the major event in the field, accepted a paper from UNC for presentation at this year’s meeting. This means more exposure for UNC’s model and, hopefully, some investment.
Research in virtual reality looks promising, and UNC continues to hold its place on the cutting edge of the field. If research progresses at the same rapid rate, augmented reality could be a part of everyday life in just a few years.