If you’re wearing a pair of shades and looking at your cell phone, your sunglasses reflect the cell phone’s screen. That image in the lens of your shades is small and none too clear, but computer scientist Jan-Michael Frahm and his team can get enough data from it to read what you’re typing on a touchscreen.
Sounds high tech, but it doesn’t take a fancy camera: the guy sitting across from you on the bus can capture that reflected image with his cell phone camera. A DSLR camera, the kind any professional photographer has, can do it from forty feet. And if a camera’s looking over your shoulder rather than at your shades, it can get a spy-worthy shot of your phone from two hundred feet away.
See this in action on UNC campus (the people being filmed are participants in the project, not unwitting students):
Frahm and the team wrote software that takes the video and looks for the shape of a cell phone screen in the frame. It stabilizes and aligns the image of the phone, then finds the pop-out effect of a letter key being touched. (You can thwart this with a program like Swype that lets you slide your finger across keys instead of picking them out.)
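The team’s actual software isn’t public, but the pipeline described above — find the screen in the frame, stabilize it, then watch for the pop-out effect of a pressed key — can be illustrated with a toy sketch. Everything here is invented for illustration (the function names, the thresholds, the synthetic frames); it only mimics the steps on fake data, not the real computer-vision system:

```python
import numpy as np

def find_screen(frame, thresh=200):
    """Step 1 (toy version): locate the bright screen region in a
    grayscale frame by finding the bounding box of bright pixels."""
    ys, xs = np.where(frame >= thresh)
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

def align_screen(frame, bounds):
    """Step 2 (toy version): 'stabilize' by cropping the frame to the
    detected screen region, so frames can be compared in place."""
    t, b, l, r = bounds
    return frame[t:b, l:r]

def detect_keypress(before, after, delta=30):
    """Step 3 (toy version): find the centroid of the region that changed
    most between two aligned frames -- a stand-in for spotting the
    magnified pop-out preview of a touched key."""
    diff = np.abs(after.astype(int) - before.astype(int))
    ys, xs = np.where(diff >= delta)
    if len(ys) == 0:
        return None
    return int(ys.mean()), int(xs.mean())

# Synthetic demo: a dark 100x100 frame containing a bright "screen".
frame = np.zeros((100, 100), dtype=np.uint8)
frame[30:70, 20:80] = 220                      # the reflected screen
bounds = find_screen(frame)
screen = align_screen(frame, bounds)

# Simulate a key pop-out: a brighter blob appears in the next frame.
frame2 = frame.copy()
frame2[55:60, 40:46] = 255
screen2 = align_screen(frame2, find_screen(frame2))
print(bounds)                        # screen bounding box
print(detect_keypress(screen, screen2))  # where the "key" popped out
```

A real system would have to handle perspective distortion, motion blur, and a reflection only a few pixels wide, which is where the hard research lies; this sketch only shows the shape of the idea.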
The researchers wanted to show that smartphone users need to be cautious about where and how they use their phones, says computer science student Rahul Raguram. But after presenting their work at a conference in October 2011, the researchers heard from other scientists who plan to work with them on medical applications of the object-tracking and text-recognition algorithms. The details of those projects are confidential for now, as are the details of how the software works.
“There’s been a lot of interest from people who’d like the source code of our system,” Raguram says. “Needless to say, without knowing their intentions, we’re unlikely to release it freely.”