Daniel Kish teaches blind people to navigate using echolocation. By making clicking sounds and listening to the echoes, Kish builds an approximation of the world around him. But what about the reverse? Researchers have used cameras and light sensors to recover sound from visual data.
To understand how this is possible, you must first understand sound. Sound is simply a pattern of vibration. A slamming door sends waves of vibration through the air and through the walls of the building itself. A bird call is heard through movement in the air. Once the waves hit your eardrum, your brain interprets them as sound. If these movements can be seen, however, as subtle as they may be, they can be turned back into sound.
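To make that idea concrete, here is a small sketch of recovering a tone from a series of tiny displacement measurements using a naive discrete Fourier transform. The sample rate, frequency, and displacement scale are all invented for illustration:

```python
import math

# Invented example: a surface vibrating at 440 Hz (an A note),
# sampled as tiny displacement measurements 2000 times per second.
SAMPLE_RATE = 2000
FREQ = 440
N = 400  # 0.2 seconds of samples

displacement = [1e-6 * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE)
                for n in range(N)]  # micrometre-scale motion

def dominant_frequency(samples, rate):
    """Naive DFT: return the frequency whose bin holds the most energy."""
    n = len(samples)
    best_k, best_power = 0, 0.0
    for k in range(1, n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * k * i / n)
                 for i, s in enumerate(samples))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k * rate / n

print(dominant_frequency(displacement, SAMPLE_RATE))  # → 440.0
```

Even though each displacement is only a micrometre, the periodic pattern is enough to identify the original tone.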
A pane of glass vibrates readily along with the sounds near it, and a laser bounced off its surface returns carrying the same vibration. Spies have used that trick for decades to listen in on conversations. In fact, with instructions available online, even you can try it. Though the usual laws regarding spying certainly still apply!
A research group from MIT, Microsoft, and Adobe skipped the laser entirely. Using high-speed video alone, they were able to measure the tiny vibrations of everyday objects, so the vibrations are interpreted directly by the computer from the footage. The approach is slightly less effective than a laser, but the range of usable objects is much wider. It seems not much can be done to hide your conversations these days!
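As a toy illustration only (the researchers' actual method uses far more sophisticated motion analysis than this), the core idea of turning per-frame pixel changes back into a waveform might be sketched as follows. The frame rate, patch size, and wobble amplitude are all invented for the example:

```python
import math

FPS = 2400    # high-speed camera frame rate (invented for this sketch)
FREQ = 300    # tone driving the vibration, in Hz
FRAMES = 240

# Simulated video: each "frame" is a 4x4 patch whose brightness
# wobbles very slightly with the sound-driven vibration.
def make_frame(t):
    wobble = 0.5 * math.sin(2 * math.pi * FREQ * t / FPS)
    return [[128 + wobble for _ in range(4)] for _ in range(4)]

frames = [make_frame(t) for t in range(FRAMES)]

# Recovery: mean patch brightness per frame, with its average removed,
# serves as a crude proxy for the audio waveform.
brightness = [sum(sum(row) for row in f) / 16 for f in frames]
mean = sum(brightness) / len(brightness)
signal = [b - mean for b in brightness]
```

The recovered `signal` oscillates at the same 300 Hz as the simulated sound; in the real research the per-pixel changes are far subtler and require careful filtering, but the principle of reading sound out of frame-to-frame variation is the same.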