This mind-boggling interface design from MIT Media Lab's Fluid Interfaces Group essentially adds another layer of interactivity over your physical life. What I mean by that is: right now, in real life, you look at your desk and see a bunch of objects. With the group's "Smarter Objects" system, you pick up a tablet and look at the objects on your desk "through" it, as if through a window, and the tablet's screen shows you virtual overlays on the very real objects on your desk. You can alter the functionality of these Wi-Fi-enabled "smarter objects" on the screen, then go back to manipulating them in the real world. Tricky to explain in print, but you'll grasp it right away by watching their demo video:
The work was done by researchers Valentin Heun, Shunichi Kasahara, and Pattie Maes, and as they point out, none of the things in the demo video are the result of effects added in post; everything you see is working and happening in real time.
One commenter on the video suggested this interface design be adapted to Google Glass, but I think the tablet is a necessary intermediary, as you can tap, drag and slide your fingers across it. Your thoughts?
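The article doesn't spell out the plumbing, but conceptually each smarter object only needs to accept state changes over the network once the tablet has recognized it. As a purely hypothetical sketch (the researchers' actual protocol isn't described here), imagine each object exposing a tiny HTTP API on the local Wi-Fi network; the device address and the /settings endpoint below are illustrative assumptions, not anything from the demo:

```python
import json
import urllib.request

# Hypothetical: assume each "smarter object" runs a small HTTP server
# on the local Wi-Fi network. The address and endpoint are made up
# for illustration; the actual Smarter Objects protocol isn't public here.
RADIO_ADDR = "http://192.168.1.42"  # assumed LAN address of a smart radio


def push_setting(device_url: str, setting: str, value) -> None:
    """Send one changed setting to a Wi-Fi-enabled object."""
    payload = json.dumps({setting: value}).encode("utf-8")
    req = urllib.request.Request(
        f"{device_url}/settings",  # assumed endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        resp.read()  # a 200 response means the object accepted the change


# Example: the user drags a virtual "preset" overlay on the tablet screen,
# and the app translates that gesture into a state change on the radio.
if __name__ == "__main__":
    push_setting(RADIO_ADDR, "station_preset", "90.9 FM")
    push_setting(RADIO_ADDR, "volume", 0.6)
```

The hard part, of course, sits above this layer: recognizing which object the camera is looking at and keeping the overlay registered to it in real time, which is exactly what the demo shows working.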
Comments
The only tricky part for Google Glass, as of today, is touch, and that may require an extra Kinect- or Leap Motion-like sensor.
Maybe we'll see it in the next iteration of Google Glass?