If you haven’t seen Google Glasses yet, there’s a good video on YouTube that demonstrates the technology. Basically, Google Glasses converges your smartphone with your eyeglasses. The idea is that everyday people can benefit from a heads-up display similar to what fighter pilots use in their cockpits. Officially, the initiative is called Project Glass, and the technology is referred to as an augmented reality head-mounted display (HMD).
Eric Jackson, a contributor at Forbes, recently penned a good overview of the technology. Jackson says “The people who laugh at these investments characterize them as Google having a lack of focus.” I am not one to laugh at this initiative. As President Dwight Eisenhower once said, “Plans are worthless, but planning is everything.” I think you have to take Google’s investments in Glasses in the context of planning.
One of the key features of Glasses is a “Siri”-like interface (I guess the prototype with the QWERTY keyboard on the side of the glasses didn’t look so sleek?). With the voice input, you can use applications like navigation, phone, and SMS entirely through voice commands. For output, the Glasses somehow create the heads-up display (in the prototype, the HUD appears to show up only in the right eye). I can’t imagine how long the battery lasts in these things, but one step at a time.
Regardless of whether Project Glass leads to a commercial product, it is certainly going to create new intellectual property for Google. Hands-free phone use is the future, and while Glasses may not be the end result, some kind of heads-up technology will be. Siri was a milestone in hands-free input, and Glasses or some successor could be the comparable milestone for output. Certainly, hands-free navigation in a car would be more effective if arrows were superimposed on your windshield, rather than forcing you to strain your eyes on some tiny LCD screen mounted under your dashboard.