Last week, the Discovery Machine team got a live demonstration of Google Glass from “Glass explorer” Steve Brady. We got a firsthand look at its sleek user interface and tried out some of its features, like taking pictures and using the New York Times app.
While Google isn’t yet planning to bring Glass into the world of 3D and virtual reality, Glass is an impressive first step toward augmented reality: enhancing real-world elements with computer-generated sensory input. The potential benefits of augmented reality are far-reaching; everyone from pedestrians looking for directions to doctors performing brain surgery could benefit from having a computer-generated display of information in their field of vision.
This technology also holds great potential for Navy training. The Office of Naval Research has already begun an augmented-reality effort to develop a system that lets students train with simulated images superimposed on real-world landscapes. Further development of this technology could greatly decrease training costs, allowing students to train from anywhere with a fraction of the support required for live training.

At Discovery Machine, we are excited to think about how our AI behaviors may one day be used in augmented reality training. We currently develop our behaviors for use within the virtual reality simulations Joint Semi-Automated Forces (JSAF) and Virtual Battlespace 2 (VBS2). On the spectrum from fully computer-generated environments like JSAF and VBS2 to live training, augmented reality training sits closer to the live end. What new factors might developers need to consider when building AI for this emerging domain?