With AR, a House Becomes a Complete Smart Home
In Plato’s Allegory of the Cave, the prominent Greek philosopher asks us to imagine a group of prisoners who live their entire lives in a cave. All they know of the real world comes from shadows cast on the cave walls. Eventually, one prisoner escapes and realises that his or her previous view of existence was based on a low-resolution, flat perception of how the world actually works.
A somewhat pretentious way of beginning an article on augmented reality? Perhaps. But the general idea is the same: at the moment, in the pre-AR world, our visual outlook contains only the details of the things around us that we can see on the surface. AR, a technology discussed more and more in recent years, promises to let us go further.
Imagine walking down the street and having landmarks, store opening hours, Uber ride details and other useful background information overlaid on your everyday view. Or roaming around your home and being able to see, for instance, the live power draw of a power strip just by looking at it. Or how much battery life is left on your smoke detector. Or the WiFi details of your router. Or any number of other valuable “at a glance” specifics you might want to know.
Like the shift in perception described in Plato’s cave, this won’t just be a “nice to have” supplement to the way we look at the world. Augmented reality will, its biggest boosters assert, fundamentally alter our perception of real, physical places, forever changing the way we see and experience reality and the possibilities the physical world offers.
The future of AR interfaces?
We’re not at that point yet. AR is still mostly about games and, if we’re lucky, the chance to place virtual Ikea furniture in our rooms to show us how much better our lives might be with a minimalist Scandinavian bookshelf or a handwoven rug. There’s still much progress to be made, and plenty of infrastructure to be laid, before the world around us can be remade in AR’s image.
One group working hard to realise this vision is the Future Interfaces Group at Carnegie Mellon University. The group has previously created futuristic technology ranging from conductive paint that transforms walls into giant touchpads to a software update for smartwatches that lets them know exactly what your hands are doing and respond accordingly. In other words, FIG envisions the way we’ll be interfacing with technology and the world around us in the future.
In its most recent work, the group has developed something called LightAnchors, a method for spatially anchoring data in augmented reality. In essence, it’s a prototype tagging system that precisely positions labels on top of everyday scenes, marking up the real world like a neat, user-friendly diagram. That’s essential. After all, to “augment” means to make something better by adding to it, not to clutter it with indistinguishable, messy banner ads and popups like a 1998 website. Augmented reality needs something like this if it’s ever going to live up to its promise.
“LightAnchors is sort of the AR counterpart of barcodes or QR Codes, which are everywhere,” said Chris Harrison, director of Carnegie Mellon’s Future Interfaces Group. “Obviously, barcodes don’t do a whole lot other than offering a unique ID for looking up price [and other things like that.] LightAnchors can be so much more, allowing devices to not only say what and who they are, but also share live information and even interfaces. Being able to embed information right into the world is very powerful.”
How LightAnchors work
LightAnchors work by looking for point light sources blinked by a microprocessor. Many devices already contain microprocessors that drive things like status lights; according to the Carnegie Mellon researchers, these could be made LightAnchor-enabled with nothing more than a firmware update. For an object that doesn’t already feature such a light, a low-cost microcontroller could be connected to a simple LED for just a couple of dollars.
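To make the blinking idea concrete, here is a minimal sketch of how a device might turn a sensor reading into an on/off blink pattern for its LED. The framing details (an eight-bit payload preceded by a fixed preamble, and the function name itself) are illustrative assumptions, not the actual LightAnchors protocol.

```python
def encode_payload(value: int, bits: int = 8, preamble=(1, 1, 1, 0)) -> list:
    """Turn a small integer into a blink pattern: a fixed preamble
    (so a camera can find the signal) followed by the value's bits,
    most significant bit first. 1 = LED on, 0 = LED off.

    Note: the preamble and frame layout here are assumptions for
    illustration; the real system's modulation scheme may differ."""
    if not 0 <= value < 2 ** bits:
        raise ValueError(f"value must fit in {bits} bits")
    frame = [(value >> i) & 1 for i in range(bits - 1, -1, -1)]
    return list(preamble) + frame

# e.g. a glue gun broadcasting a temperature reading of 180 degrees:
print(encode_payload(180))  # → [1, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0]
```

On a real microcontroller, the firmware would simply loop over this list, switching the LED and pausing for one symbol period per entry.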
As part of their proof of concept, the researchers demonstrated how a glue gun could be made to broadcast its live temperature, or a ride share’s headlights made to broadcast a unique ID to help passengers find the right vehicle.
Once a blinking light has been found, LightAnchors then searches video frames for the precise spot to position a label, looking for bright pixels surrounded by darker ones.
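A rough sketch of that spatial test, in NumPy, might look like the following. This only covers the “bright pixel surrounded by darker neighbours” check described above; the real pipeline would also verify that a candidate pixel blinks with the expected temporal pattern across frames, and the `margin` threshold and function names here are assumptions.

```python
import numpy as np

def candidate_anchors(gray: np.ndarray, margin: int = 50) -> list:
    """Return (row, col) positions of pixels that are at least `margin`
    intensity levels brighter than every one of their 8 neighbours,
    in a single greyscale frame. A naive O(h*w) scan for clarity."""
    h, w = gray.shape
    hits = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            centre = int(gray[r, c])
            patch = gray[r - 1:r + 2, c - 1:c + 2].astype(int).ravel()
            neighbours = np.delete(patch, 4)  # drop the centre pixel
            if centre - neighbours.max() >= margin:
                hits.append((r, c))
    return hits

# A tiny synthetic frame: one bright "LED" pixel on a dark background.
frame = np.zeros((5, 5), dtype=np.uint8)
frame[2, 2] = 255
print(candidate_anchors(frame))  # → [(2, 2)]
```

A production version would vectorise this scan (or use an image-processing library) and track surviving candidates over time, but the core idea is the same local brightness test.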
Right now, it’s still an idea that has yet to be commercialised. Done right, however, this could be one way to help users navigate and access the dense ecosystems of smart devices appearing with increasing frequency in the real world. “Currently, there are no low-cost and aesthetically pleasing ways to give appliances a presence in the AR world,” said Karan Ahuja, a researcher on the project. “AprilTags or QR codes are low-cost, but visually obtrusive.”
Could LightAnchors be the answer? It’s definitely an exciting concept to explore. Suddenly we’re feeling more than ready for AR glasses to take off in a big way!