
Google’s lens on the world: New features harness augmented reality

A red fox could guide you to where you need to go with Google Lens and Google Maps.

You thought the camera in your smartphone was for selfies and shots of the sunset. But it turns out we use it much of the time to take photos of mundane things like recipes, receipts and passwords. The new Google Lens features will change the way you see the world around you and the way you use your camera.

What if you could cut snippets of text from signs, menus and letters in the real world and paste them into a document, email or text message, simply by pointing your camera at them?

That’s one of the new augmented reality features Google unveiled this week at its Google I/O developers conference in California.

Google first introduced Lens at the same conference last year: an artificial intelligence-powered technology that uses your smartphone’s camera to detect objects and suggest actions based on what it sees.

The service is not a stand-alone app, but is built into Google Photos, Google Assistant and Google Translate, which is adept at quickly translating text captured by your phone’s camera. Google Lens debuted as a standard feature on Google’s own Pixel phones, which are yet to go on sale in New Zealand.

However, Google said this week that in the coming months it will be available on devices from other phone makers, including Nokia, Motorola, LG, Sony and Asus.

You'll be able to use Google Lens to take snippets of text from signs, recipes or receipts and capture them in a usable form on your phone.

Point and learn

Lens has to date been used for fairly narrow tasks – point your camera at some flowers and it will likely tell you the species by trawling Google’s massive image database for a match.
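Google hasn’t published how Lens itself is built, but its publicly available ML Kit library exposes the same point-and-identify idea to any Android developer. A minimal Kotlin sketch of on-device image labelling – a stand-in for Lens, not its actual implementation – might look like this:

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

// Ask an on-device model what is in the photo, e.g. which flower it shows.
fun labelPhoto(bitmap: Bitmap) {
    val image = InputImage.fromBitmap(bitmap, 0) // 0 = no rotation
    val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)
    labeler.process(image)
        .addOnSuccessListener { labels ->
            // Each label pairs a description with a confidence score.
            labels.forEach { println("${it.text}: ${"%.2f".format(it.confidence)}") }
        }
        .addOnFailureListener { e -> println("Labelling failed: $e") }
}
```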

Point your phone at a popular landmark and Lens will give you more information about it – and when the next tour will start. Samsung offers a similar service, Bixby Vision, on its Galaxy smartphones.

The new features take Lens a step towards more useful applications. In addition to the cut-and-paste feature, called Smart Text Selection, you will also be able to search within text your camera has scanned – so you can jump out to Wikipedia or a dictionary website for more information.
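Google hasn’t detailed Smart Text Selection’s internals either, but the capture-then-act pattern it embodies can be sketched with ML Kit’s text recogniser plus two standard Android building blocks – the clipboard and a web-search intent. Everything here is illustrative, not Lens’s actual code:

```kotlin
import android.app.SearchManager
import android.content.ClipData
import android.content.ClipboardManager
import android.content.Context
import android.content.Intent
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition
import com.google.mlkit.vision.text.latin.TextRecognizerOptions

// Recognise text in a camera frame, copy it, then offer a web search on it.
fun captureText(context: Context, frame: Bitmap) {
    val recognizer = TextRecognition.getClient(TextRecognizerOptions.DEFAULT_OPTIONS)
    recognizer.process(InputImage.fromBitmap(frame, 0))
        .addOnSuccessListener { result ->
            val snippet = result.text // the full recognised string
            if (snippet.isBlank()) return@addOnSuccessListener

            // "Cut and paste from reality": put the text on the clipboard...
            val clipboard =
                context.getSystemService(Context.CLIPBOARD_SERVICE) as ClipboardManager
            clipboard.setPrimaryClip(ClipData.newPlainText("scanned text", snippet))

            // ...and hand it off to a web search, dictionary, Wikipedia, etc.
            val search = Intent(Intent.ACTION_WEB_SEARCH)
                .putExtra(SearchManager.QUERY, snippet)
                .addFlags(Intent.FLAG_ACTIVITY_NEW_TASK)
            context.startActivity(search)
        }
}
```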


Another feature matches clothing and the style of objects in the camera’s view. If it’s a designer lamp you like the look of, Google Lens will serve up examples of other lamps and pieces of furniture in a similar style, and let you know where to buy them.
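Google hasn’t said how this style matching ranks products, but the textbook approach is nearest-neighbour search over image “embeddings” – feature vectors produced by a vision model. Assuming such a model exists (it isn’t shown here), the ranking step itself is simple:

```kotlin
import kotlin.math.sqrt

// Hypothetical catalogue entry: a product plus a feature vector ("embedding")
// produced by some image model; the model itself is assumed, not shown.
data class Product(val name: String, val embedding: FloatArray)

// Cosine similarity: 1.0 means the vectors point the same way, 0.0 unrelated.
fun cosine(a: FloatArray, b: FloatArray): Float {
    var dot = 0f; var na = 0f; var nb = 0f
    for (i in a.indices) {
        dot += a[i] * b[i]; na += a[i] * a[i]; nb += b[i] * b[i]
    }
    return dot / (sqrt(na) * sqrt(nb))
}

// Rank the catalogue by similarity to the photographed lamp's embedding.
fun similarStyles(query: FloatArray, catalogue: List<Product>, topK: Int = 5) =
    catalogue.sortedByDescending { cosine(query, it.embedding) }.take(topK)
```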

In perhaps the greatest indication of the way forward for AI-powered camera features, Lens will now also let you simply pan around with your phone and get real-time information about what’s in front of you – place names, objects, opening hours, reviews of the films listed above a cinema’s box office.

It could fundamentally change how we use the camera, forgoing the click of a single shot in favour of a continuous feed of information about what we are seeing.
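That continuous feed is something developers can already approximate: ML Kit’s object detector offers a dedicated streaming mode that trades a little accuracy for the low latency a live viewfinder needs. A sketch of the idea, standing in for whatever Lens actually does under the hood:

```kotlin
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.objects.ObjectDetection
import com.google.mlkit.vision.objects.defaults.ObjectDetectorOptions

// STREAM_MODE trades some accuracy for the low latency a live viewfinder
// needs, and tracks objects across frames rather than re-detecting each time.
val detector = ObjectDetection.getClient(
    ObjectDetectorOptions.Builder()
        .setDetectorMode(ObjectDetectorOptions.STREAM_MODE)
        .enableClassification()
        .build()
)

// Called for every camera frame, e.g. from a CameraX ImageAnalysis analyzer.
fun onFrame(image: InputImage) {
    detector.process(image).addOnSuccessListener { objects ->
        objects.forEach { obj ->
            // trackingId stays stable while the same object remains in view.
            println("object ${obj.trackingId} at ${obj.boundingBox}")
        }
    }
}
```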

Real time object detection gives you instant feedback.

Early chapter in AR

Clay Bavor, Google’s vice-president of virtual and augmented reality, says that with three billion camera-equipped smartphones in use, it makes sense to harness a technology many people are already comfortable using.

“Look at the difference in how many people use computers and what you can do with them from punch cards to the Unix command line to the mouse and keyboard and the graphical user interface to the touchscreen,” he says.

“Every time there has been a leap like that, you could do vastly more things. We see virtual and augmented reality as a progression of that, hence the Google-wide investment in it.”

Google Lens is in the early stages of what Bavor calls a “decade-long bet” on the future of computing, which also includes the company’s efforts in virtual reality, such as its Daydream VR headset.

“We’ve just had enough breakthroughs in displays and computer vision for things like tracking objects. We’ve had these initial breakthroughs to let us start to do useful things for users but we’ve a long road ahead of us,” adds Bavor.

Another popular smartphone feature – Google Maps navigation, guided by the phone’s GPS chip – is an obvious candidate to be augmented with the camera and artificial intelligence.

Navigating city streets could soon be a lot easier with augmented reality enhancing Google Maps.

Merging maps and reality

One of the most popular features shown off by Google this week was a visual update to Google Maps that replaces the little blue dot indicating your position with a live camera view of the street around you, overlaid with helpful arrows pointing you in the right direction.

“Let me paint a familiar picture,” said Google executive Aparna Chennapragada, when she demoed the feature during the Google I/O keynote on Tuesday.

“You exit the subway, you’re already running late for an appointment... then your phone says ‘Head south on Market Street’.

“One problem: You have no idea which way is south. So you look down at the phone, you’re looking at that blue dot on the map, and you’re starting to walk to see if it’s moving in the same direction. It’s not. You’re turning around. We’ve all been there.”

Indeed we have. Arrows flashed up on your phone screen over the street in front of you could cut through the disorientation of navigating unfamiliar streets, and Google hinted at a tantalizing further addition when it included a red fox as a guide for you to follow through the streets.
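The disorientation Chennapragada describes boils down to one missing number: the direction the phone is actually facing. Google hasn’t said how its AR arrows are anchored to the street, but the raw compass heading the feature improves on is available to any Android app from the rotation-vector sensor – a minimal sketch:

```kotlin
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Reads the compass heading (azimuth): 0 degrees = north, 180 = south.
// Register this with SensorManager for TYPE_ROTATION_VECTOR events to use it.
class HeadingListener : SensorEventListener {
    private val rotation = FloatArray(9)
    private val orientation = FloatArray(3)

    override fun onSensorChanged(event: SensorEvent) {
        if (event.sensor.type != Sensor.TYPE_ROTATION_VECTOR) return
        SensorManager.getRotationMatrixFromVector(rotation, event.values)
        SensorManager.getOrientation(rotation, orientation)
        val azimuth = (Math.toDegrees(orientation[0].toDouble()) + 360) % 360
        println("Facing %.0f degrees".format(azimuth)) // 180 would be south
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
}
```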

Exactly when that addition to Google Maps will debut wasn’t revealed, nor whether we will be able to summon our own virtual tour guide. But the clear signal from Google is that it sees the smartphone camera as integral to its push beyond text and voice searches to a more immediate way of exploring the world.

Peter Griffin attended Google I/O in Mountain View, California as a guest of Google.