It feels like Google has cornered the market on "point your camera at it to learn more" technology for some time now. First came its Translate app, which let you point your smartphone's camera at signs in foreign languages and receive translations on the fly. Now there's Lens, which expands this technology to give you plenty of information about the objects in photos you've taken (or are about to take).
You probably don’t want to fill your phone with multiple (or extra) search apps or browsers, so I devised a series of experiments to see how the two visual technologies stack up.
Round One: everyday objects
I went around my room, gathered up a random assortment of things, and placed them in a well-lit location on my desk. I then took photos of each object from roughly the same distance and angle—or, at least, from a vantage point that should be good enough for each app to have a pretty good shot at identifying the item.
To round out the category, here’s a video game that I was very excited about and have yet to play much of:
Moving on to a more practical use of these camera technologies, here’s how each app treated a simple business card. I’ve blurred out some of the key details in the interests of privacy, but I’ll talk about how each app worked in the captions.
Round Two: monuments
Since Lifehacker doesn’t give me a travel budget for experiments like this, and “scanning landmarks and monuments to learn more” is one of each app’s key features, I had to improvise. I pulled up photos of monuments and scanned those with each app—the apps can also scan photos you’ve already taken in your camera roll—to see what, if anything, Google Lens and Bing Visual Search recognize.
Anyone feel like climbing a giant mountain? I hear the cables aren’t too bad:
And, finally, the iconic tourist trap of the San Francisco Bay Area—no, not In-N-Out Burger:
Round Three: fashion
Both Google Lens and Bing’s Visual Search claim to be able to identify clothing you or your friends are wearing and suggest matching items—or the item itself—you can buy. Let’s see how well that works with two pieces from the David Murphy collection.
All hail Santa, First of His Name.
The verdict: Google Lens (mostly) gets the job done
I generally found that Google Lens was a more useful tool for analyzing the contents of whatever’s in your camera at any given moment. Though it wasn’t perfect—struggling a smidge with landmarks and not being all that interesting with fashion—the app crushes it on text recognition and practicality (especially when scanning contact information). Bing is good at helping you find images that are similar to the composition of your photo, but it’s not as good as Google Lens at figuring out specific objects, and I think the latter’s text-recognition capabilities are what take it over the top.
Though most of us will probably install (or pull up) Google Lens as an afterthought—something to play with on vacation or to impress a friend at a party—it’s worth moving it from the back of your mind a little more toward the middle. I doubt I’m going to walk down the street and have Google feed me constant information about what I’m looking at, but the app certainly has its uses. Seeing how accurate it is at identifying everyday objects, I might play around with it a little more in my everyday travels to see what else it can do.