OK Google, Get Out of My Face
Ubiquitous computing is starting to get (really) real, and I’m kind of afraid of it.
by Rachel Metz October 4, 2017
Today I got a glimpse of the future. It’s overwhelming. And it’s amazing. But it’s also very scary.
I was at a press event Wednesday in San Francisco, where Google introduced its newest slate of products. There were a lot of updates to existing products: shiny Pixel 2 and Pixel 2 XL smartphones, a new version of the Daydream virtual-reality headset, smaller and larger versions of Google’s Home personal-assistant speaker, and a convertible Pixelbook laptop. There were also two less expected items: a square life-logging camera called Google Clips and a pair of wireless earbuds called Pixel Buds, aimed at real-time (or, more realistically, close-to-real-time) translation when used with one of the company’s Pixel smartphones.
It was a lot of stuff, much of it in a very appealing shade of gray, with lovely fabric covering gadget innards. More than the visuals, though, what struck me was the message: love it or hate it, this is the future, and the future means computers are everywhere, accessible in all kinds of ways, for all kinds of things.
This ubiquitous, or pervasive, computing isn’t a new idea—we’ve long seen it in the march of “smart” devices like TV sets, thermostats, coffee pots, watches, cars, and so on. Many of us now rely on numerous computers beyond a smartphone and a laptop just to get through the day. What is new—or newer, at least—and keeps improving at a rapid pace is the infusion of machine-learning capabilities that help these computers do everything from getting you directions to a hip bar in Ubud when you don’t speak any Indonesian to automatically and purposefully taking photos of the people you love.
From the stage, Google CEO Sundar Pichai framed the company’s AI-first approach as making it possible for people to interact with computers in a more conversational, “sensory” way, using voices, gestures, and vision to make these exchanges more seamless. Computing should be ambient and multi-device, he said; it should be “thoughtfully contextual,” and AI can make this happen while getting smarter over time.
On the one hand, these devices do communicate these goals. They hold the promise of a future where technology is more seamlessly integrated into our lives, and it sounds fantastic. For instance, in an on-stage demo, the Pixel Buds quickly translated between Swedish and English for Google’s Home lead designer Isabelle Olsson, who wore the earbuds and spoke in her native Swedish, and Google product manager Juston Payne, who held a Pixel handset. The possibilities for such a device for travel—hell, even for communicating in the Bay Area—are endless.
The Google Clips camera paints a similar future of convenience and delight. You could clip this little device to a bag of flour on your kitchen counter, as a Google video showed the audience, and capture a fun baking session with your kid without ever having to touch the technology.
Then there’s the other hand. Do you want a camera in your home deciding when is a good time to take a photo of you, your child, or your spouse, possibly capturing first steps and birthdays, but also so much more? I have a one-year-old, and the idea immediately repelled me; I already feel guilty when I take time out of our regular routine to snap photos, but then, at least, I’m doing it intentionally.
How about speakers all over your house that have been optimized to understand your kids’ requests? That’s something Google also introduced with Home, which now offers 15 new experiences for kids, ranging from having the company’s Assistant tell jokes to playing games like musical chairs or space trivia to telling stories. It’s typically hard for voice-recognition technology to understand children’s voices, so getting this to work is impressive, and it works with the company’s Family Link software, which lets parents monitor and control kids’ Android-related activities.
Increasingly, this sort of AI is compact enough to run directly on a smartphone, camera, or speaker itself, without needing Internet connectivity to function. This is often touted (rightly so) as a boon for user privacy—Google, in fact, positioned this feature as such when introducing Google Clips, a $249 camera that is “coming soon,” and the Pixel 2 smartphones.
But it also means that as long as these devices have power, they have some powerful features. Google Clips never needs to be online; it will shoot short videos, and they will all be waiting for you when you decide to share them with others or add them to the cloud-based Google Photos service. An on-stage demo showed how Pixel phones will be able to use on-device machine learning to recognize songs playing out loud—say, at your local coffee shop—and show the name of the track and artist on the phone’s lock screen.
Depending on your perspective, that could be great, or horrible. And as I see it getting better and better before my eyes, I think it’s a little bit of both.