
The New York Times’ long view on wearables

The Apple Watch has brought renewed buzz (and mixed reviews) to the wearables discussion, with news executives watching to see what publishers do with the device. The New York Times R&D Lab’s Executive Director, Matt Boggie, looks beyond the release of a single device to the future implications of small-screen technology.

by WAN-IFRA Staff executivenews@wan-ifra.org | April 14, 2015

Boggie will speak at the upcoming World News Media Congress in Washington, D.C. (1-3 June) in a session entitled “Live from the lab.” We spoke to him about The New York Times’ approach to wearables as part of an upcoming WAN-IFRA report, “Wearables for news publishers.”

How excited are you about the current state of wearables?

You have to remember that we are at the very beginning of the process of working out what to do with those new small screens.

As the next generation of devices becomes more ubiquitous, we will build new interactions, but with wearables we are seeing a lot of attempts to take what we’ve already done on other screens and port it over to the new device.

That’s an age-old problem – the first reaction to TV cameras was to read radio scripts into them. Realistically, we will have to live with the new technology for a while before we start coming up with better interactions.

So what are you looking at for the future?

At the labs we’re never really looking at the next product but rather several cycles beyond it. With wearables, the first wave of uses is very aspatial and asocial. The things I want to know as a person are about interacting with my space, but a breaking news alert pops up the same way whether I’m in Arizona or in Europe.

We’re looking at making things much more spatial. Take our coffee shop recommendations, for example: if you’re walking past a spot we recommend in Barcelona, we want to alert you and, from there, lead you to our tour of the city. Putting that element of space back into the information is important.
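A rough sketch of the kind of spatial trigger Boggie describes might look like the following: check the reader’s position against a list of recommended spots and send an alert when one is within walking distance. The spot names, coordinates, radius, and the notify() stub are purely illustrative assumptions, not anything drawn from the Times’ systems.

```python
# Minimal sketch of a proximity-triggered recommendation alert.
# Spot names, coordinates and notify() are hypothetical placeholders.
from math import radians, sin, cos, asin, sqrt

RECOMMENDED_SPOTS = [
    ("Example Cafe, Gracia", 41.4036, 2.1561),
    ("Example Cafe, El Born", 41.3851, 2.1820),
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(a))

def notify(message):
    # Stand-in for a push notification to the watch.
    print(message)

def check_nearby(reader_lat, reader_lon, radius_m=150):
    """Alert the reader about any recommended spot within radius_m metres."""
    for name, lat, lon in RECOMMENDED_SPOTS:
        if haversine_m(reader_lat, reader_lon, lat, lon) <= radius_m:
            notify(f"You're near {name} - tap to open our city tour.")

check_nearby(41.4035, 2.1560)  # simulated position near the first spot
```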

The other thing is that the current devices are not social – if anything, they are about separating you from the people you are with. We want to look at ways of adding new dimensions to the way you speak to the people in front of you. For example, imagine a wearable that listens to conversations and lights up when a topic comes up that you have been researching – something you have resonance with. That’s using wearables to try to bridge the online-offline gap and create an enhanced social experience.
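One toy way to picture that “resonance” cue: match words from an already-transcribed snippet of conversation against topics the wearer has recently been reading about, and signal on any overlap. The topic lists, the transcript, and the light_up() stub are assumptions for illustration only.

```python
# Minimal sketch: signal when a conversation touches a topic the wearer
# has recently researched. Transcription and the light/haptic cue are stubbed.
RECENT_READING_TOPICS = {
    # Hypothetical topics drawn from the wearer's reading history
    "wearables": {"wearable", "smartwatch", "watch"},
    "urban planning": {"zoning", "transit", "bikeshare"},
}

def light_up(topic):
    # Stand-in for driving an LED or haptic pulse on the device.
    print(f"* resonance: {topic} *")

def scan_transcript(snippet):
    """Check a snippet of transcribed speech against the wearer's topics."""
    words = set(snippet.lower().split())
    for topic, keywords in RECENT_READING_TOPICS.items():
        if words & keywords:
            light_up(topic)

scan_transcript("have you tried the new smartwatch yet")
```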

There’s an understandable focus on smartwatches, but how do you see goggles and visual overlay displays in the future?

When it comes to overlays, I think that where Google Glass got it wrong was in assuming you would wear it all the time, but there are real places where the visual experience is useful – take a look at Microsoft’s HoloLens.

The problem with [camera input devices] is a different one: there are few companies other than Google and Foursquare that can give you visual information to overlay on the position you are in. Even here at The New York Times, in a city where we have extensive coverage of architecture, the process of tagging and cleaning up our articles so they could be part of that would be quite an undertaking – and we are a lot further along than most of our competitors.
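The tagging work Boggie alludes to amounts to attaching location metadata to archive articles so they can later be surfaced by position. Below is a hypothetical sketch of that step using a tiny hand-built gazetteer; the place names, coordinates, and function names are illustrative, not the Times’ actual archive tooling.

```python
# Minimal sketch of the tagging step: attach coordinates to an archive
# article by looking up place names in a small, hypothetical gazetteer.
GAZETTEER = {
    # Hypothetical lookup table; a real pipeline would use a geocoder.
    "flatiron building": (40.7411, -73.9897),
    "grand central terminal": (40.7527, -73.9772),
}

def tag_article(article_text):
    """Return (place, lat, lon) tuples for gazetteer places found in the text."""
    text = article_text.lower()
    return [(place, lat, lon)
            for place, (lat, lon) in GAZETTEER.items()
            if place in text]

print(tag_article("A review of the restoration work at Grand Central Terminal."))
```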

Two levels of touch, voice control… how do you see us interacting with wearables?

We’re at an interesting point with these devices that feel analogue but have a subtle digital participation. We are so used to watches being simple that making them into tiny phones on your wrist may prove interesting to some, but it will create a split between those designing for aesthetics and those designing for function.

When you get ready in the morning it seems natural to use voice. I don’t often talk to my computer because it has a rich set of interfaces, but you could imagine a talking mirror – a piece of furniture – that would work. With wearables we are in a weird place because they are a mix of analogue objects and digital devices, so do I use a crown, or do I talk and tap? Apple has done both with Siri and double touch, but they have also included the scroll wheel from 20 years ago.

What do you expect from some of this in the future?

There are more important things than location and time. If I’m on my way to work and I bump into a friend and have a discussion, I’m more interested in what we were talking about than when and where. There are ways you could use speech recognition and text analysis to capture those moments, although of course all such things come with questions about the limits of privacy and where that data goes. With all this data it is important to think about where it goes, who sees it, how long it is kept, and whether we are happy with that intrusion.
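Those constraints can be made concrete by storing any captured conversation “moment” alongside explicit rules for access and retention. The sketch below is one hypothetical way to encode that; the field names and the 30-day window are assumptions, not any real Times policy.

```python
# Minimal sketch: keep a captured conversation "moment" together with
# explicit privacy constraints (owner-only access, fixed retention window).
# Field names and the 30-day window are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=30)  # assumed policy, not a real product setting

def _now():
    return datetime.now(timezone.utc)

@dataclass
class CapturedMoment:
    owner: str                 # the only identity allowed to read the record
    transcript_snippet: str    # output of on-device speech recognition
    topics: list               # keywords produced by text analysis
    captured_at: datetime = field(default_factory=_now)

    def expired(self):
        """True once the retention window has passed and the record should be purged."""
        return _now() - self.captured_at > RETENTION

    def readable_by(self, user):
        """Owner-only access: nobody else ever sees the raw transcript."""
        return user == self.owner and not self.expired()
```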
