News

An inside look at The New York Times’ data strategy

The New York Times’ digital subscriber base has experienced impressive growth in recent years. So how is the publisher harnessing its user data to improve conversion and retention? Where is data having the biggest impact? And how did the legacy organisation succeed in cultivating its data culture? Aram Chekijian, VP of Customer Data & Insights, has some answers.

by Simone Flueckiger simone.flueckiger@wan-ifra.org | October 30, 2018

At The New York Times, Chekijian is part of a team whose goal is to provide data as a “common currency” across the business and newsroom, thinking through and communicating the metrics that connect newsroom, product and revenue.

In his role, he oversees all customer data analytics, with his team conducting analyses that span everything from econometric modeling to bottom-up engagement analysis and third-party data ingestion.

In this interview, Chekijian, who will speak at WAN-IFRA’s Digital Media LATAM conference, 14-16 November in Bogotá, Colombia, explains how his team collaborates with other departments, discusses the primary tools used for data analytics, and shares some of the challenges associated with cultivating a data culture in a legacy organisation.

WAN-IFRA: How would you characterise the data analytics culture within The New York Times, particularly in the newsroom? What does this mean for journalists, specifically?

Aram Chekijian: In the newsroom we partner with editors to inform editorial through an analytical understanding of audience and engagement. We often pair this with qualitative research, for a deeper understanding of our existing or prospective audience.

Broadly, we support journalists across the newsroom with tools that highlight relevant engagement metrics and audience descriptives, to help them see whether they have reached the intended audience.

Where is data analytics having the biggest impact at The New York Times today?

We have brought a significant amount of analytics in-house, most recently our media mix modelling, which benefits both cost efficiency and speed of insight. Analytic output that typically involved months of turnaround time is now processed and run in a matter of weeks, all of which has implications for media strategy, and, by extension, revenue. We also have very robust audience engagement modelling that gives us information about how our base composition is changing, and enables strategic planning on an ongoing basis.
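To make the media-mix idea concrete: at its simplest, media mix modelling regresses an outcome (e.g., conversions) on spend per channel to estimate each channel's marginal effect. The sketch below is purely illustrative; the channel names, figures, and plain-OLS approach are invented here and are not the Times' actual model, which would be far more sophisticated (adstock, saturation, seasonality).

```python
# Hypothetical media-mix sketch: all channel names and figures are invented
# to illustrate attributing conversions to paid-media spend via regression.
import numpy as np

# Weekly spend (in $k) per channel: search, social, display.
spend = np.array([
    [50.0, 30.0, 20.0],
    [60.0, 25.0, 15.0],
    [40.0, 35.0, 25.0],
    [55.0, 40.0, 10.0],
    [45.0, 20.0, 30.0],
])
conversions = np.array([1200.0, 1300.0, 1100.0, 1400.0, 1000.0])

# Add an intercept column for baseline (non-media-driven) conversions.
X = np.column_stack([np.ones(len(spend)), spend])

# Ordinary least squares: coefficients estimate conversions per $k of spend.
coef, *_ = np.linalg.lstsq(X, conversions, rcond=None)
baseline, per_channel = coef[0], coef[1:]
print("baseline:", round(baseline, 1))
print("per-channel effect per $k:", np.round(per_channel, 1))
```

Bringing this kind of model in-house means the data pipeline feeding `spend` and `conversions` can be refreshed and re-fit on the publisher's own cadence rather than a vendor's, which is the turnaround gain Chekijian describes.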

Could you elaborate on how data plays into the NYT’s subscription strategy?

Through our modeling efforts, we are able to isolate the effect of both subscriber acquisition and retention tactics over time, and continually refine our media planning, promotion and product offering based on market changes. Our analytics are able to suss out the effectiveness of sales and paid media weight, enabling continued improvement and refinement.

How does the data team collaborate with other teams and departments across the NYT?

About 70-80% of the work is coordinated and prioritised in advance in the form of project roadmaps. The Data group has its own internal roadmap, which is harmonised with other departments’ priorities on a cyclical (typically bi-weekly) basis with assistance from the Product Management organisation. This creates visibility into workload and tradeoffs across initiatives, and also helps with resourcing.

The remaining 20-30% is business as usual (BAU) work, which involves ad-hoc analytics and incremental partner/stakeholder needs as requested. Typically these involve decision-making across more than one department, and communication and collaboration are facilitated by the Google stack (Docs, Sheets, Hangouts, etc.), as well as Slack. In-person meetings are usually required to finalise any deliverables, but occasionally memos or decks are produced entirely through virtual collaboration.

What further opportunities and potential do you see in leveraging existing and new data, both editorially and commercially?

On the editorial side, we’re conducting new research on measures of engagement within stories (e.g., time spent, scroll depth) to better understand signals of a valuable read. Commercially, we have created tools such as Readerscope and Project Feels, which began as internal tools but have been adapted for external business purposes.
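To illustrate what combining such in-story signals might look like, here is a deliberately toy scoring function. The 180-second cap, the equal weighting, and the function itself are invented for this sketch; they are not the Times' actual engagement model.

```python
# Illustrative only: a toy "valuable read" score from dwell time and scroll
# depth. The cap and weights are invented, not the Times' actual model.
def engagement_score(seconds_on_page: float, scroll_depth: float) -> float:
    """Combine dwell time and scroll depth (0.0-1.0) into a 0-1 score."""
    # Cap dwell time at 180s so one long-idle tab doesn't dominate.
    time_component = min(seconds_on_page, 180.0) / 180.0
    return 0.5 * time_component + 0.5 * scroll_depth

print(engagement_score(90.0, 0.8))   # half the time budget, deep scroll → 0.65
print(engagement_score(200.0, 0.2))  # long dwell but shallow scroll → 0.6
```

The point of research like the Times describes is precisely to learn, from data, which combinations of signals actually indicate a valuable read, rather than fixing weights by hand as this sketch does.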

In the newsroom, what are the primary tools that are driving content decisions? How much of that is off-the-shelf and/or developed in-house?

We developed STELA (story and event-level analytics) to provide ready access for reporters and editors to relevant reporting on stories we publish. This includes reporting on how and when we published the story, descriptive statistics on the audience reached (e.g., geography, referrer, subscriber mix) as well as key engagement metrics. We partner with journalists through the ongoing development process of STELA.
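The kind of story-level rollup described (audience breakdown by geography or referrer, subscriber mix, per-story engagement) can be sketched as a simple aggregation over pageview events. The event schema, field names, and data below are hypothetical; this is not STELA's implementation, only an illustration of the reporting shape it surfaces.

```python
# Hypothetical sketch of a story-level rollup like the reporting STELA
# surfaces; the event fields and values are invented for illustration.
from collections import Counter, defaultdict

events = [
    {"story": "a1", "referrer": "search",   "subscriber": True},
    {"story": "a1", "referrer": "social",   "subscriber": False},
    {"story": "a1", "referrer": "search",   "subscriber": False},
    {"story": "b2", "referrer": "homepage", "subscriber": True},
]

# Per story: total views, referrer mix, and subscriber share.
rollup = defaultdict(lambda: {"views": 0, "referrers": Counter(), "subs": 0})
for e in events:
    row = rollup[e["story"]]
    row["views"] += 1
    row["referrers"][e["referrer"]] += 1
    row["subs"] += int(e["subscriber"])

for story, row in sorted(rollup.items()):
    sub_mix = row["subs"] / row["views"]
    print(story, row["views"], dict(row["referrers"]), round(sub_mix, 2))
```

Exposing this kind of per-story view directly to reporters and editors, rather than through an analyst intermediary, is what makes such a tool useful in the newsroom.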

Additionally, we use Chartbeat to inform programming decisions on our home screen and Newswhip for a view into coverage of storylines across publishers.

What have been some of the greatest challenges in cultivating a data culture in a legacy organisation, and what are some of the challenges moving forward?

Until relatively recently, print and digital were separate businesses. Integrating and migrating older systems, some dating back decades, to our modern cloud-based architecture required massive cross-departmental initiatives. Now, having a common system for querying and analytics has made the output more usable, and has facilitated data literacy, governance and standardisation. This has brought us more in line with other subscription-based services.

Moving forward will involve scaling our efforts further as we build new capabilities and infrastructure to support our continually evolving product.



Don’t miss out on this year’s Digital Media LATAM conference, taking place from 14 to 16 November in Bogotá, to hear more from Aram Chekijian about how The New York Times harnesses and capitalises on user data.
