
by Drew Lazzara

Let me start by saying that, when it comes to the application of advanced analytics to publishing, I am not the man for the job. I have only just dipped a toe into these statistical waters, and while my hunger for knowledge is voracious, my understanding of the topic is general and conversational at best. I’m asking for a little patience from the stat wonks and a little feedback from the rest of you.

I’m positive that, like all businesses, publishers employ all manner of data. Sales figures, cost analyses, market trends: I’m sure it’s a veritable kaleidoscope of numbers used to deftly strategize and plot the future. Yet it strikes me that all this information is put to use largely to optimize what publishers already do rather than to prescribe ways to adapt to a changed landscape.

Those changes are the well-documented scourge of publishing. Ebooks, print-on-demand, and the primacy of Amazon all undercut some of the key services that the publishing industry provides authors and readers. The industry response to this pressure has been disappointing and largely uninventive. At this stage in the game, big publishing is not competing on its own terms. And it’s losing.

Tilting the playing field back again requires new modes of thought, and that is the purview of advanced statistical analysis. Publishers make their acquisition decisions based largely on their ability to conceptualize a manuscript’s success: its resemblance to top sellers in their catalog, their familiarity with marketing a certain type of book, the visibility of the author. But these assessments are ultimately just gut feelings, and so much of a book’s success comes down to luck.

But when it is possible to know so much exact information about the reading habits and buying patterns of the public, why rely on luck and guts? In an already low-margin industry, why take any more chances than necessary? Gathering and analyzing that data might make it possible for publishers’ “gut feelings” to be guided more precisely, informing acquisitions and giving the buying public exactly what they want almost all of the time. This would guide print run decisions, eliminate overhead, and reduce returns. It would also engender brand loyalty, a concept that is pretty much non-existent in big publishing.
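
To make that concrete, here is a minimal sketch of what a data-guided print run decision might look like, assuming the simplest possible method: average the sales of comparable titles and print a bit under that average. The comparable-title approach, the safety margin, and every number here are my own illustrative assumptions, not any publisher’s actual process.

```python
# Toy sketch: estimate an initial print run from the first-year sales
# of comparable titles. Every figure here is invented for illustration.

comparable_sales = {
    "Comp Title A": 4200,
    "Comp Title B": 3100,
    "Comp Title C": 5600,
}

SAFETY_MARGIN = 0.9  # print slightly under the comps' average to limit returns

average = sum(comparable_sales.values()) / len(comparable_sales)
print_run = int(average * SAFETY_MARGIN)
print(f"Suggested initial print run: {print_run:,} copies")
```

Even a crude rule like this beats a pure guess, because it errs on the side of fewer returned copies, which is where the margin bleeds.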

The problem, of course, is that it’s not as simple as I make it sound. For starters, you can’t just snap your fingers and produce comprehensive consumer data. And you really can’t even begin the project unless readers are largely willing participants. Amazon has perhaps the most advanced consumer database in the world, but gathering that information requires not only the constant development of data-mining tools, but also that customers shop there in the first place. Publishers would love for every purchase to be made directly on their own websites; it would yield greater margins for the publisher. But even without advanced metrics, it is anecdotally clear that people don’t buy books that way.

So the quest for information, and thus the quest for smarter, more targeted content decisions, starts with a concerted effort to drive traffic to non-Amazon digital retail spaces. The development of publishers as their own primary retailers starts with the de-conglomeration of publishing. For the biggest publishers, that doesn’t mean untethering decades of mergers; it means allowing imprints to operate with more editorial and brand independence. Go visit the Random House website. Can you tell me anything distinguishing about any of its dozens of imprints? I didn’t think so. By creating distinct niches (even if, for now, those niches are still defined by gut feelings), you create a customer base that turns to you for something specific. And one that will tell you exactly what it wants. It’s something that small publishers are already doing.

Another way to gather data is to incentivize visits to your own publishing site. Amazon does this through its Associates Program, which allows anyone with a webpage to add an Amazon link and share in a percentage of any sales to which that link directly contributes. Such partnerships cost practically nothing to implement, add prestige to the partner site, and drive traffic to your own. These partnerships also create their own customer network, allowing publishers to better understand the kinds of sites that most interest their consumers and thus to target them better with the products they offer.
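
For a sense of the arithmetic involved, here is a minimal sketch of how such a revenue share might work. The 4% commission rate, the partner site names, and the sale prices are all invented for illustration; actual affiliate terms vary by program and product category.

```python
# Toy illustration of an affiliate revenue share, with each referred
# sale tagged by the partner site that drove it. The commission rate
# and all figures below are made-up numbers, not real program terms.

AFFILIATE_RATE = 0.04  # hypothetical share paid to the referring site

# (partner_site, sale_price) pairs for one month of referred sales
referred_sales = [
    ("bookblog.example.com", 24.95),
    ("bookblog.example.com", 15.00),
    ("poetrycorner.example.com", 9.99),
]

# Tally what each partner site is owed.
payouts: dict[str, float] = {}
for site, price in referred_sales:
    payouts[site] = payouts.get(site, 0.0) + price * AFFILIATE_RATE

total_sales = sum(price for _, price in referred_sales)
total_payout = sum(payouts.values())
print(f"Publisher keeps ${total_sales - total_payout:.2f} of ${total_sales:.2f}")
for site, owed in payouts.items():
    print(f"  {site} earns ${owed:.2f}")
```

Note that the same ledger that pays the partners doubles as a crude map of where your readers come from, which is the data the previous paragraph is after.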

As I said, my understanding of statistics and their predictive possibilities is limited. I’ve just tried to think out loud here about some fairly broad and obvious ways to make our industry more robust. I leave it to people much smarter than I to hatch new plans for the collection and application of data in the name of invigorating the business of books. I have faith in them.

In the meantime, I’d like to do a bit of less-scientific data mining myself. In the comments section, please leave your thoughts on this piece and tell us what brought you here. Are you a regular reader? Have you ever purchased one of Ooligan’s titles? What kinds of things do you like to read? Where do you buy books? I promise we won’t use this information for nefarious purposes. We just want to make publishing better, starting with Ooligan Press.
