Pricing – what's the big deal?
For many years, publishers and librarians have led relatively separate existences, with subscription agents mediating between the two parties. This distant relationship may not have been ideal, but it worked because business models were relatively simple. Today's world looks very different, with a complex array of products, formats, prices and licence terms. This has led to publishers dealing increasingly directly with their library customers as they work to provide bespoke pricing and content packages.
Publishers have adapted their business models in recent years with the twin aim of protecting their traditional income streams and selling more content – hence the birth of the ‘big deal’ for those publishers with collection sizes that warranted it, and those libraries with the budget to be able to afford it. The result was, for many libraries, greatly expanded access to content and a reduced price per page. Large publishers were able to secure their current baseline revenues and continue to achieve some sales growth. Smaller publishers were left somewhat in its wake, with neither sufficient volume of content to attract the attention of large consortia purchasers nor the internal resources to handle complex negotiations. Intermediaries appeared to help plug this gap (for example, the ALPSP Learned Journal Collection offers societies the opportunity to be part of a content collection with other small publishers; organizations like PCG, Empact Sales and Accucoms offer sales representation services), but negotiations across multiple publishers in multiple disciplines can make this level of collaboration challenging.
As budgets continue to tighten, more and more attention from the library community is being focused on understanding the value of the content purchased, and this is likely to lead to greater selectivity in the future. Publishers too realize that basing their pricing on historic print holdings is not going to be sustainable for the long term, for reasons that will be considered later. So, both parties realize that change is required to make content purchase easier and fairer.
How significant are the threats?
Perhaps the first issue to address is whether we have a problem at all. There is no doubt that pricing of scholarly content has evolved hugely over the past 20 years and, as discussed earlier, both publishers and librarians have seen benefits from this. All of this has happened in the face of a ‘serials crisis’ that has been much talked about but has never quite come about – at least not yet. However, the risks are now steadily accumulating, and if libraries and publishers are to ride out the challenging times ahead they will need to work closely with each other, supporting their mutual needs and interests, and those of their customers. This is likely to involve greater flexibility and a wider range of business models than they currently offer.
Although there is plenty of doom and gloom within the community, there is not yet any real evidence that the market faces a catastrophic collapse. We do, though, have to be realistic about the fact that a considerable number of libraries face budgetary pressures that will require them to make difficult choices. Libraries will increasingly have to look beyond demand for content as a measure of its value, instead finding measures of its impact on the success of the communities that they serve. This is going to bring challenges to both libraries and publishers, but in the end will most likely be good for the overall system of scholarly communications – helping unite content producers and purchasers so that publishers get a fair return for producing high-quality, high-impact content, and libraries pay a fair fee for that content based on its usefulness to their individual institution.
How should publishers respond?
So, how should publishers respond to pricing under current circumstances? Well, it really comes back to the basic marketing principles of supply and demand. Publishing has tended in the past to be a supply-driven business: we provide content and people buy it. How useful or how well used that content is tends to be reflected only in the sketchiest of ways (impact factors, for example). The kinds of business model we may be looking at in the future are likely to concentrate our minds more fully on demand – what content users really want and value most. Some publishers are already responding to this potential, using sophisticated analysis of funding, citation and output trends to identify areas for launching new titles, for example. This kind of analysis is likely to become increasingly relevant in subscription acquisition and retention decisions too. So we can anticipate that the next generation of business models will more closely link the price publishers charge for content with the direct relevance and usefulness of that content to a specific institution.
Uniting publishers and librarians
Before speculating on what those models might look like, let's take a moment to examine what both libraries and publishers are trying to achieve and see where we can find some common ground. Starting with libraries: for a long time now librarians have worked to maximize the size of their collections, and as e-journals really began to take off, they saw the opportunity to provide much more content to their patrons than they had previously been able to provide in print. Publishers responded with the big deal. However, to my mind the big deal is unsustainable. I cannot see a future for it as metrics for analysing content and its usefulness are developed – and libraries facing budget restrictions will certainly want to develop those metrics. Potentially there are huge amounts of waste in the system, and budget restrictions are likely to drive that out.
Today's librarian is increasingly focused on buying content that has demonstrable value to their institution and that meets the niche needs of the various communities that they serve. They are also increasingly exploring how best to provide that content with a view to fitting it effectively into the workflow of researchers and students. This alone is likely to have a huge impact on publishers' business models in the future, as their customers will increasingly want to be able to break down content into more granular units that can be accessed and purchased in a whole variety of ways.
Rather than moving from one model to another to accommodate this, we should focus on offering multiple models in the future, depending on the format and access route that suit the needs of individual institutions. However, some things will remain much the same whatever business model we might choose to offer, and it is these factors in particular that we should focus on when we consider pricing models:
- they should be easy to understand
- they should be flexible and offer choice
- they should support, not penalize, an institution in ensuring that the resources it provides attract the highest possible usage
- the cost/benefit ratio must be fair.
Moving on to the publisher's viewpoint, I find that publishers have a lot in common with their library customers. They also want to provide access to as much content as possible! And most publishers are focused on quality and meeting the needs of the communities they serve, both in terms of publishing the right kind of content and delivering it in an expanding range of formats. Attitudes towards pricing are not dissimilar on the whole either. Publishers realize that complex business models mean lost sales (the opaqueness of some pricing structures currently offered helps neither the buyer nor the seller). Publishers, too, ideally want to match the price of their content to the value it represents to an institution, as they recognize that this is the best way not only to retain sales but also to expand into broader markets. They also want pricing to be fair and easy to calculate so that their resources are not tied up in endless negotiations.
Clearly, libraries and publishers are really not so very far apart in what they want to achieve.
And there is some good news about where we already are. A recent report from the Research Information Network1 highlighted that between 2003 and 2007 the number of article downloads doubled, and that as downloads rose the average cost of each download fell, now standing at 80p. The same report also highlighted a strong correlation between usage of content and the success of an institution, based on a wide range of criteria. So, overall, we are making good progress, but there is no doubt still room for improvement and innovation.
What is the future likely to hold for us? Let's look at this from two viewpoints: first, improving current models, and then developing new models for the future.
Improving current models
Print-based pricing was actually a good indicator of value in the past, allowing large institutions or those with high demand to simply buy the required number of subscriptions. Pricing online site licences for packages of content has generally been based on historic print holdings, but this is a very rough approximation of relative value. Also, this has become dated as institutions have grown, contracted, opened and closed departments, and so on. Fundamentally, journal pricing – whether based on historic holdings or as a single fixed-price subscription – remains based on volume of content rather than its value to each customer. This approach is potentially undervaluing content for some customers and pricing it out of the reach of others. Publishers have been aware of this problem for a long time and a number of solutions have been proposed – tiering and usage-based pricing being the two most prominent, but both have their problems.
Tiering by size does not always work for more specialist titles. For example, a large institution with a small medical school may end up paying the top price for medical journals that in reality may only be used by a small sub-section of that institution. Trying to define what constitutes ‘relevant FTEs’ (full time equivalents) and then restricting access to just those FTEs places an almost impossible administrative burden on both libraries and publishers. Usage-based pricing for the academic market feels intuitively wrong – neither publishers nor libraries want to restrict access to content (although it may prove ideal for the corporate market).
Some publishers, acknowledging the drawbacks of both models, have sought to combine elements of the two, using complex calculations to arrive at a fair price. The difficulty of this approach is the manual work required to price for each and every customer, coupled with the lack of transparency for the customer as to how the price was calculated. Clearly, a simpler and more open system of pricing is desirable for all parties.
However, tiering with all its imperfections probably does represent the most practical system we have available to us right now, and for me is a good first step for publishers to take in acknowledging that the ‘one price for all’ model really does not reflect the value offered by online access. There are a variety of tiering systems already in use and all have their strengths and weaknesses, and implementation of this model has its challenges2. Obviously, the more that publishers can consolidate around a small number of tier definition systems the better for their library customers and the agents that have to explain each model; and, if modelled appropriately, tiering can be used to incentivize libraries to consolidate their purchasing and move to e-only access in order to get the best value from the publisher.
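To make the mechanics of tiering concrete, the sketch below maps an institution's FTE count to a price band. The band boundaries, tier names and prices are entirely invented for illustration; real publishers' tier definitions vary widely, which is exactly the consolidation problem noted above.

```python
# Illustrative only: the FTE bands and prices below are invented,
# not any publisher's real rates.
TIERS = [
    (1_000, "Tier 1", 500),          # up to 1,000 FTEs
    (5_000, "Tier 2", 1_200),        # 1,001 - 5,000 FTEs
    (20_000, "Tier 3", 2_750),       # 5,001 - 20,000 FTEs
    (float("inf"), "Tier 4", 4_000), # everything larger
]

def tier_price(fte_count: int) -> tuple[str, int]:
    """Return the tier name and annual price (GBP) for an institution's FTE count."""
    for max_fte, name, price in TIERS:
        if fte_count <= max_fte:
            return name, price
    raise ValueError("unreachable: the final band is unbounded")

print(tier_price(3_400))  # -> ('Tier 2', 1200)
```

The simplicity is the point: a library can see at a glance which band it falls into, which is what makes tiering easier to explain than bespoke per-customer calculations – though, as noted, a raw size measure cannot capture cases like a large institution with only a small medical school.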
In thinking about improving current models we should also consider selling to individuals.
Several studies in recent years have demonstrated that the market for personal subscriptions is in steep decline3, but this bucks a more general trend in society towards individual accountability, with greater acceptance of personal payment for things like university fees, textbooks, etc. Might this trend also affect attitudes towards periodical content in the future? I suspect it might if we look beyond subscriptions to more advanced pay-per-view (PPV) models. Mobile is also likely to make a difference – the immediate need and satisfaction it offers may create new revenue lines for publishers, as long as we can make billing quick and easy. To achieve this we are likely to rely increasingly on intermediaries such as Amazon and DeepDyve. Recently, Google has thrown its hat into the ring with news of a new subscription service4. So this is definitely a market to watch: it may help us sell our content to a wider range of users and provide library patrons with options to purchase content for which a core subscription is not held.
Developing new models
We discussed earlier the importance of simplicity and transparency for future business models, but it does not necessarily follow that the calculation of price will become simpler. Indeed, it is likely to become more complex and formula based, bringing together a wide range of attributes about the purchaser, the users, usage and the content to derive an appropriate price. The algorithm may take a little time to set up at the outset, but once in place – and if the criteria used can be transparent and common across publishers (admittedly, a very big ‘if’!) – it would then be fairly easy to generate a price per institution. If this seems far-fetched, remember that it is already possible, through metrics generated from services such as Thomson Reuters' InCites and Elsevier's SciVal (combined with the Ringgold identifier5), to understand the precise profile of an institution based on its size, type, research output and the quality of that output. It is not such a big step to then associate that information with the price a publisher may charge for access to content.
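A formula of this kind might, for instance, scale a base price by an institution's size, its usage and a score for its research profile. The function below is a made-up sketch: the attribute choices, weights and base price are all hypothetical, not any publisher's actual algorithm, but it shows how a transparent formula could turn an institutional profile into a price.

```python
# A hypothetical value-based pricing formula. All weights, divisors and the
# base price are invented for illustration only.
def institutional_price(base: float, fte: int, annual_downloads: int,
                        research_output_score: float) -> float:
    """Scale a base price by institution size, usage and research profile.

    research_output_score is assumed to lie in [0, 1].
    """
    size_factor = 1 + (fte / 50_000)             # larger institutions pay more
    usage_factor = 1 + (annual_downloads / 100_000)  # heavier usage pays more
    output_factor = 0.5 + research_output_score      # research intensity
    return round(base * size_factor * usage_factor * output_factor, 2)

# e.g. a mid-sized, research-intensive university
print(institutional_price(base=1_000, fte=15_000, annual_downloads=40_000,
                          research_output_score=0.8))
```

Because every input and weight is visible, a library could check the calculation for itself – which is precisely the transparency that today's bespoke, manually negotiated prices lack.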
Looking beyond even this, we may learn from the consumer world, which is already experimenting with dynamic pricing algorithms that constantly adjust the price offered based on competitor pricing, sales levels, profitability and so on, automatically presenting a tailored price at the point of purchase.
Tiering and usage-based pricing still generally place the onus on the publisher to define the price and business rules. Some interesting developments in the e-books marketplace may indicate future options for journal subscription pricing, based on libraries taking a stronger role in classifying themselves and defining the criteria against which a publisher should set their pricing. Patron-driven acquisition (PDA) is an emerging model that puts control in the hands of librarians and also actively involves the library's patrons, which seems logical and sensible in terms of any kind of content acquisition. In response to PDA, e-content distributors are developing sophisticated access control capabilities. The library is able to establish a detailed profile that is then matched to the content available. That initial analysis sets the foundation for the base content to which the institution will immediately have access. Further access to non-subscribed content is available in various ways, with the library's overall budget protected by arrangement with the vendor. This might typically include price-limit settings, deposit accounts, predefined expenditure levels and a disabling feature to cut off access to new content once the budget is depleted. Additional funds can always be added to PDA accounts, and the profile can be adjusted.
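The budget controls described above – per-item price limits, a deposit account, automatic cut-off when funds run out, and top-ups – can be sketched as a simple account object. The class design and all figures below are hypothetical, not any vendor's actual implementation:

```python
# A minimal sketch of PDA-style budget controls: a per-item price cap,
# a deposit account, and a cut-off once the budget is depleted.
# Class design and all figures are hypothetical.
class PDAAccount:
    def __init__(self, deposit: float, price_limit: float):
        self.balance = deposit          # library's deposit account
        self.price_limit = price_limit  # per-item price cap

    def request(self, item_price: float) -> bool:
        """Approve a purchase of non-subscribed content if within limits."""
        if item_price > self.price_limit:  # item exceeds the per-purchase cap
            return False
        if item_price > self.balance:      # budget depleted: access disabled
            return False
        self.balance -= item_price
        return True

    def top_up(self, amount: float) -> None:
        """Additional funds can always be added to the account."""
        self.balance += amount

acct = PDAAccount(deposit=500.0, price_limit=40.0)
print(acct.request(25.0), acct.balance)  # True 475.0
print(acct.request(60.0))                # False: exceeds the price limit
```

The vendor enforces the rules, but every threshold is set by the library – which is the shift in control that makes PDA interesting as a model.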
One of the key benefits of this approach, to me, is that by giving the library the power to define its precise needs, publishers are incentivized to provide content that fits that profile, which in turn drives up standards of quality and relevance. Consumers rather than producers directly set the agenda for what content is needed and the price that should be paid – in effect, a mutually beneficial system.
Overall, the next few years are likely to see the ongoing enhancement of current models alongside experimentation with, and introduction of, new business models. I envisage increasing emphasis on the library community defining its needs more specifically and understanding in more detail the real value derived from the content it subscribes to. The role of intermediaries is likely to become increasingly critical, and it could be argued that library systems vendors are slightly ahead of the game here.
But perhaps subscription agents will also be looking to grab some share of this space, as tools like EBSCO Discovery Service (EDS) could provide the foundation for cross-publisher subscription sales platforms supporting more advanced business models. Certainly there are likely to be opportunities in this area and, if established players in our industry do not grasp this nettle, it may be the Googles and Apples and their like that increasingly dominate.
While we speculate about future business models and value-based pricing, this is all within the context of a market that may become increasingly reluctant to pay anything at all for content; all the algorithms in the world cannot help us with that one. So libraries and publishers alike need to keep in mind that the business model we offer is just one issue. We also need to keep our eye on the many other services that our users want from us and which may be where our future value really lies – such as search, facilitated with semantic tagging and tools that integrate content more directly into workflow. Real value comes from knowledge rather than information alone and I suspect the future for both libraries and publishers relies on us working together to exploit that opportunity.