Introduction

The burgeoning availability of reliable usage data for online journals has opened the door to usage-based measures of journal impact, value and status. Since 2002 COUNTER1 has provided a standard for vendor-generated usage statistics for individual libraries and library consortia, while the MESUR project2 has demonstrated the potential value of a wide range of usage-based metrics for assessing the impact of journals at a global level. A common underlying theme of both projects is that usage-based alternatives to citation-based metrics are both desirable and increasingly practical.

While ISI's Journal Impact Factors (JIFs), based on citation data, have become generally accepted as a valid measure of the quality of scholarly journals, and are widely used as such by publishers, authors, funding agencies and librarians, there are misgivings about over-reliance on the Impact Factor alone. The availability of the majority of significant scholarly journals online, combined with increasingly credible COUNTER-compliant online usage statistics, raises the possibility of a parallel, usage-based measure of journal performance becoming a viable additional metric. Such a metric, which may be termed ‘Journal Usage Factor’ (JUF), could be based on the data contained in COUNTER Journal Report 1 (Number of Successful Full-text Article Requests by Month and Journal), calculated as illustrated in Equation 1 below for an individual journal:

    JUF = total usage, over a specified usage period x, of articles published online during period y
          ÷ total number of articles published online during period y                    (Equation 1)
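
For illustration only, with hypothetical figures rather than data from the project: a journal that publishes 250 articles online during period y, whose articles attract 100,000 full-text requests during usage period x, would have

    JUF = 100,000 / 250 = 400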

Stage 1 of this project3, funded by UKSG, was a survey into the feasibility of developing and implementing JUF. Reporting in 2008, it demonstrated not only that the JUF concept is a meaningful one, but also that there is considerable support from the publisher, librarian and research communities for this new metric. The main conclusions of Stage 1 were:

  • the majority of publishers are supportive of the JUF concept, appear to be willing, in principle, to participate in the calculation and publication of JUFs, and are prepared to see their journals ranked according to JUF
  • there is a diversity of opinion on the way in which JUF should be calculated, in particular on how to define the following terms: ‘total usage’, ‘specified usage period’, and ‘total number of articles published online’. Tests with real usage data will be required to refine the definitions for these terms.
  • the great majority of authors in all fields of academic research would welcome a new, usage-based measure of the value of journals
  • JUF, were it available, would be a highly ranked factor by librarians, especially for the evaluation of journals for potential purchase
  • COUNTER is on the whole trusted by librarians and publishers, and is seen as having a role in the development and maintenance of JUFs, possibly in partnership with another industry organization. Any organization filling this role must be trusted by, and include representatives of, both librarians and publishers
  • there are several structural problems with online usage data that would have to be addressed for JUFs to be credible. Notable among these is the perception that online usage data is much more easily gamed than is citation data.

Based on the results of Stage 1, UKSG, RIN (the UK Research Information Network), ALPSP (the Association of Learned and Professional Society Publishers), the International Association of STM Publishers and a group of publishers decided to fund a further study, Stage 2, with the objectives described below. Following an open Request for Proposals in 2009, John Cox Associates and Frontline Global Marketing Services Ltd were appointed to do the work, and reported their findings in September 2010. The full report of this study is available on the UKSG website4. This article summarizes the project objectives, results, conclusions and recommendations.

Objective of Stage 2

The overall aim of the Journal Usage Factor Stage 2 study was to assess the viability of JUF as a reliable, implementable, cost-effective tool for assessing the relative status and value of journals by testing each of the individual elements in Equation 1 above using real publisher usage data from a range of vendors.

Methodology

Test usage data for 326 journals, covering five broad subject areas (engineering, humanities, medicine and life sciences, physical sciences, and social sciences), was obtained from seven publishers (ACS Publications, Emerald, IOP Publishing, Nature Publishing Group, OUP, SAGE and Springer) for the publication years 2006–2009. One- and two-year publication periods (‘y’ in Equation 1), as well as one- and two-year usage periods (‘x’ in Equation 1), were tested.

Publishers were asked to distinguish between versions of articles, i.e. the ‘version of record’ (VoR), ‘accepted version’ and ‘proof’5. This proved difficult for some publishers, who do not make the distinction when replacing an earlier version of an article with the VoR. Publishers were also asked to classify items within the journal as ‘Article’ or ‘Non-article’ content and to exclude standing matter (such as cover pages, contents, indexes, acknowledgements, etc.). This also proved difficult for some publishers because of the complex ways in which they label items; some publishers had in excess of 300 item types. All were able to exclude standing matter, but for some the contractor had to accept ‘all content’ rather than classified data.

The precise selection of journals for each subject area was agreed with each participating publisher. The intention was to form a balanced range of around 40–50 titles for each of the five broad subjects. In reality, however, the number of journals in each broad subject was as follows: engineering, 38; humanities, 35; medicine and life sciences, 102; physical sciences, 32; social sciences, 119. This imbalance reflected the participating publishers' lists and the disciplines selected. Many of the publishers publish on behalf of learned societies, and some chose to exclude society journals to avoid a lengthy process of asking permission for each journal to be included.

The usage data collected from the participating publishers provided coverage from 2006 through 2009; full data for 2006 was available from only one publisher, and full data for 2007 from only some publishers. The data nevertheless allowed JUF calculations for a range of publication periods (y) and usage periods (x) in Equation 1.

Evaluation of the JUF variables

The effects on JUF of four variables were analyzed in the course of the study: content type (all content vs articles only); article version (accepted version, proof, version of record); publication period; and usage period.

Content type

The JUFs for all content and for articles only were compared, and evaluated in the context of their practical implementation. Little significant difference in JUFs was observed between ‘all content’ and ‘articles only’ in the humanities, physical sciences, and business and management. In the social sciences, the JUFs were lower in the articles-only category, indicating that readers made considerable use of non-article content. In medicine and life sciences, and in the sub-set of clinical medicine, JUFs were higher in the articles-only calculation, indicating that readers are much more inclined to use articles than other editorial content. No firm conclusions could be drawn in engineering, as the JUFs fluctuated from period to period. It is clear that non-article content is relevant and used across the disciplines, though much less so in medicine and life sciences.

Item type control is difficult to manage. Using all content (i.e. all editorial content including articles, editorials, book reviews, etc., but not standing matter such as editorial board lists, subscription and permissions details, etc.) reduces the likelihood of item misdescription by eliminating the need for detailed categorization, and reduces the burden on publishers. Editorial matter is published for a purpose, and its usage forms part of the usage of the journal as a whole. Even with the adoption of all content, publishers will have to adhere strictly to the specification and prevent extraneous items such as standing matter from creeping in.

For consistency across all disciplines, the balance of advantage appears to lie with a JUF based on all content, which provides a more robust metric than one based solely on articles.

Further research and testing on a wider range of journals, across more disciplines, will be necessary in order to confirm these conclusions.

Article version

In view of the inconsistencies among publishers in their approaches to differentiating between versions of articles, as well as the desirability of capturing usage as soon as an article appears online, it was decided that, for the purposes of this project, the balance of advantage lies with including all versions in the JUF metric. This approach not only enables complete usage to be captured in the JUF, but also minimizes problems of data accuracy. This issue can be revisited once publishers adopt a consistent policy on article version control, based, for example, on the recommendations of the NISO/ALPSP Technical Working Group on Journal Article Versions6.

Publication period

In considering whether to recommend a one-year or two-year publication period, three factors were taken into account:

  • the Journal Impact Factor is based on a publication period of one year, with citations measured in the following two years. Whether the JUF should follow the same structure as the JIF is a matter of preference
  • the data demonstrated that there were occasional unexplained peaks or troughs in usage. A longer publication period would have a ‘smoothing’ effect on the JUF, reducing the impact of such usage events
  • a two-year publication period would reduce the effect on the JUF of early publication where it is offered by the publisher, and also ‘smooth’ the effect of the tapering usage at the end of the usage period.

It was decided that a two-year publication period offers consistency and a smoothing effect, yielding a more reliable metric than one based on a single year. It is recommended that a two-year publication period be adopted.
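
The smoothing effect can be illustrated with hypothetical figures (not drawn from the test data). Suppose a journal publishes 100 articles in each of two successive years, each cohort attracting 30,000 downloads in its usage period, and an unexplained one-off event adds 10,000 downloads to the first cohort:

    One-year publication periods:  JUF (year 1) = 40,000 / 100 = 400;  JUF (year 2) = 30,000 / 100 = 300
    Two-year publication period:   JUF = 70,000 / 200 = 350

The same spike shifts the one-year JUF by 100, but the two-year JUF by only 50.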

Usage period

Four usage periods were considered: 1–12 months, 1–24 months, 13–24 months, and 13–36 months after the month of publication. It was agreed that relying on usage in the periods 13–24 months and 13–36 months would not be desirable, for the following reasons:

  • delaying the capture of usage data until 12 months after publication excludes usage that takes place as soon as the article becomes available. It was apparent from the data that usage in the first few months is substantial, and reflects the importance of timely access to researchers, particularly in STM disciplines. To ignore this usage would be to base the JUF on incomplete data and seriously distort the result
  • by definition, the publication period would be some years old – e.g. 2007 publication and usage in 2008–09 would result in a JUF being available well into 2010, while a 2007–08 publication period and usage in 2009–10 would produce a JUF in 2011. The resulting JUF would be historical rather than current, devaluing the metric.

In evaluating the advantages and disadvantages of usage periods of one and two years immediately after publication (i.e. usage in months 1–12 and 1–24 after the month of publication), it was considered that a one-year period suffers from the tapering effect of usage of articles published later in the year. In order to provide a more reliable base of usage data, a two-year period is preferable, and the recommendation is to adopt a usage period of 1–24 months after the month of online publication.
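
To make the timing concrete, consider a hypothetical article published online in June 2008. Under the rejected 13–24 month window, counted usage would run only from July 2009 to June 2010, discarding the typically heavy usage of the first twelve months; under the rejected 13–36 month window the figure would not be complete until June 2011. Under the recommended 1–24 month window, counting begins in July 2008, first-year usage is captured, and the figure is complete by June 2010.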

JUF and JIF

The relationship between the JIF and the JUF has already been examined in the context of usage of BioOne life sciences journals within a group of 112 US institutions in the SCELC Consortium7. That case study referred to this investigation and used the same equation for its calculations. Its major conclusion was that the JIF and the JUF are not related: the relationship between the two appeared to be essentially random.

In the present study, JIFs and JUFs were compared for the top 20 JUF titles and the top ten JIF titles in each discipline, a more comprehensive test than that reported by BioOne.

  • Top 20 JUFs: In engineering, medicine and life sciences, and the physical sciences, the JIFs and JUFs of some high-JIF/high-JUF titles correlated. Otherwise there was little correlation, regardless of discipline. In the humanities and social sciences, JIF coverage is much less complete.
  • Top 10 JIFs: In engineering there appears to be a reasonably close correlation between the two metrics, but none in any of the other disciplines.

The conclusion has to be drawn that in most cases there is little or no correlation between the JIF and the JUF. Exceptions appear to be where the brand (publisher and/or journal) is particularly strong and may drive usage as well as citations.
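
The summary above does not specify which correlation statistic was used in the study; as a minimal sketch of how such a comparison might be run, the following Python fragment computes Spearman's rank correlation on hypothetical JIF/JUF pairs (both lists are invented for illustration):

    # Illustrative only: hypothetical JIF/JUF values, not data from this study.
    # Spearman's rank correlation tests whether journals rank similarly
    # under the two metrics.
    from scipy.stats import spearmanr

    jif = [12.1, 8.4, 5.2, 3.9, 2.7, 2.1, 1.8, 1.3, 0.9, 0.4]  # hypothetical
    juf = [310, 95, 400, 120, 260, 80, 150, 330, 60, 210]       # hypothetical

    rho, p = spearmanr(jif, juf)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")

A rho near zero would be consistent with the finding of little or no correlation between the two metrics.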

Conclusions

This study has demonstrated that:

  • the COUNTER usage data, with some extensions, are a feasible basis for the calculation of JUFs
  • the range between the highest and lowest JUFs is substantial, as it is with JIFs: JUFs could, therefore, be useful as a means of differentiating journals within a discipline
  • there are clear differences in journal rankings within disciplines when titles are ranked according to JUF, compared with their rankings according to JIF
  • a number of journals with no JIF appear in the top 20 journals (in the selection of titles provided by publishers for this project) within each discipline when ranked by JUF
  • as with the JIF, the JUF is unlikely to be a useful comparator of journals between disciplines.

Recommendations

  • in Equation 1 the usage period (x) should be 24 months, while the publication period covered (y) should be a maximum of 24 months. This publication period provides a more consistent, reliable metric that is less subject to fluctuations due to early publication (where offered) and other factors. A 24-month usage period, contemporaneous with the publication period, provides a metric that is also more current than citation-based metrics (a calculation sketch follows this list). In other words:

    JUF = total usage, during months 1–24 after the month of online publication, of articles published online over a 24-month period
          ÷ total number of articles published online during that period

  • the JUF should be based on
    • all content types, with the exception of standing matter (this needs to be tested further, as including non-article content affects the JUF differently in different disciplines)
    • all published versions of the article (i.e. accepted version, proof and version of record)
  • the COUNTER-based specification for publisher usage data used for this project should be refined to incorporate additional fields, such as ‘item type’
  • an agreed standard for content item types should be developed, to which journal-specific item types would be mapped
  • a simple subject taxonomy, to which journal titles can be assigned, should be developed
  • publishers should adopt standard article version definitions based on the NISO/ALPSP recommendations8
  • the process for extracting the usage data from publisher systems must be automated before it will be feasible for JUFs to be calculated routinely and on a large scale for around 20,000 online journals.
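
As a minimal sketch of the recommended calculation (Equation 1 with a 24-month publication period and usage counted in months 1–24 after each item's month of online publication), the following Python fragment may be helpful; the data layout and function names are assumptions for illustration, not part of any COUNTER specification:

    from datetime import date

    def months_after(pub: date, month: date) -> int:
        # Whole months elapsed since the month of online publication.
        return (month.year - pub.year) * 12 + (month.month - pub.month)

    def journal_usage_factor(items, period_start: date, period_end: date) -> float:
        # items: iterable of (online_publication_date, {month: downloads}).
        # Counts usage in months 1-24 after each item's month of publication,
        # for items published online in the given 24-month window, then
        # divides by the number of items published in that window.
        cohort = [(pub, usage) for pub, usage in items
                  if period_start <= pub <= period_end]
        if not cohort:
            return 0.0
        total = sum(n for pub, usage in cohort
                    for month, n in usage.items()
                    if 1 <= months_after(pub, month) <= 24)
        return total / len(cohort)

    # Hypothetical example: two articles published in a 2007-08 window.
    items = [
        (date(2007, 3, 15), {date(2007, 4, 1): 120, date(2008, 1, 1): 60}),
        (date(2008, 11, 2), {date(2008, 12, 1): 200, date(2010, 11, 1): 40}),
    ]
    print(journal_usage_factor(items, date(2007, 1, 1), date(2008, 12, 31)))  # 210.0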

Next steps

Before it is proposed that JUF be adopted formally as a new standard, there should be further analysis of the test usage data collected in this project. The aims of this further analysis will be:

  1. to validate the results obtained so far
  2. to extend the analysis comparing JUF with JIF to cover all the journals in the project, ranked by Usage Factor within each subject category
  3. to assess whether the proposed 24-month usage period could be shortened without compromising the reliability of the metric
  4. to investigate the impact of different gaming/fraud scenarios and propose approaches to dealing with these
  5. to suggest other usage-based metrics that could provide insights into the relative status/value/prestige of individual journals.