The feasibility of developing and implementing journal usage factors: a research project sponsored by UKSG

ISI's journal impact factors, based on citation data, have become generally accepted as a valid measure of the quality of scholarly journals, and are widely used by publishers, authors, funding agencies and librarians as measures of journal quality. There are, nevertheless, misgivings about an over-reliance on impact factor alone in this respect and there is growing interest in the development of usage-based alternatives to citation-based measures of journal performance. Against this background, the United Kingdom Serials Group (UKSG) thought it would be timely to sponsor a study to investigate the feasibility of journal usage factors. This article describes the aims of the study, the results obtained and the conclusions drawn. There appears to be significant support, even among established publishers whose journals perform well in impact factor rankings, for the development and implementation of journal usage factors.


Introduction
ISI's journal impact factors, based on citation data, have become generally accepted as a valid measure of the quality of scholarly journals, and are widely used by publishers, authors, funding agencies and librarians as measures of journal quality. There are, nevertheless, misgivings about an over-reliance on impact factor alone in this respect 1 and other, author-centred, citation-based measures, such as the Hirsch Index 2, are gaining support. The availability of the majority of significant scholarly journals online, combined with the availability of increasingly credible COUNTER-compliant online usage statistics, raises the possibility of a parallel usage-based measure of journal performance becoming a viable additional metric. Such a metric, which may be termed 'usage factor', could be based on the data contained in COUNTER Journal Report 1 (number of successful full-text article requests by month and journal) and calculated for an individual journal as shown in Equation 1:

Usage factor = Total usage (COUNTER JR1 data for a specified period) / Total number of articles published online (during a specified period)   (1)

There is growing interest in the development of usage-based alternatives to citation-based measures of journal performance and this is reflected in the funding being made available for this work. Especially noteworthy in this respect is the work of Bollen and Van de Sompel at Los Alamos National Laboratory. 3 Against this background, the United Kingdom Serials Group (UKSG) thought it would be timely to sponsor a study to investigate the feasibility of journal usage factors.
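As an illustration of Equation 1, the sketch below computes a usage factor from monthly JR1-style full-text request counts. The journal figures are invented for illustration; a real calculation would use audited COUNTER Journal Report 1 data.

```python
# Illustrative sketch of the usage factor (UF) in Equation 1.
# All figures below are hypothetical, not real COUNTER data.

def usage_factor(monthly_requests, articles_published_online):
    """Total JR1 full-text article requests over the period divided by
    the number of articles published online during that period."""
    if articles_published_online <= 0:
        raise ValueError("the period must contain at least one article")
    return sum(monthly_requests) / articles_published_online

# Hypothetical twelve months of JR1 data for one journal.
jr1_monthly = [410, 380, 450, 390, 420, 405, 300, 280, 440, 460, 430, 415]
uf = usage_factor(jr1_monthly, articles_published_online=120)
print(f"Usage factor: {uf:.1f}")
```

Note that both the numerator and the denominator depend on the 'specified period' chosen, which is one of the components the study identifies as needing to be settled before UFs can be compared across journals.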

Aims and objectives
The overall objective of this study was to determine whether the usage factor (UF) concept is a meaningful one, whether it will be practical to implement and whether it will provide additional insights into the value and quality of online journals. The study was conducted in two phases. In Phase 1, conducted by the author, Peter T Shepherd (Director, COUNTER), in-depth interviews were held with 29 prominent opinion makers from the STM author/editor, librarian and journal publisher communities, not only to explore their reaction to the usage factor in principle, but also to discuss how it might be implemented and used. Phase 2, conducted by Key Perspectives Ltd, consisted of a web-based survey of a larger cross-section of the academic author and librarian communities.

Phase 1. In-depth interviews
The 29 interviews conducted fell into the following categories:
■ Authors: 7
■ Librarians: 9
■ Publishers: 13
In this section the results of the seven main questions posed in the interviews are summarized and discussed.

How well do you feel served by non-citation measures of journal value and performance?
The essence of the responses to this question was that there are few, if any, such measures that are universal and comparable, which is one reason for the over-reliance on citation data. The development of alternative, usage-based measures would be very helpful. One currently available metric that is useful, at least from the perspective of some publishers, is the total number of articles published, which is also provided by ISI. This is used as a measure of market share (by journal and by publisher) and the trend over a period of time can be monitored.

Are you confident that COUNTER-compliant usage statistics are a reliable basis for assessing the value, popularity and status of a journal? (Librarians only)
Of the nine librarian respondents, five are confident that the COUNTER usage statistics are a reliable basis for assessing the value, popularity and status of a journal. The remaining four thought that they are not sufficiently reliable yet, but that COUNTER is going in the right direction.

Would journal usage factors be helpful to you in assessing the value, status and relevance of a journal?
All seven authors answered 'yes' to this question, although one author said that it would depend on how UF is calculated. All nine librarians also answered 'yes'. The response from publishers was less unanimous, with eight responding 'yes', two responding 'no' and three with mixed feelings.

Would you feel comfortable having journals ranked by usage as well as by citations? (Authors and publishers only)
All seven authors responded 'yes' to this question. The response from publishers was more mixed, with eight publishers saying 'yes' (several said 'yes, but …') and five saying 'no' (several said 'not unless …'). Several publishers were also concerned that intermediaries hosting their journals may be undercounting usage and depressing their UF. Their contracts with, for example, Ingenta, require usage statistics to be provided, but publishers do not yet enforce this rigorously.

Which organizations could fill a useful role in compiling usage factor data, benchmarking and commentary? (Librarians and publishers only)
There was a wide diversity of responses to this question and there is no existing organization with sufficient credibility and capability in the eyes of both publishers and librarians to fulfil this role. Several suggested that the mission of COUNTER could be expanded to fill such a role, possibly in partnership with another organization with complementary capabilities. The majority of publishers are willing, in principle, to co-operate with the right partner. Key issues are 'trust', 'independence' and 'cost-effectiveness'. Any organization aspiring to fill the consolidation/compilation role would have to meet these criteria to be acceptable to publishers.

Phase 2. Web-based survey
The web-based survey, conducted by Key Perspectives Ltd using online questionnaires, was designed to obtain feedback from a larger sample of librarians and authors.

a) Librarians
The main aim of this part of the survey was twofold: ■ to discover what librarians think about the measures that are currently used to evaluate scholarly journals in relation to purchase, retention or cancellation ■ to discover librarians' views on the potential value of a UF.
In total, 155 librarians participated. The librarians' questionnaire asked librarians to rank, in order of relative importance, a list of key factors known to influence, first, the evaluation of journals for potential purchase and, second, the evaluation of journals for retention or cancellation. The questions were presented in matched pairs, so that librarians were asked to rank the known factors first and then, having been introduced to the proposed UF formula, they were invited to re-rank the list including the UF.
The results, presented in Table 1, show that at the moment 'feedback from library users' is the most important consideration in the decision to purchase journals. Next in the list comes price, followed by the reputation or status of the publisher and then IF. When the UF is introduced to the mix, librarians ranked it second in order of importance. While one might not expect a UF to supplant user feedback, it is significant that UF is ranked ahead of price, IF and the reputation or status of the publisher.
When it comes to evaluating journals for retention or cancellation, librarians are now able to consider usage and cost per download statistics in addition to the factors listed previously. As the results in Table 2 indicate, feedback from library users remains the foremost consideration, but it is interesting to note that usage is ranked second in importance ahead of price and cost per download. IF and the reputation or status of the publisher appear to be relatively unimportant.
When UF is presented as an option, the re-ranked list in Table 2 shows that librarians perceive it to be important, ranking it third behind feedback from library users and usage. UF is ranked ahead of price and cost per download. It is noteworthy that librarians think a UF could be more important than IF.

b) Authors
The main aim of this part of the survey was twofold: ■ to discover what academic authors think about the measures that are currently used to assess the value of scholarly journals (notably IFs) ■ to gauge the potential for usage-based measures.
A total of 1,394 academic authors participated in the study. They have a number of factors to consider when deciding which journal to submit their work to for publication. The survey set out to understand where impact factor fits alongside other factors that are known to be important to authors. The results presented in Figure 1 show that a journal's reputation is the most important factor in an author's decision-making process. Authors want their work to be read by their peers so it is not surprising that a journal's readership profile ranks second overall in terms of relative importance. Clearly a journal's impact factor plays an important role in the majority of authors' deliberations about where to publish, but it appears to be a supporting rather than a lead role. The results indicate that authors discern a clear distinction between a journal's reputation and its impact factor. Finally, a journal's level of usage relative to other journals in the field is shown to be a significant factor. This recognition by academic authors of the importance of a journal's level of usage provides encouragement for the development of a usage-based quantitative measure.
Nearly half of academic authors believe a journal's IF to be a valid measure of its quality. The data presented in Figure 2 indicate that this endorsement is not overwhelming: whereas 47% of authors either strongly agree or agree that impact factor is a valid measure of quality, 24% either strongly disagree or disagree, and 25% take a neutral stance. Overall there is a higher level of agreement with the following statement: too much weight is given to journal IFs in the assessment of scholars' published work. Of academic authors, 62.5% either strongly agree or agree that this is the case, compared to just 13% who either strongly disagree or disagree. A further 19% had no particular opinion either way.
Authors were then asked the following question: Would you welcome the development of new quantitative measures to help assess the value of scholarly journals based upon verifiable data which describes the number of times articles from those journals have been downloaded? The pattern of responses, presented in Figure 3, is clearly positive. In response to the question, 70% of academic authors replied 'yes, definitely', or 'yes'.

Conclusions
A number of conclusions can be drawn from the results reported above:

Impact factor
IF, for all its faults, is entrenched, accepted and widely used. There is a strong desire on the part of authors, librarians and most publishers to develop a credible alternative to IF that will provide a more universal, quantitative, comparable measure of journal value. It is generally acknowledged that no such alternative currently exists, but that usage data could be the basis for such a measure in the future. 70% of authors surveyed would welcome a new, usage-based measure of the value of scholarly journals.

Confidence in the COUNTER usage statistics
While there is growing confidence among librarians in the reliability of the COUNTER usage statistics, two current weaknesses would have to be remedied before a COUNTER-based UF would have similar status to IF. First, the COUNTER usage statistics would have to be independently audited to ensure true comparability between publishers. (Auditing will commence in 2007.) Second, the number of COUNTER-compliant publishers, aggregators and other online journal hosts will have to increase significantly.

Usage factor
All authors and librarians interviewed thought that usage factor would be helpful in assessing the value, status and relevance of a journal. These results were confirmed by the much larger sample of authors and librarians in the web survey. The majority of the publishers also thought it would be useful, but their support would depend on their confidence in the basis for the UF calculation (Equation 1). Tests using real usage data will be required to establish the components in the UF calculation.
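To illustrate why the components of the UF calculation need to be established with real usage data, the hypothetical sketch below shows how sensitive the UF value is to the choice of 'specified period' for the article count in Equation 1. All figures and window definitions are invented for illustration.

```python
# Hypothetical illustration: the same journal's UF varies widely
# depending on which publication window supplies the denominator
# of Equation 1. All numbers below are invented.

total_requests = 4800  # JR1 full-text requests in one year (invented)

# Candidate denominators: articles published online in different windows.
windows = {
    "articles published in the same year": 120,
    "articles published in the last two years": 250,
    "all articles available online": 900,
}

for window, n_articles in windows.items():
    uf = total_requests / n_articles
    print(f"{window}: UF = {uf:.1f}")
```

Because the resulting values differ by more than an order of magnitude across plausible windows, an agreed international standard for the calculation, as discussed in the conclusions above, would be essential before UFs could be compared across journals or publishers.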

Ranking journals by UF
While the great majority of authors were in favour of ranking journals by UF, there was less unanimity among the publishers. Indeed the publisher responses, both positive and negative, tended to be qualified. The majority were positive, but need to be convinced that the UF calculation would be robust and fair. The minority who were negative appeared to accept that such rankings are going to happen in any event and would rather they were done by an organization that they trust. Librarians indicated that, if UF were available, it would become the second most important factor (after 'feedback from library users') in decisions on the purchase of new journals, while it would be the third most important factor (after 'feedback from library users' and 'usage') in retention/cancellation decisions.

Organizations that could compile and comment on UF data
There is no existing organization which commands the confidence of both librarians and publishers and has the capability to compile and comment on UF data. Librarians, on the whole, do not trust publisher-only organizations, and publishers, on the whole, do not trust librarian-only organizations, to fill this role. Indeed, it may require a partnership between organizations. The type of organization required will depend on the role to be filled. If, for example, publishers were to be responsible for the consolidation and calculation (audited) of UFs, a much smaller central UF organization would be required than if it were to be responsible for the consolidation of usage data, calculation of UFs and publication of UFs.

Willingness to publish UFs
The majority of publishers appear to be willing, in principle, to calculate and publish UFs for their journals, according to an agreed international standard, and appreciate that there would be benefits to them in doing so. Some publishers are more reluctant than others, but would participate if UF were defined and implemented in a way that is acceptable to the market. In summary, there is significant support, even among established publishers whose journals perform well in IF rankings, for the development and implementation of journal UFs. Having said that, this survey has brought into focus a number of structural questions that will have to be dealt with if journal UFs are to be credible.