Journals in the arts and humanities: their role in evaluation

This article is concerned with the role and evaluation of journals in the arts and humanities. It examines the distinctive character of journal publication in this area compared with the sciences, and explains the differences through the diversity of research outputs and the distinctiveness of citation practice. It draws on data about publishing habits in arts and humanities disciplines, showing that peer-reviewed journals are not the self-evident location of choice that they are in the sciences. Furthermore, it is very difficult to construct hierarchies of journal impact and quality, in part because of the quite different cultures of citation. Nevertheless, the search for proxies for the quality of research will continue, and various current projects engage in one way or another with journals. The article concludes by briefly looking at some of these.

My reflections on the relationship between arts and humanities research and journals begin with a painful episode some two and a half years ago. I was in my final months as Chief Executive of the Arts & Humanities Research Board (AHRB), overseeing the Board's transition to the full research council status that it now enjoys. I opened the Times Higher Education Supplement one morning to find a front page headline that read: "Journals 'top ten' sparks a rebellion." The story would not go away until another headline a few weeks later announced: "Research Board halts 'top-ten' list amid protest." In those few weeks I learned that the idea of ranking journals, a process entirely normal in the sciences and some of the social sciences, was a matter of huge sensitivity in the arts and humanities.
At the AHRB we had agreed with the Office of Science & Technology (OST) on how to help it deliver the Treasury's Public Service Agreement target for the international quality of UK research. Most research councils did this through citation data, reassuring government that UK science was second only to that of the United States. We persuaded the OST that, as citations did not work as a measure of quality in the arts and humanities (a point to which I will return), we should instead identify the ten best journals in the world in each discipline and calculate what proportion of the articles in those journals were by UK researchers. A research council had to offer some measure of the international quality of the research in its field, and the alternative of citation data would have been deeply misleading. The uproar that followed our search for what would count as the top ten journals in each discipline arose partly because we had not properly explained the very precise reasons for our request, but also because our research community feared the whole concept of using journals to evaluate research quality. Even though the data were being gathered only for the UK as a whole, with no concern for institutions, let alone individuals, we were told that this would affect the Research Assessment Exercise (RAE), promotions and academics' behaviour.
Why is it so difficult to use journals to evaluate the quality of research in the arts and humanities? One reason is the existence of so many other ways of putting research into the public domain. There are monographs, which in many humanities disciplines remain the most prestigious form of output. There are edited collections of articles by different authors, gathered together not in journals but in thematic books. In a number of subjects, above all literature and classics, the scholarly edition of a text is a major research output. In the creative and performing arts there are many practice-based outputs, such as musical compositions, designs, art exhibitions, novels, and performances in theatre, music and dance. And there are also, of course, articles in journals.

GEOFFREY CROSSICK
Warden, Goldsmiths, University of London
Look at the evidence from submissions to the 2001 RAE, in which people chose four pieces of work to signal their quality. Journal articles comprised 37% of outputs submitted to arts and humanities panels, books 52%, book chapters 3% and other outputs 9%. Compare the sciences, where 96% of submitted outputs were in journals, or engineering, where the figure was 78%. There exist in the arts and humanities many different ways of putting research into the public domain, and there is no clear hierarchy of esteem amongst them. This is very different from the science and technology disciplines, and also from a good number of the social sciences, such as economics and political science.
There are different reasons for looking to journal publication to evaluate a research domain, reasons that too often get conflated. These are activity, as an indication that research has taken place and has yielded published outputs; impact, evidence that others found the research relevant and useful; and finally the quality of the work that has been published. In the arts and humanities the last of these can only effectively be established through the judgment of peers. Many other disciplines have agreed proxies for the measurement of the quality and impact of research, primarily using citations and impact factors, but it is agreed across the arts and humanities that such proxies cannot command the authority they do elsewhere. Although quantitative and especially bibliometric data may be used to inform qualitative evaluation, it is agreed that they cannot be a substitute for it. The judgment of peers remains the core way to establish the quality of research outputs.
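For readers less familiar with the proxy that the sciences rely on: a journal's conventional two-year impact factor is the number of citations received in a given year to items the journal published in the previous two years, divided by the number of citable items it published in those two years. The sketch below illustrates only that arithmetic; the journal figures are invented for illustration and do not come from this article.

```python
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    """Two-year impact factor: citations in year Y to items published in
    Y-1 and Y-2, divided by the citable items published in Y-1 and Y-2."""
    if citable_items_prev_two_years == 0:
        raise ValueError("no citable items published in the window")
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical journal: 150 citations in 2006 to its 2004-05 articles,
# which numbered 120 citable items.
print(round(impact_factor(150, 120), 2))  # 1.25
```

The formula's dependence on a two-year citation window is precisely why, as the article argues, it travels so badly into fields where works from 30 or 40 years ago are still routinely cited.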
Why is it so difficult to use journals in the way they are used in so many other disciplines to measure the quality of research outputs? In addition to the diversity of outputs, which denies journals the uniquely privileged position they hold in most other disciplines, there is the very different character of citation behaviour in arts and humanities subjects. Knowledge and understanding are not cumulative in the way that they broadly are in the sciences, one consequence of which is that works from 30 or 40 years ago are regularly cited. Above all there is the character of critical discourse, arguing with others and positioning one's own thinking in relation to those with whom one disagrees, as a mode of research and argument. In that context, especially with the culture of the extended footnote, citation is not a clear sign of quality or influence. Many citations are used to shape an argument rather than to signal prior research on which one is building. Citation data in that intellectual culture cannot provide a proxy for quality.
Nonetheless, could it be that the best departments publish more in journals? The AHRB commissioned Evidence UK to look at the relationship. They gathered together all the departments in a discipline at a given RAE grade to see how important journal articles were amongst the submissions from those departments. What was the relationship between RAE grade and the proportion of journal articles in departmental submissions? There was no clear pattern across disciplines. In law there was actually an inverse relationship between RAE grade and the percentage of outputs submitted that were journal articles, steadily falling from 83% of outputs in departments graded 3b to the lowest figure of 43% in 5* departments. As far as law was concerned, the better the department in the RAE, the less likely it was to have its academics publishing in journals. In many other disciplines there was no clear pattern at all. Asian studies, for example, showed 29% for 5* departments, 55% for grade 4 departments and 43% for those with a grade 3b. History had a flat distribution with the five top grades having between 33% and 37% of outputs in journals. Music was an unusual case of a positive correlation, rising from 11% in 3b departments to 27% in those graded 5*, but even in top research departments only a small proportion of outputs were journal publications.
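The Evidence UK analysis described above amounts to pooling the departments in a discipline at each RAE grade and computing, for each grade, the share of submitted outputs that were journal articles. A minimal sketch of that computation follows; the department records are invented for illustration (here they mimic the inverse pattern found in law), not taken from the Evidence UK data.

```python
from collections import defaultdict

# Each record: (RAE grade, journal articles submitted, total outputs submitted).
# Figures are hypothetical; only the method mirrors the analysis described above.
departments = [
    ("5*", 12, 44), ("5*", 9, 40),
    ("4",  22, 40), ("4",  18, 40),
    ("3b", 33, 40), ("3b", 35, 44),
]

articles = defaultdict(int)   # journal articles per grade
totals = defaultdict(int)     # all submitted outputs per grade
for grade, journal_articles, total_outputs in departments:
    articles[grade] += journal_articles
    totals[grade] += total_outputs

for grade in ("5*", "4", "3b"):
    share = 100 * articles[grade] / totals[grade]
    print(f"{grade}: {share:.0f}% of outputs were journal articles")
```

Running this on real RAE submission data, discipline by discipline, is all that is needed to reproduce the kind of grade-by-grade comparison reported above.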
For various reasons, then, bibliometric approaches to quality that rested on journal publications would be unwise. And even if we wanted to use journal publication and citation analysis, the range of journals covered by the ISI (Institute for Scientific Information, part of the Thomson Group) is not extensive enough. Philosophy stands at the head of the list, with 52% of outputs returned to the 2001 RAE published in ISI journals; library & information management comes next at 40%. Most other subjects had no more than 20-29% of outputs published in ISI journals (English and French are at just 21%, for example). Many, such as Italian, theology or art & design, fall below even 20%.
Research outputs in the arts and humanities are therefore put into circulation in diverse ways appropriate to each discipline. That diversity must be seen as a strength, and is one of the reasons why research in the arts and humanities reaches wider academic and non-academic publics. Weighting just one of those forms of output, the academic journal, above others in quality evaluation would be misleading, and would create perverse incentives to publish in journals, with damaging consequences.
The evaluation of research quality, and the search for proxies for quality, will not go away, and probably should not. We need to know how good research is in departments, institutions and countries in relation to their comparators. I shall conclude by drawing attention to some current projects that engage with these questions. One, the long-running Humanities Indicators Project of the American Academy of Arts and Sciences (AAAS), interestingly makes no more than passing reference to bibliometrics, even in the AAAS's Making humanities count: the importance of data. 1 The Humanities Indicators Project gathers data on a range of variables to demonstrate the condition of the humanities. It is concerned more with such matters as staffing profile, doctoral students and research income than with publications. When it does consider publications, its concern is monographs rather than journals, and no quality indicators are sought.
As far as the Arts and Humanities Research Council (AHRC) is concerned, the delivery plan agreed between the AHRC and the successor body to the OST is unlikely to include attempts to measure the international quality of research. The issue of journals as a measure of research standing thus recedes. The AHRC has nevertheless participated in a large project to construct a European Reference Index for the Humanities (ERIH) that was launched in 2004 by the European Science Foundation. 2 The intention of the new venture is to construct lists of journals in each humanities discipline in order to 'help to identify excellence in humanities scholarship and [to] prove useful for the aggregate benchmarking of national research systems … in determining the international standing of the research activity in a given field in a country'. This statement of intent nonetheless insists that 'as they stand, the lists are not a bibliometric tool'. 3 Without such a statement the ERIH project would not have received co-operation from within the arts and humanities research world. The intention is, through an iterative consultative process, to categorize journals in each of fifteen disciplinary areas into three lists. The A list comprises journals regarded as high-ranking international, with B representing a standard international level, and C those deemed to be important local or regional journals. Against repeated expressions of anxiety the ERIH project leaders have insisted that the categorization is not meant to be hierarchical, arguing that their aim is to raise the visibility of journals and to strengthen within them the practice of peer review.
It is in reality hard to believe that the A, B and C lists will not be seen as a hierarchy of journals, and consultation in the UK has shown that fears of creating such a quality hierarchy are widespread. Academics worry that a categorization of journals will direct researchers' choice of journal for their articles, and will shape researcher behaviour in a competitive world of promotions and the RAE. There is a certain naiveté in these protests, because the quality of location for research outputs (choice of journal for an article, choice of publisher for a book, choice of gallery for an exhibition) already influences individuals' decisions and aspirations.
The most important project in which journals will once again be considered as a proxy for quality and impact is, of course, the development of metrics to be used in future RAEs. Although citations will play a part in the metrics being developed for disciplines in science, technology and medicine, it is agreed that the metrics to be used alongside light-touch peer review for RAE 2013 in the arts and humanities (and indeed in the social sciences, education and mathematics) will need to be different. How they will differ remains to be seen: the AHRC and HEFCE (Higher Education Funding Council for England) expert group that worked on this question last year proposed using various outputs in a basket of metrics, but did not propose proxies for the quality of those outputs. 4 Will the new National Public Sector Bibliometrics Consortium (envisaged by what was formerly the OST and more recently the Office of Science & Innovation, the research councils and the higher education funding councils) help take these issues forward? If it comes about, its goal will be to contract on a medium-term basis with an outside body to generate both advice and bibliometric data in these areas. The driver behind the proposal is the need for any data used to allocate funding to be robust and independent enough to command confidence.
So, does anyone want to carry out bibliometric analysis on arts and humanities journals, given the considerations that I have briefly sketched here? There seems remarkably little interest in the UK or in other countries, and yet looming for us all is RAE 2013. We are required to build robust measures for the evaluation of research in the arts and humanities, albeit allied with some vague notion of light-touch peer review. Can we do that without some form of bibliometrics? Probably not. But, in the light of the position that I have outlined here, we are left asking: if it cannot be done without bibliometrics in the arts and humanities, can it be done with them?