On “Academic Analytics”

Statement on “Academic Analytics” and Research Metrics

The following statement was approved by the Executive Committee of the AAUP Council at its March meeting:

Recently a number of colleges and universities have contracted with independent private companies to provide allegedly “objective” quantitative information about faculty scholarly “productivity.” Among the most prominent of these “metrics providers” is a firm called Academic Analytics, which sells a subscription‐based software package that, the company claims, can assist faculty and universities by displaying ostensibly objective data about faculty research.

Academic Analytics says this about itself: “Academic Analytics is a provider of high‐quality, custom business intelligence data and solutions for research universities in the United States and the United Kingdom. Our mission is to provide universities and university systems with objective data that administrators can use to support the strategic decision‐making process as well as a method for benchmarking in comparison to other institutions. Rooted in academia, we help universities identify their strengths and areas where improvements can be made.”

The company now claims 385 institutional customers in the U.S. and abroad, representing about 270,000 faculty members in 9,000 Ph.D. programs and 10,000 departments.

Some of the firm’s metrics lack any qualitative dimension: per capita production of books and articles, for example. Other figures, such as per capita citations and “awards” per faculty member, arguably introduce a qualitative dimension, but they seem to produce puzzling results. One philosopher found, for example, that the company’s listing of the “top ten” philosophy departments appeared almost “random.”1

One institution where Academic Analytics has become controversial is Rutgers University, which in 2013 signed a four‐year, $492,500 contract with the firm. The university claims the data it has received is valuable, but faculty members say the company’s reports are inaccurate and worry that a focus on research productivity data will damage the quality of teaching. One significant problem is that the contract restricts access to the firm’s portal to those who hold “a position that involves [strategic] decision making and evaluation of productivity,” as approved by the company. The contract also limits what information may be distributed or shared. Hence faculty members have been largely unable to check the accuracy of the firm’s data concerning their own scholarship, and the few who have managed to do so claim it is incorrect.

In response, in December the faculty of the university’s School of Arts and Sciences voted overwhelmingly to forbid use of the firm’s data in tenure and promotion decisions or “in decisions affecting the composition of the faculty, graduate and undergraduate curricula, and grant‐writing.” The resolution also called on the university to distribute personal data collected by the firm to each faculty member of the school.

Most faculty members have some direct experience of metrics used to assess performance. Metric data have been gathered on almost all activities scholars conduct, in both teaching and research, and may be aggregated at different levels to assess individuals, departments, and whole institutions. In some contexts they may offer a useful supplementary instrument of performance assessment: journal editors, for example, may use them to gauge a journal’s impact and its standing relative to comparable journals.

There is, however, good reason to doubt the utility of such metrics in tenure and promotion decisions or in judgments affecting hiring, compensation, or working conditions. A 2015 study by the Higher Education Funding Council for England,2 where use of research metrics is now required at public institutions, found “considerable skepticism among researchers, universities, representative bodies and learned societies about the broader use of metrics in research assessment and management.” The study found that indicators can be misused or “gamed”; that the data infrastructure underpinning the use of metrics and information about research remains fragmented, with insufficient interoperability between systems; and that “it is not currently feasible to assess research outputs or impacts . . . using quantitative indicators alone.”

The study concluded that “carefully selected indicators can complement decision‐making,” but expert judgment and qualitative measures that respect research diversity remain essential elements of a representative process of peer review, which should remain “the primary basis for evaluating research outputs, proposals and individuals.”

We agree. Measuring faculty “productivity” with an exclusive or excessive emphasis on quantitative measures of research output will inevitably fail to take adequate account of the variety and totality of scholarly accomplishments. In a 1993 report, “The Work of Faculty: Expectations, Priorities, and Rewards,” the AAUP’s Committee on Teaching, Research and Publication concluded:

External mandates of workload and productivity are not an effective or desirable means of enhancing the quality or cost‐effectiveness of higher education. We believe that nothing of any value, insofar as the quality of higher education is at issue, is likely to result from extramural efforts to define workload or to determine an appropriate mixture among types of professional activity, whether we refer to individuals or to institutions.

With respect to research, the report noted:

By enlarging the perspective through which we judge scholarly achievement, we more accurately define the many ways in which intellectual inquiry shapes the path of scholarly pursuits and of our complex and interrelated roles as teachers and researchers in a multitude of institutional and disciplinary settings.

Colleges and universities and their faculty members should exercise extreme caution in deciding whether to subscribe to external sources of data like Academic Analytics, and they should always refrain from relying on such data in tenure, promotion, compensation, or hiring decisions. Where such data are made available, they must remain subordinate to a process of effective peer review conducted in accordance with longstanding principles of academic freedom and shared governance. In all cases, individual faculty members must be given access to, and the opportunity to correct, any data or information, however generated, that may be used by those making decisions affecting their employment status.

1 “Questions About Academic Analytics,” Leiter Reports: A Philosophy Blog, http://leiterreports.typepad.com/blog/2014/04/questions-about-academic-analytics.html

2 “The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management,” http://www.hefce.ac.uk/pubs/rereports/Year/2015/metrictide