SciCom – Are Publication Metrics a Mistake?



Discussing Publication Metrics

Hi Reader, I am sure you are aware of the rising pressure to publish papers.

Inappropriate use of the metrics that assess publishing success may contribute to a misguided focus within the scientific community.

Let’s discuss some important nuances of the metrics we introduced last time to help you develop your own perspective.

Let's get started because there are many angles we have to cover:


Do Publication Metrics Make Sense?

There is no final answer to that question.
It’s like asking whether computing a geometric mean makes sense.

Each of these metrics provides a certain kind of information. What matters more is how we use that information.

Fundamentally, we can say that we need to:

  • Be aware of what a specific metric actually measures
  • Use several metrics in combination to arrive at a more comprehensive picture

Yet even a combination of several metrics cannot predict the future performance of a journal or scientist with absolute certainty.

Indeed, even in high-impact journals, only a fraction of papers receive an extraordinary number of citations. Having a big name or publishing in a big journal is by no means a guarantee that you will be cited.

The Importance of Metrics

Without going into too much detail, we can say that these metrics serve an important purpose.

Measuring how often work is cited is a surrogate for measuring how aware others are of that work and how relevant it is to their own.

Having some sort of metric to differentiate impact is important - for hiring decisions as well as funding allocation.

Pretending that all work is equally valuable is simply an illusion.

Science needs resources and money. Without performance-based indicators we would assign those resources more randomly, which would not be economically sensible.

It Needs Balance

However, since few other performance indicators are available, one can argue that the ones we do have are often overused.

Publishing decisions are often made purely based on the impact factor (IF) without considering other important aspects.

Similarly, hiring decisions based on publication count, the journals someone published in, and the h-index underestimate the importance of innovative thinking that is initially cited less often.
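To make concrete what the h-index captures and what it misses, here is a minimal sketch of its calculation; the citation counts are invented for illustration:

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# A researcher with one breakthrough paper and little else scores
# almost the same as one with no highly cited work at all:
print(h_index([120, 2, 1]))       # -> 2
print(h_index([10, 9, 6, 5, 3]))  # -> 4
```

Note how the single 120-citation paper barely moves the index: exactly the kind of innovative, initially rare output that such metrics underestimate.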

In essence, the myopic perspective of too many actors creates a vicious cycle in which scientists start doing science in order to publish, rather than to advance our understanding of the world around us.

Two Major Problems

But even when one tries to be balanced, currently used metrics have two major issues.

No matter how many metrics one uses, they do not provide a systematic assessment.

Each of the metrics we discussed focuses on a certain time frame and uses a specific methodology. But they only partially complement each other.

For a systematic analysis, examining a metric like the impact factor from a two-, five-, ten-, and twenty-year perspective would be a start. Then assessing variation based on self-citation and the reputation of citing journals would seem useful.

In other words, we are able to look at more than just one puzzle piece, but never at the complete picture.
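As a sketch of what such a multi-window analysis could look like, the classic two-year impact factor can be generalized to an arbitrary window; the data structures and numbers below are hypothetical:

```python
def impact_factor(citations, items, census_year, window):
    """Citations received in census_year by items published in the preceding
    `window` years, divided by the number of citable items from those years."""
    years = range(census_year - window, census_year)
    received = sum(citations.get(census_year, {}).get(y, 0) for y in years)
    published = sum(items.get(y, 0) for y in years)
    return received / published if published else 0.0

# citations[census_year][publication_year] -> citation count (hypothetical)
citations = {2024: {2019: 60, 2020: 100, 2021: 150, 2022: 300, 2023: 200}}
# items[publication_year] -> number of citable items (hypothetical)
items = {2019: 70, 2020: 80, 2021: 90, 2022: 100, 2023: 100}

print(impact_factor(citations, items, 2024, 2))  # two-year window -> 2.5
print(impact_factor(citations, items, 2024, 5))  # five-year window
```

Even in this toy example the two windows disagree, which is exactly why a single-window figure tells only part of the story.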

For individual researchers this becomes even more important:

Younger scientists are generally disadvantaged in this regard. They have had less time to publish, their work has had less time to be cited, and not being the last author means they appear on fewer papers.

The other essential problem is that citations and publications are our only proxies for scientific impact.

In practice, this skews the picture: review articles, for example, commonly receive more citations than original research while making a very different kind of contribution.

Moreover, self-citation, citation rings, and predatory publishing can be exploited to artificially boost these metrics.

Do We Disincentivize Advances?

We could even raise a more fundamental question - whether citations and the number of studies published are really key factors we should consider when assessing the value of a journal or the contribution of a scientist.

We completely ignore the novelty of approaches, improvements in methods, or real-world applications.

This is especially problematic since (reasonable) concerns of the community toward truly novel or interdisciplinary approaches often need years, if not decades, to be accepted.

Although only a few institutions require it, applicants often include JIFs to denote the “value of their publications” - a practice frequently encouraged by their institutions. However, some argue that chasing high-profile publications pushes scientists into highly competitive topics and may encourage cutting corners or overstating results. One group of authors claims the deeper issue is that short-term metrics (like JIFs and citation counts) discourage risky, innovative research. To examine this, they analyzed over 660,000 papers from 2001 in Web of Science, using an indicator of “riskiness” based on whether a paper cited unusual new combinations of journals in its references.

The question we need to ask is: what do we want to reward?

Especially senior scientists and those making decisions in universities and funding bodies carry the responsibility here.

Addressing Commercialization

Of course, we often like to pretend that science is an objective endeavor - but it's simply not.

Neither in what we decide to research nor in how we share these findings.

Therefore, we also have to consider how large the publication market is and what drives its revenues.

(Commercial) journals make money through subscriptions and publishing fees.

Their priorities are reach and quantity - this is simply business.

At the same time, many scientists also accept publication-based metrics as the main determinant of their careers, thereby feeding into the system.

All in all, it may appear surprising to an outside observer that the scientific community has been so inefficient at finding truly robust metrics to evaluate the value of its research.

One could even get the impression that commercial publishers were able to integrate certain metrics for evaluating their journals due to a lack of interest from the scientific community. Just consider the comparatively low competition in publishing only a few decades ago.

An Important Nuance

Still, it is unlikely that a single individual can establish a meaningful new metric. On the one hand, it is the community that drives adoption.

On the other, calculating any sort of advanced metric is not easy, given the huge amount of data one needs to process.

While details such as the year of analysis are one variable, it also matters which database the analysis draws on.

Just try to find out how many publications and citations a senior scientist you know has. Google Scholar sometimes provides clearly different numbers from, for example, ResearchGate.

And so we have to accept that SJR is based on Scopus data, while Eigenfactor is based on Web of Science data.

A Final Word

We publish a lot, and therefore we too struggle with technical challenges, as every large system does.

The importance of publishing metrics will probably not wane anytime soon.

You can conduct analyses of your own (for example, tracing the number of publications versus citations over time in a histogram, or grouping them by research question) - but be prepared that you will still be judged by conventional metrics.
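A starting point for such a self-analysis, assuming you can export a list of (publication year, citation count) pairs from a database of your choice (the records below are hypothetical):

```python
from collections import defaultdict

# Hypothetical export: one (publication_year, citation_count) pair per paper.
papers = [(2018, 12), (2018, 3), (2019, 25), (2020, 7), (2020, 0), (2021, 4)]

def per_year(papers):
    """Count publications and sum citations for each publication year."""
    pubs, cites = defaultdict(int), defaultdict(int)
    for year, count in papers:
        pubs[year] += 1
        cites[year] += count
    return {year: (pubs[year], cites[year]) for year in sorted(pubs)}

for year, (n_pubs, n_cites) in per_year(papers).items():
    print(f"{year}: {n_pubs} papers, {n_cites} citations")
```

From here, a per-year bar chart is one call in any plotting library; just remember that the absolute numbers will differ between Google Scholar, Scopus, and Web of Science, as discussed above.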

But now - do these metrics make sense?

I would argue that metrics generally do, but we need to remember that we rely on a very (perhaps oddly) concrete set of assessments.

However, what I think might not matter as much as what you think.

Make up your own mind and consider how you let these metrics influence your decisions.


Edited by Patrick Penndorf
Connection@ReAdvance.com
Lutherstraße 159, 07743, Jena, Thuringia, Germany
