Sharing how we can communicate our science effectively, whether in posters, papers or on social media!
Understanding Publication Metrics
Published 16 days ago • 7 min read
Essential Publication Metrics
Hi Reader, we commonly communicate our science by publishing it.
But to know where to publish, we often look at the impact factor of a journal.
Also, whether you get a good postdoc position or become a professor partially depends on your publication record or h-index.
Still, what are these metrics, and what do they measure?
How Do We Measure Reach?
In the past few years, publication pressure has significantly shaped science.
Nature & IUBMB Life: Publishers like to use journal metrics to promote their outlets.
Wellcome Trust UK and NIH: Funding bodies make their decisions partially based on previous publication records.
University of Cambridge and Stanford University: University panels decide, based on publication records, who gets an advanced position such as tenure track.
Max Planck Society and RIKEN: Institutions promote their work and assign funding based on research output.
In other words, publication metrics guide decisions in all domains of science.
Just consider that publishers such as Wiley, Elsevier, and Nature have estimated annual revenues of more than a billion dollars each.
But how do we measure the success of our publications? Let’s review some of the most common metrics and understand how they come together:
Metrics to Assess Journals
Despite all its shortcomings, we most commonly use citations as the main indicator to assess the importance or impact of scientific work.
And the following numbers are there to give you an idea of the impact, prestige, or importance of a journal.
Impact Factor (IF)
The number of citations received in a given year to items published in the previous two years, divided by the number of citable items published in those two years.
Widely recognized
Can be skewed by a few highly cited papers, which happens fairly often
Doesn’t provide information about the citations of a specific paper
This editorial from Nature Materials addresses criticisms of the journal impact factor such as the short two-year citation window, and distortion by highly cited papers. It contends these concerns are overstated, presenting data showing that the 2011 impact factor correlates well with the five-year median citations of primary research papers, a measure robust to outliers and excluding non-primary content.
Indeed, many scientists estimate the impact of their work based on the IF of the journal in which it's published.
However, a journal impact factor (JIF) provides information about the average success of a journal, not any single piece of work published there, as citation counts often vary widely.
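The arithmetic behind the impact factor is simple. Here is a minimal sketch for a hypothetical journal; all citation and publication counts are invented for illustration.

```python
# Hedged sketch: the 2024 impact factor of a hypothetical journal.
# IF = citations received in year Y to items published in years Y-1 and Y-2,
# divided by the number of citable items published in those two years.

citations_2024_to = {2022: 310, 2023: 250}   # citations received in 2024, by cited year
citable_items = {2022: 120, 2023: 104}       # citable items published per year

impact_factor = sum(citations_2024_to.values()) / sum(citable_items.values())
print(round(impact_factor, 2))  # 560 / 224 = 2.5
```

Note that the numerator pools all citations across papers, which is exactly why a handful of blockbuster papers can inflate the average.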
CiteScore
The average number of citations received by a journal for all documents published over the previous four years.
Includes several document types: Papers, Reviews, Conference papers, Book chapters, Editorials, Letters, Errata/corrections
Published by Elsevier (Scopus)
Potentially punishes outlets with many “non-paper” publications that are less commonly cited (such as Nature or Lancet)
In one of their blogs, Eigenfactor conducted a rough analysis of CiteScore results, because when Elsevier launched CiteScore (via Scopus), it was positioned as an alternative to Clarivate's Journal Impact Factor.
SCImago Journal Rank (SJR)
The SCImago Journal Rank (SJR) indicator is calculated by dividing the total weighted citations a journal receives by the number of citable publications during the previous three years.
Weighted means that influence is quantified through the prestige of the citing journals, as determined by the SJR algorithm; citations from highly ranked journals contribute more.
In SJR, citations are weighted using a PageRank-style algorithm in which each journal passes a share of its prestige to the journals it cites. PageRank was originally developed by Larry Page and Sergey Brin at Google. As shown in the graphic above, this weighting leads to clear differences when comparing SJR rankings with rankings based on the h-index (taken from Wikipedia), which measures the largest number of papers that have each received at least that same number of citations (more on this below).
Less sensitive to citation manipulation (e.g., self-citation)
Technically comparatively complex to calculate
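To make the "prestige transfer" idea concrete, here is a toy iteration in the spirit of SJR/PageRank. This is not the real SJR formula, which additionally uses a damping factor, a three-year citation window, and size normalization; the citation matrix is invented.

```python
import numpy as np

# Toy prestige iteration (PageRank-style), NOT the actual SJR algorithm.
# C[i][j] = citations from journal i to journal j (made-up numbers).
C = np.array([[0, 5, 1],
              [3, 0, 2],
              [4, 6, 0]], dtype=float)

# Each journal distributes its prestige across the journals it cites.
transfer = C / C.sum(axis=1, keepdims=True)

prestige = np.full(3, 1 / 3)         # start with equal prestige
for _ in range(100):                 # iterate until the scores stabilize
    prestige = prestige @ transfer   # prestige flows along citation links
    prestige /= prestige.sum()       # keep scores normalized

print(np.round(prestige, 3))
```

The key property: a citation from a high-prestige journal moves more weight than one from a low-prestige journal, which is what makes the score harder to game through self-citation.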
Eigenfactor Score
Eigenfactor estimates how important a journal is within the whole citation network by weighting citations according to the prestige of the citing sources based on the previous 5 years.
> Eigenfactor is therefore similar to the SJR, but the latter divides by the number of articles and is therefore size-independent.
Eigenfactor is size-dependent: big journals score higher because they publish many papers and therefore accumulate more total citations, even if their average paper is only moderately cited.
The graphic on the left stems from eigenfactor.org, where you can learn more about their work. The graphic on the right, from a NewsRx blog, indicates that the Eigenfactor correlates clearly with the total number of citations.
Derived from Eigenfactor is the Article Influence Score measuring the average influence per article (similar to SJR)
Excludes journal self-citations
Technically complex to calculate
Article-Level Metrics (Altmetrics)
Online attention to individual papers.
Includes: Social media mentions, News coverage, Policy citations, Downloads
Common indicator: Altmetric Attention Score
“The amount of each color in the donut will change depending on which sources a research output has received attention from,” according to altmetric.com. The donut on the right has received a lot of mainstream media coverage. The Altmetric Attention Score is an automatically calculated weighted measure of the attention a research output receives. It increases with the volume of mentions (counting only one per person per source), assigns different weights depending on the source type (e.g., news counts more than blogs or tweets), and adjusts for the influence and behavior of the authors mentioning the work.
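The "weighted measure" logic can be illustrated with a toy weighted sum. The weights below are invented and do NOT reproduce Altmetric's proprietary formula, which additionally adjusts for the reach and behavior of the accounts mentioning the work.

```python
# Illustrative weighted attention score; weights are assumptions, not
# Altmetric's actual values.
weights = {"news": 8, "blog": 5, "tweet": 0.25}   # assumed source-type weights
mentions = {"news": 3, "blog": 2, "tweet": 40}    # hypothetical mention counts

score = sum(weights[src] * n for src, n in mentions.items())
print(score)  # 3*8 + 2*5 + 40*0.25 = 44.0
```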
Important to Note
We couldn’t dive into all the details and there are still more metrics, such as the Source Normalized Impact per Paper (SNIP), which indicates the journal citation impact normalized for field citation practices, allowing for cross-field journal comparisons.
However, such normalized metrics are mostly used by librarians and institutions, precisely because raw citation metrics are strongly field-dependent.
Indeed, many "low scoring" journals have a quality and visibility problem. However, please remember: high metrics don’t necessarily mean the best choice.
Publishing in renowned journals often goes along with much longer editorial processes, strict selection criteria, and higher APCs for open access pieces.
Metrics to Assess Individuals
The following metrics should summarize the impact of a single scientist’s work.
Although they are not as widely discussed, they are, for example, crucial for securing more senior positions in academia:
h-Index (H factor)
Wikipedia states: “The h-index is defined as the maximum value of h such that the given author (or journal) has published at least h papers that have each been cited at least h times.”
Honestly, h-indices are intuitive but hard to put into words.
In essence, a researcher has an index h if they have h papers cited at least h times each.
5 publications with citation counts of 10, 8, 5, 4, 3 → h-index = 4
5 publications with citation counts of 25, 8, 5, 3, 3 → h-index = 3
5 publications with citation counts of 9, 7, 6, 2, 1 → h-index = 3
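The rule above takes only a few lines of Python; the citation lists match the three examples.

```python
# Minimal h-index computation: sort citations in descending order and find the
# largest rank at which the citation count still reaches that rank.
def h_index(citations):
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4
print(h_index([25, 8, 5, 3, 3]))  # 3
print(h_index([9, 7, 6, 2, 1]))   # 3
```

Note how the 25-citation paper in the second list does not raise the h-index at all, which is exactly the "ignores very highly cited papers" weakness mentioned below.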
Resistant to outliers but ignores very highly cited papers.
Favors senior researchers with more publications.
Does not account for author position; first author, last author, and middle author receive the same credit.
This is data from Minasny et al. showing the relationship between the scientific age (t) of 340 soil researchers and the h index (Web of Science data) on the left and the relationship between the number of citations and the h index on the right (Black dots are data from Web of Science, green squares are from Scopus, and blue triangles are from Google Scholar.) For more on the h index, read this editorial.
g-index
Largest number g such that the top g papers (if one rank-orders papers according to number of citations) have at least g² citations combined.
Hence, a g-index of 10 indicates that the top 10 publications of an author have been cited at least 100 times.
A g-index of 20 indicates that the top 20 publications of an author have been cited at least 400 times combined. One basically searches for the highest g across all publications of an author.
As you might have noticed, the g-index is an alternative to the older h-index that gives more weight to highly cited papers. The h-index only requires a minimum of h citations for the least-cited article in the set and thus ignores the citation counts of very highly cited publications. Interestingly, the g-index turns out mathematically to be the maximum h-index reachable if a fixed number of citations could be distributed freely over a fixed number of publications.
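A short sketch makes the difference from the h-index visible, reusing the citation lists from the h-index examples above.

```python
# g-index: the largest g such that the top g papers have at least g^2
# citations combined.
def g_index(citations):
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c                 # cumulative citations of the top `rank` papers
        if total >= rank * rank:   # compare against rank^2
            g = rank
    return g

print(g_index([10, 8, 5, 4, 3]))  # 5 (h-index of this list is 4)
print(g_index([25, 8, 5, 3, 3]))  # 5 (h-index of this list is only 3)
```

The second list shows the intended effect: the single 25-citation paper lifts the g-index well above the h-index.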
Field-Weighted Citation Impact (FWCI)
FWCI compares how often a researcher’s publications are cited relative to the world average for similar publications.
> FWCI = Actual citations received divided by expected citations for similar publications.
“Similar publications” means same field/subject area, publication year, document type (article, review, etc.)
You do this for each paper and then average all individual FWCIs into one number.
FWCI = 1.0 → world average “impact”
FWCI = 2.0 → cited twice the world average
FWCI = 0.5 → cited half the world average
Can be skewed by one blockbuster paper if publication numbers are small
Computing the benchmark (the average citations for all comparable papers in the database) is technically laborious
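The averaging step itself is straightforward once the benchmarks exist; the hard part is the benchmark database. In this sketch the expected-citation values are made up.

```python
# Hedged FWCI sketch: per-paper FWCI = actual / expected citations, where
# "expected" is the benchmark for similar publications (same field, year,
# document type). The expected values below are invented for illustration.
papers = [
    {"citations": 12, "expected": 6.0},
    {"citations": 3,  "expected": 6.0},
    {"citations": 9,  "expected": 4.5},
]

fwci_per_paper = [p["citations"] / p["expected"] for p in papers]
author_fwci = sum(fwci_per_paper) / len(fwci_per_paper)
print(round(author_fwci, 2))  # (2.0 + 0.5 + 2.0) / 3 = 1.5
```

With only three papers, swapping one blockbuster in or out would move the average drastically, which is the small-sample weakness noted above.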
m-index (also called m-quotient)
Equals h-index divided by the years since first publication.
As the h-index favors senior researchers who have had more time to accumulate citations, the m-index corrects for this by measuring average annual impact
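The correction is a single division; the numbers here are illustrative.

```python
# m-index = h-index / years since first publication (hypothetical researcher).
h = 12
years_active = 2024 - 2016   # first paper assumed published in 2016
m_index = h / years_active
print(m_index)  # 12 / 8 = 1.5
```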
i10-index
Number of publications with ≥ 10 citations.
Provider: Google Scholar.
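This one is just a threshold count, shown here with a hypothetical citation record.

```python
# i10-index: number of publications with at least 10 citations.
citations = [45, 22, 11, 10, 9, 4, 0]   # hypothetical citation counts
i10 = sum(1 for c in citations if c >= 10)
print(i10)  # 4
```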
Citation Percentiles / Top-X% Papers
Essentially reports the share of a researcher's papers that fall within the top 10% most cited in their field
Many systems (especially Clarivate and Elsevier analytics) report this metric.
Total Citations
Sum of all citations to a researcher’s work
Important to Note
Once again, we could also talk about Mean Normalized Citation Scores (MNCS), v-indices, or the Relative Citation Ratio (RCR), which is used in the U.S. NIH ecosystem and compares a paper’s citation rate to its co-citation network (papers cited alongside it).
This data comes from Gingras et al., who analyzed Quebec university affiliates, including 6,388 professors and researchers who had published at least one paper over the eight-year period (2000–2007).
With all of these metrics, we have to consider that we often get a summary of an author’s work.
Especially for junior scientists with fewer than 10 publications, many metrics such as FWCI are not stable.
That means year-to-year changes can be dramatic, also because their work is often published more recently. One normally needs a body of 30–50 publications for a more robust assessment.
I intentionally did not include average or expected values for these metrics.
On the one hand, they differ by field, decade, and expectations. On the other hand, they can be misleading, since publications and citations often come unevenly.
Key Take-Aways
Several metrics are used to evaluate journals and scientists, but a few of them, such as the impact factor and the h-index, dominate in practice.
Each serves a different purpose and is based on specific assumptions about what matters to assess (e.g., timeframes).
There is also not a single established metric I know of that would try to estimate translatability or innovativeness of your work.
However, it is important to have a general understanding of how these metrics function in order to prepare yourself if you plan to pursue an academic career.