
Indicators are metrics that provide summarized information to compare different activities. In bibliometrics, they are used to measure and evaluate the impact of researchers, institutions and journals.


Basic indicators

Examples of basic indicators are:

  • Number of publications and citations per researcher
  • Number of citations per publication
  • Number of self-citations
  • Number of uncited articles

Keep in mind that these indicators should not be used for comparisons between different subject areas!
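As an illustration, here is a minimal Python sketch of how these basic indicators could be computed; the per-publication record layout (citation and self-citation counts) is a hypothetical input, not a real database schema:

```python
def basic_indicators(pubs):
    """Compute basic indicators for one researcher's publication list.

    pubs: list of dicts, one per publication, e.g.
          {"citations": 12, "self_citations": 2}  (hypothetical layout)
    """
    n = len(pubs)
    total = sum(p["citations"] for p in pubs)
    return {
        "publications": n,
        "citations": total,
        "citations_per_publication": total / n if n else 0.0,
        "self_citations": sum(p["self_citations"] for p in pubs),
        "uncited_publications": sum(1 for p in pubs if p["citations"] == 0),
    }

print(basic_indicators([
    {"citations": 12, "self_citations": 2},
    {"citations": 0, "self_citations": 0},
    {"citations": 5, "self_citations": 1},
]))
# {'publications': 3, 'citations': 17, 'citations_per_publication': 5.67,
#  'self_citations': 3, 'uncited_publications': 1}
```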

Citation analysis primarily shows visibility, which varies with context and can reveal structures within a subject area. Citation rates differ between subject areas and are affected by the form of publication, the target group and the level of ambition, and citation patterns can reflect relationships between researchers and departments. The time it takes for publications to reach their citation peak also varies between subjects. For a sound citation analysis, self-citations should be removed and the comparison should be made with similar publications.

H-index

The H-index is a metric that emphasizes both productivity and impact through citations.

In short, an h-index of 10 means that a person has co-authored at least 10 articles that have each been cited at least 10 times, for a total of at least 100 citations; this may be typical of an associate professor or newly appointed professor (Wikipedia, 2023).
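The definition translates directly into a short calculation. Below is a minimal sketch in Python, assuming the citation counts per article are already available as a list:

```python
def h_index(citation_counts):
    """Largest h such that h articles each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the rank-th most-cited article still has >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four articles have at least 4 citations each
```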

The h-index can be calculated from various databases, such as Web of Science, Scopus and Google Scholar, which differ in their coverage of the scientific literature.

The h-index depends on both research area and time: comparisons should only be made within the same subject area, and should take into account how long each researcher has been active.

Structural indicators

These metrics provide insights into publication patterns and can often be presented as maps of co-authorship, citations between researchers, subject areas and co-publication with other organisations and countries.

Co-citation analysis is a method that investigates relationships between journals, publications or researchers by analysing which works are cited together in later publications. A co-citation link arises when two publications appear together in the reference list of a later publication. By forming clusters or groups after such an analysis, one can gain insight into the intellectual base of a research area, which changes over time depending on how the cited publications are used. It is a static method that helps to identify commonalities in content through the references shared between publications.
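As an illustration of the mechanics (independent of any particular tool), the sketch below counts how often pairs of works appear together in the reference lists of later publications; such pair counts are the raw material for the clusters described above:

```python
from collections import Counter
from itertools import combinations

def cocitation_counts(reference_lists):
    """reference_lists: one set of cited works per citing publication."""
    pairs = Counter()
    for refs in reference_lists:
        # count every unordered pair of works cited together
        for a, b in combinations(sorted(refs), 2):
            pairs[(a, b)] += 1
    return pairs

refs = [{"Garfield 1955", "Small 1973"},
        {"Garfield 1955", "Small 1973", "Hirsch 2005"}]
print(cocitation_counts(refs).most_common(1))
# [(('Garfield 1955', 'Small 1973'), 2)]
```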

To visualize these indicators, various visualization tools are used, of which VOSviewer is the best known.

Indicators for journals

There are different ranking systems for journals. It is important to be aware of these different indicators and their context when evaluating and comparing research and publications.

Journal Impact Factor (JIF)

This is the first and most widely used bibliometric indicator at the journal level. It measures the average number of citations in a year for articles published in the previous two years. JIF has been calculated annually since 1975 and is indexed in the Journal Citation Reports database with data from Web of Science and its predecessors.
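In formula form, the JIF published for a year Y is:

```latex
\mathrm{JIF}_{Y} =
  \frac{\text{citations received in } Y \text{ by items published in } Y-1 \text{ and } Y-2}
       {\text{number of citable items published in } Y-1 \text{ and } Y-2}
```

For example, a journal whose 100 citable items from 2021–2022 received 250 citations in 2023 has a 2023 JIF of 2.5.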

SCImago Journal Rank (SJR)

This bibliometric indicator is calculated from articles and citations in the Scopus database. SJR is calculated over a three-year window and limits the influence of self-citations.
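SJR weights each citation by the prestige of the citing journal, in the spirit of Google's PageRank. The sketch below shows only a generic PageRank-style iteration over a journal citation matrix; SCImago's actual formula adds further steps, such as the three-year window, size normalization and the cap on self-citations:

```python
import numpy as np

def prestige_scores(C, damping=0.85, iterations=100):
    """C[i, j] = citations from journal i to journal j.

    Returns one prestige score per journal: a citation from a
    prestigious journal counts for more than one from an obscure one.
    Assumes every journal gives at least one citation.
    """
    n = C.shape[0]
    T = C / C.sum(axis=1, keepdims=True)  # share of each journal's outgoing citations
    scores = np.full(n, 1.0 / n)
    for _ in range(iterations):
        scores = (1 - damping) / n + damping * (T.T @ scores)
    return scores

C = np.array([[0, 3, 1],
              [2, 0, 4],
              [1, 1, 0]], dtype=float)
print(prestige_scores(C))  # higher score = more prestigious
```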

Source Normalized Impact per Paper (SNIP)

SNIP takes differences in citation practices into account by comparing against the total number of citations in the citing journals. An individual citation is given a higher value in subject areas where citations are less common, and vice versa. SNIP thus accounts for the specific context of the subject area and provides a more nuanced picture of a journal's impact.
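In simplified form (following Moed's original 2010 definition, and omitting details such as the relative normalization of the citation potential), SNIP divides the journal's raw impact per paper by the citation potential of its field:

```latex
\mathrm{SNIP} = \frac{\mathrm{RIP}}{\mathrm{DCP}}
```

Here RIP (raw impact per paper) is the journal's average number of citations per paper, and DCP (database citation potential) reflects how many database-indexed references papers in the journal's field typically contain; a long average reference list means each individual citation counts for less.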

Norwegian list

The Norwegian model is used for the evaluation and allocation of research funding in Norway, but modified versions are also used elsewhere in the Nordic countries, for example at Stockholm University. The model is based on a register, the list, in which scientific journals are grouped into two levels:

  • Level 1: Includes scientific journals.
  • Level 2: Consists of the scientific journals that are most prominent in each research area. This level represents about 20% of all scientific publications.

For more information, see NSD's website.

Questionable journal factors

In recent years, several non-transparent impact factors have been marketed by various companies. These factors often lack clear documentation of how they are actually calculated. The aim is to make journals that are new, anonymous or disreputable (so-called predatory journals) appear influential and of high quality. It is important to be critical and to examine such factors carefully to avoid misleading assessments of scientific impact.

Questionable impact factors:

  • Citefactor
  • Global Impact Factor
  • Scientific Journal Impact Factor
  • IndexCopernicus

Relevance of journal indicators

The use of journal indicators to assess the work of an individual researcher or research group is contentious. Even when the indicators themselves are calculated rigorously, this application of them is questionable. For example, a high Journal Impact Factor (JIF) may stem from a few highly cited articles and does not guarantee that every article published in the journal is of high quality. On the other hand, a high JIF or SJR indicates that the journal is well used and has a good spread within the research community, which makes it an interesting option for researchers considering where to publish their findings.

Advanced bibliometric indicators

These indicators are always normalized, i.e. compared with similar publications under the same conditions, such as subject area and year of publication. Indicators that are adapted to the research field can be used to make comparisons between different subject areas.

One example is the field-normalized citation rate, which is often used in medicine and allows different fields to be compared fairly. If one only counts the raw number of citations, fields in which many articles are traditionally written automatically risk having the highest citation counts as well. The field-normalized citation rate takes this into account, so that all fields are compared on equal terms. The field-normalized citation rate is also known as the crown factor (CF).
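One common way to compute such a rate, in the style of the mean normalized citation score, is to average each publication's citations divided by the world average for the same field and year. The sketch below assumes these field baselines are supplied as input; in practice they come from a citation database:

```python
def field_normalized_citation_rate(pubs):
    """pubs: list of dicts {"citations": ..., "field_baseline": ...},
    where field_baseline is the world-average citation count for
    publications of the same field, year and document type.
    A result of 1.0 means impact on par with the world average.
    """
    ratios = [p["citations"] / p["field_baseline"] for p in pubs]
    return sum(ratios) / len(ratios)

pubs = [{"citations": 8, "field_baseline": 4.0},   # twice the field average
        {"citations": 3, "field_baseline": 6.0}]   # half the field average
print(field_normalized_citation_rate(pubs))        # 1.25
```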

Another indicator is field-normalized top publications, also known as the Top 5%, which refers to the proportion of a research group's publications that belong to the 5% most cited in the world.
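As a sketch, the share can be computed by counting the publications that reach the citation threshold for the world's top 5% in their field and year; the thresholds here are hypothetical inputs that a real analysis would take from the citation database:

```python
def top5_share(pubs, thresholds):
    """pubs: list of dicts {"field": ..., "year": ..., "citations": ...}
    thresholds: {(field, year): minimum citations for the world top 5%}
    """
    in_top = sum(1 for p in pubs
                 if p["citations"] >= thresholds[(p["field"], p["year"])])
    return in_top / len(pubs)

pubs = [{"field": "medicine", "year": 2020, "citations": 120},
        {"field": "medicine", "year": 2020, "citations": 7}]
print(top5_share(pubs, {("medicine", 2020): 95}))  # 0.5
```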

 
