Is Einstein worth half of Dr. Raoult? Putting an end to the “h-index”
Associate professor, UQAM
SCIENCE IN ITS OWN WORDS / The media controversy surrounding Professor Didier Raoult provides an opportunity to revisit the ubiquitous bibliometric indicator known as the h-index, invented in 2005 by the American physicist Jorge Hirsch (hence the choice of the letter “h” for the index).
The “h-index”, or “facteur h” in French, has in the space of a few years become an indispensable reference for many researchers and administrators of the academic world.
It is especially promoted and used in the biomedical sciences, a field in which the sheer mass of publications seems to have made any serious qualitative assessment of researchers' work impossible. This “indicator” has become the smoke and mirrors of evaluation, before which researchers either admire themselves or chuckle at the pitiful “h-index” of their “dear colleagues”, who are nonetheless rivals.
Although experts in bibliometrics quickly noted the dubious nature of this composite indicator, most researchers still do not seem to understand that its properties are far from making it a valid index for assessing, seriously and ethically, the “quality” or “impact” of their science.
Most often, its promoters commit an elementary logical error: they argue that Nobel Prize winners “in general” have a high h-index, and take this as proof that the index measures the quality of individual researchers. But even if a high h-index can indeed be associated with a Nobel laureate, this proves nothing about the converse: a low h-index is not necessarily associated with a “poor” researcher. Indeed, a seemingly low h-index can hide a higher scientific impact, at least if one accepts that the standard unit of scientific visibility is the number of citations received.
The limitations of the h-index
Defined as the largest number N of an author's articles that have each received at least N citations, the index is, by construction, bounded by the author's total number of articles. In other words, if a person has twenty papers cited a hundred times each, her h-index is 20, exactly like a person who also has twenty articles but whose papers are each cited only twenty times, five times less! Yet would any serious researcher say that the two are “equal” on the grounds that their h-index is the same? If an indicator is not proportional to the concept it is supposed to measure, then it is invalid.
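For concreteness, the definition above can be sketched in a few lines of Python; this is a hypothetical illustration of the standard h-index computation, not a reference to any particular bibliometric tool:

```python
def h_index(citations):
    """Return the largest h such that the author has h papers
    each cited at least h times."""
    h = 0
    # Rank papers from most to least cited, then walk down the list.
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# The two authors from the example: twenty papers cited 100 times each
# versus twenty papers cited 20 times each. Total citations differ by a
# factor of five, yet the h-index is identical.
print(h_index([100] * 20))  # 20
print(h_index([20] * 20))   # 20
```

As the example shows, once every paper clears the threshold, extra citations per paper no longer move the index, which is precisely why it is bounded by the number of publications.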
The most ironic thing in the history of the h-index is that its inventor originally designed it to counter the use of the raw number of papers, which in his view did not properly reflect a scientist's impact. He therefore thought to “correct” that count by combining it with the number of citations the articles receive. Worse, it turns out that the h-index is in fact very highly correlated (on the order of 0.9) with the number of publications! In other words, it is indeed the number of publications that drives the h-index, more than the number of citations, an indicator which, despite its limitations, remains the best measure of the impact of scientific publications.