FSP - Scientometrics
Permanent URI for this collection: http://localhost:4000/handle/123456789/60
Browsing FSP - Scientometrics by Author "Vîiu, Gabriel Alexandru"
Now showing 1 - 4 of 4
Item: Ranking Romanian academic departments in three fields of study using the g-index
(Routledge, part of the Taylor & Francis Group, 2015) Miroiu, Adrian; Păunescu, Mihai; Vîiu, Gabriel Alexandru
The scientific performance of 64 political science, sociology and marketing departments in Romania is investigated with the aid of the g-index. The assessment of departments based on the g-index shows, within each of the three types of departments that make up the population of the study, a strong polarisation between top performers (very few) and weak performers (far more numerous). This alternative assessment is also found to be largely consistent with an official ranking of departments carried out in 2011 by the Ministry of Education. To conduct the evaluation of departments, the individual scientific output of 1385 staff members working in the fields of political science, sociology and marketing is first determined with the aid of the ‘Publish or Perish’ software based on the Google Scholar database. Distinct department rankings are then created within each field using a successive (second-order) g-index.

Item: Research-driven classification and ranking in higher education: an empirical appraisal of a Romanian policy experience
(Springer Science and Business Media LLC, 2016) Vîiu, Gabriel Alexandru; Păunescu, Mihai; Miroiu, Adrian
In this paper we investigate the problem of university classification and its relation to ranking practices in the policy context of an official evaluation of Romanian higher education institutions and their study programs. We first discuss the importance of research in the government-endorsed assessment process and analyze the evaluation methodology and the results it produced. Based on official documents and data, we show that the Romanian classification of universities was implicitly hierarchical in its conception and therefore also produced hierarchical results, owing to its close association with the ranking of study programs and its heavy reliance on research outputs. Then, using a distinct dataset on the research performance of 1385 faculty members working in the fields of political science, sociology and marketing, we further explore the differences between university categories. We find that our alternative assessment of research productivity, measured with the aid of Hirsch's (Proc Natl Acad Sci 102(46): 16569-16572, 2005) h-index and Egghe's (Scientometrics 69(1): 131-152, 2006) g-index, only provides empirical support for a dichotomous classification of Romanian institutions.
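Both items above assess research performance with the h-index and the g-index. The minimal sketch below only illustrates the standard definitions of these two indicators on a hypothetical list of citation counts; the function names and the data are illustrative assumptions, and this is not the authors' actual evaluation pipeline, which relied on the ‘Publish or Perish’ software and Google Scholar data.

def h_index(citations):
    """Largest h such that h papers have at least h citations each (Hirsch 2005)."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """Largest g such that the top g papers together have at least g^2 citations (Egghe 2006)."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

# Hypothetical citation counts for one researcher's papers.
papers = [24, 18, 10, 6, 5, 3, 1, 0]
print(h_index(papers))  # 5: five papers have at least 5 citations each
print(g_index(papers))  # 8: the top 8 papers together have 67 >= 64 citations

On these definitions, the successive (second-order) g-index used for departments in the first item can be read as applying g_index a second time, to the list of individual g-index values of a department's members.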
Item: The citation impact of articles from which authors gained monetary rewards based on journal metrics
(Springer, 2021) Vîiu, Gabriel Alexandru; Păunescu, Mihai
Monetary rewards granted on a per-publication basis to individual authors are an important policy instrument used to stimulate scientific research. An inconsistent feature of many article reward schemes is that they rely on journal-level citation metrics. In this paper we assess the actual article-level citation impact of about 10,000 articles whose authors received financial rewards within the Romanian Program for Rewarding Research Results (PR3), an exemplary money-per-publication program that uses journal metrics to allocate rewards. We present PR3, offer a comprehensive empirical analysis of its results and provide a scientometric critique of its methodology. We first use a reference dataset of 1.9 million articles to compare the impact of each rewarded article from five consecutive PR3 editions to the impact of all the other articles published in the same journal and year. To determine the wider global impact of PR3 papers, we then benchmark their citation performance against the worldwide field baselines and percentile rank classes from the Clarivate Analytics Essential Science Indicators. We find that, within their journals, PR3 articles span the full range of citation impact almost uniformly. In the larger context of global broad fields of science, almost two thirds of the rewarded papers are below the world average in their field and more than a third lie below the world median. Although intended by policymakers to exemplify excellence, many PR3 articles show a rather commonplace individual citation performance and have not achieved the impact presumed, and rewarded, at publication on the basis of journal metrics. Furthermore, identical rewards have been offered to articles with markedly different impact. Direct monetary incentives for articles may support productivity, but they cannot guarantee impact.

Item: The lack of meaningful boundary differences between journal impact factor quartiles undermines their independent use in research evaluation
(2021) Vîiu, Gabriel Alexandru; Păunescu, Mihai
Journal impact factor (JIF) quartiles are often used as a convenient means of conducting research evaluation, abstracting away from the underlying JIF values. We highlight and investigate an intrinsic problem with this approach: the differences between quartile boundary JIF values are usually very small, and often so small that journals in different quartiles cannot be considered meaningfully different with respect to impact. By systematically investigating JIF values in recent editions of the Journal Citation Reports (JCR), we find that it is typical for between 10% and 30% of the journals in a JCR category to be poorly differentiated, with social science categories more affected than science categories. However, this global result conceals important variation, and we therefore also provide a detailed account of poor quartile boundary differentiation by constructing in-depth local quartile similarity profiles for each JCR category. Further systematic analyses show that poor quartile boundary differentiation tends to follow poor overall differentiation, which naturally varies by field. In addition, in most categories the journals that experience a quartile shift are the same journals that are poorly differentiated. Our work provides sui generis documentation of the continuing phenomenon of impact factor inflation and also explains and reinforces recent findings on the ranking stability of journals and on the JIF-based comparison of papers. Conceptually, there is a fundamental problem in the fact that JIF quartile classes artificially magnify underlying differences that can be insignificant; indeed, we argue that the singular use of JIF quartiles is a second-order ecological fallacy. We therefore recommend abandoning the reification of JIF quartiles as an independent method for the research assessment of individual scholars.
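To make the boundary issue in the last item concrete, the sketch below estimates quartile boundary JIF values for a hypothetical JCR-style category and the relative gap between adjacent boundaries. The percentile-based boundary calculation, the function names and the data are assumptions for illustration only; the paper's own criterion for flagging poorly differentiated journals, and the JCR's exact rank-based quartile assignment, are not reproduced here.

import statistics

def quartile_boundaries(jifs):
    """Approximate Q1/Q2, Q2/Q3 and Q3/Q4 boundary JIF values for one category.
    Uses simple percentile cut points; the JCR's own rank-based assignment may differ."""
    q1, q2, q3 = statistics.quantiles(sorted(jifs), n=4)  # 25th, 50th, 75th percentiles
    return q3, q2, q1  # boundaries listed from the top quartile downwards

def relative_gaps(boundaries):
    """Relative difference between adjacent quartile boundary values."""
    return [(hi - lo) / hi for hi, lo in zip(boundaries, boundaries[1:])]

# Hypothetical JIFs for a small category.
category_jifs = [5.1, 3.2, 2.9, 2.8, 2.7, 2.6, 2.5, 2.4, 1.1, 0.9, 0.8, 0.4]
bounds = quartile_boundaries(category_jifs)
print(bounds)                # approximate boundary JIFs separating Q1/Q2, Q2/Q3, Q3/Q4
print(relative_gaps(bounds)) # small relative gaps suggest weakly differentiated quartiles

When such gaps amount to only a few percent, journals just above and just below a boundary land in different quartile classes despite practically indistinguishable JIFs, which is the core of the objection to the independent use of quartiles in the abstract above.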