FSP - Scientometrics
Permanent URI for this collection: http://localhost:4000/handle/123456789/60
Browsing FSP - Scientometrics by Title
Now showing 1 - 7 of 7
Item: Minimum representative size in comparing research performance of universities: the case of medicine faculties in Romania (Chinese Academy of Sciences, National Science Library, 2018)
Authors: Liu, Xiaoling; Păunescu, Mihai; Preoteasa, Viorel; Wu, Jinsan
Purpose: The main goal of this study is to provide a reliable comparison of performance in higher education. To this end, we use scientometric measures associated with the faculties of medicine in the six health studies universities in Romania.
Design/methodology/approach: We apply the method for estimating the minimum necessary size proposed in Shen et al. (2017). We collected data from the Scopus database for the academics of the departments of medicine within the six health studies universities in Romania, covering the period 2009 to 2014. Two kinds of statistical treatment based on that method are implemented: pairwise comparison and one-to-the-rest comparison. All the results of these comparisons are reported.
Findings: Cluj shows the strongest performance and Tg. Mureş the weakest, since their minimum representative sizes are reasonably small in either kind of comparison, whichever of citation counts, the h-index, or the g-index is used. We cannot reliably distinguish the remaining faculties from one another, since their minimum representative sizes are quite large.
Research limitations: Only the six faculties of medicine within Romania's health studies universities are analyzed.
Practical implications: Our comparison methods are relevant for ranking data sets associated with different collective units, such as faculties, universities and institutions, based on aggregate scores such as means and totals.
Originality/value: We applied the minimum representative size to a new empirical context: the departments of medicine in the health studies universities in Romania.

Item: Ranking Romanian academic departments in three fields of study using the g-index (Routledge, part of the Taylor & Francis Group, 2015)
Authors: Miroiu, Adrian; Păunescu, Mihai; Vîiu, Gabriel Alexandru
The scientific performance of 64 political science, sociology and marketing departments in Romania is investigated with the aid of the g-index. The assessment of departments based on the g-index shows, within each of the three types of departments that make up the population of the study, a strong polarisation between top performers (very few) and weak performers (far more numerous). This alternative assessment is also found to be largely consistent with an official ranking of departments carried out in 2011 by the Ministry of Education. To conduct the evaluation, the individual scientific output of 1385 staff members working in the fields of political science, sociology and marketing is first determined with the aid of the 'Publish or Perish' software based on the Google Scholar database. Distinct department rankings are then created within each field using a successive (second-order) g-index.
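The first abstract does not reproduce Shen et al.'s estimator, but the subsampling logic behind a minimum representative size can be sketched: grow the subsample size until one unit outperforms the other in nearly all random draws. The following is only a minimal illustrative sketch of that idea, not the paper's method; the function name, the mean-based aggregation, the 95% reliability threshold and the trial count are all our assumptions.

```python
import random

def min_representative_size(scores_a, scores_b, confidence=0.95,
                            trials=2000, seed=0):
    """Smallest subsample size k at which unit A beats unit B on mean
    score in at least `confidence` of random subsamples (an illustrative
    reading of the minimum-representative-size idea, not Shen et al.'s
    exact estimator)."""
    rng = random.Random(seed)
    max_k = min(len(scores_a), len(scores_b))
    for k in range(1, max_k + 1):
        wins = sum(
            sum(rng.sample(scores_a, k)) / k > sum(rng.sample(scores_b, k)) / k
            for _ in range(trials)
        )
        if wins / trials >= confidence:
            return k
    return None  # the two units cannot be reliably separated at any size

# Example with made-up h-index values for two hypothetical faculties:
faculty_a = [12, 9, 8, 7, 7, 6, 5, 4, 3, 2, 2, 1]
faculty_b = [10, 8, 6, 5, 4, 4, 3, 2, 2, 1, 1, 0]
print(min_representative_size(faculty_a, faculty_b))
```

A small returned k suggests the two units can be told apart even from small samples; a large k, or None, mirrors the paper's finding that the middle-ranked faculties cannot be reliably distinguished.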
Item: Ranking the Romanian departments of sociology: comparative results of different evaluation methodologies (Consiliul Agenţiei Române de Asigurare a Calităţii în Învăţământul Superior - ARACIS, 2013)
Authors: Păunescu, Mihai; Hâncean, Gabriel
In this article we discuss the ranking of the sociology higher education study programmes in Romania on the basis of the departmental g-successive index. The need for and consequences of rankings in higher education are a much debated topic. We therefore examine the assumptions and the logic that underpin any evaluation and ranking exercise. Having done so, we present a specific ranking methodology that is largely based on the g-index. We nonetheless show that the alternative official methodology, based on a considerably higher number of indicators, though measuring the concept of quality more comprehensively, largely produces the same results. We finally discuss the advantages and disadvantages of using synthetic indexes (such as the g-index) compared with evaluation exercises that incorporate more numerous indicators and dimensions.

Item: Research-driven classification and ranking in higher education: an empirical appraisal of a Romanian policy experience (Springer Science and Business Media LLC, 2016)
Authors: Vîiu, Gabriel Alexandru; Păunescu, Mihai; Miroiu, Adrian
In this paper we investigate the problem of university classification and its relation to ranking practices in the policy context of an official evaluation of Romanian higher education institutions and their study programs. We first discuss the importance of research in the government-endorsed assessment process and analyze the evaluation methodology and the results it produced. Based on official documents and data we show that the Romanian classification of universities was implicitly hierarchical in its conception and therefore also produced hierarchical results, due to its close association with the ranking of study programs and its heavy reliance on research outputs. Then, using a distinct dataset on the research performance of 1385 faculty members working in the fields of political science, sociology and marketing, we further explore the differences between university categories. We find that our alternative assessment of research productivity, measured with the aid of Hirsch's h-index (Proc Natl Acad Sci 102(46): 16569-16572, 2005) and Egghe's g-index (Scientometrics 69(1): 131-152, 2006), only provides empirical support for a dichotomous classification of Romanian institutions.
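The items above rest on the h-index, the g-index and the successive (second-order) g-index. The following sketch shows the textbook definitions, under one common reading of the successive construction; the code and names are ours, not the authors'.

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations
    each (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

def g_index(citations):
    """Largest g such that the g most-cited papers together have at
    least g**2 citations (Egghe, 2006); this variant caps g at the
    number of papers."""
    ranked = sorted(citations, reverse=True)
    cumulative, g = 0, 0
    for rank, c in enumerate(ranked, start=1):
        cumulative += c
        if cumulative >= rank * rank:
            g = rank
    return g

def successive_g_index(member_citation_lists):
    """Department-level (second-order) index: the g-index applied to the
    individual g-indices of the department's members, one common reading
    of the successive-index construction."""
    return g_index([g_index(papers) for papers in member_citation_lists])

# Example with made-up citation counts for a three-member department:
dept = [[25, 14, 7, 3, 1], [9, 4, 2], [40, 22, 10, 5, 5, 2, 1]]
print([g_index(p) for p in dept], successive_g_index(dept))
```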
Item: The "Black-Box" of institutional scores: analyzing the distribution of the values of the H and G Indexes in medicine schools in Romania (University of Oradea Publishing House (Editura Universitatii din Oradea), 2015)
Authors: Proteasa, Viorel; Păunescu, Mihai; Miroiu, Adrian
Measuring university research performance has been an important focus of higher education policies in Romania over the past decade. In the present study we considered alternative methodologies for evaluating the quality of research in the faculties of medicine. We set out to compare the perspectives of past official evaluations with alternatives based on the h- and g-indexes of the academics within these faculties and on the derived successive indexes and averages. We analyzed the distribution of the values of the individual h- and g-indexes and rejected the universality claim hypothesis, according to which all university h- and g-index distributions follow a single functional form, proportional to the size of the universities. However, using the Characteristic Scores and Scales approach, we show that the shape of the distributions is quite similar across universities, revealing the skewness of scientific productivity. Given the high skewness of all distributions, we conclude that all three collective aggregation rules considered (averages, h-successive and g-successive indexes) fail to provide an accurate measure of the differences between the individual academics within the six medical schools, and fail to provide scientific achievement incentives for the wide majority of the academic staff within the analysed faculties.
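The Characteristic Scores and Scales approach used above partitions a skewed distribution with successive conditional means. Below is a minimal sketch of one standard formulation, assuming three boundaries (four classes); using a strict 'greater than' at each step is our choice, as implementations vary.

```python
def css_boundaries(scores, levels=3):
    """Characteristic Scores and Scales: b1 is the mean of all scores,
    b2 the mean of the scores above b1, b3 the mean of the scores above
    b2, giving `levels` boundaries and levels + 1 classes."""
    boundaries, pool = [], [float(s) for s in scores]
    for _ in range(levels):
        if not pool:
            break
        b = sum(pool) / len(pool)
        boundaries.append(b)
        pool = [s for s in pool if s > b]
    return boundaries

def css_class(score, boundaries):
    """Class index of a score: 0 falls below b1, 1 between b1 and b2, ..."""
    return sum(score >= b for b in boundaries)

# Example with a skewed, made-up distribution of individual g-indices:
g_values = [0, 0, 1, 1, 1, 2, 2, 3, 4, 6, 9, 15]
bounds = css_boundaries(g_values)
print(bounds, [css_class(g, bounds) for g in g_values])
```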
Item: The citation impact of articles from which authors gained monetary rewards based on journal metrics (Springer, 2021)
Authors: Vîiu, Gabriel Alexandru; Păunescu, Mihai
Monetary rewards granted on a per-publication basis to individual authors are an important policy instrument to stimulate scientific research. An inconsistent feature of many article reward schemes is that they use journal-level citation metrics. In this paper we assess the actual article-level citation impact of about 10,000 articles whose authors received financial rewards within the Romanian Program for Rewarding Research Results (PR3), an exemplary money-per-publication program that uses journal metrics to allocate rewards. We present PR3 and offer a comprehensive empirical analysis of its results and a scientometric critique of its methodology. We first use a reference dataset of 1.9 million articles to compare the impact of each rewarded article from five consecutive PR3 editions to the impact of all the other articles published in the same journal and year. To determine the wider global impact of PR3 papers we then benchmark their citation performance against the worldwide field baselines and percentile rank classes from the Clarivate Analytics Essential Science Indicators. We find that within their journals PR3 articles span the full range of citation impact almost uniformly. In the larger context of global broad fields of science, almost two thirds of the rewarded papers are below the world average in their field and more than a third lie below the world median. Although desired by policymakers to exemplify excellence, many PR3 articles are characterized by a rather commonplace individual citation performance and have not achieved the impact presumed and rewarded at publication on the basis of journal metrics. Furthermore, identical rewards have been offered for articles with markedly different impact. Direct monetary incentives for articles may support productivity, but they cannot guarantee impact.

Item: The lack of meaningful boundary differences between journal impact factor quartiles undermines their independent use in research evaluation (2021)
Authors: Vîiu, Gabriel Alexandru; Păunescu, Mihai
Journal impact factor (JIF) quartiles are often used as a convenient means of conducting research evaluation, abstracting away from the underlying JIF values. We highlight and investigate an intrinsic problem associated with this approach: the differences between JIF values at quartile boundaries are usually very small, often so small that journals in different quartiles cannot be considered meaningfully different with respect to impact. By systematically investigating JIF values in recent editions of the Journal Citation Reports (JCR) we determine that it is typical to see between 10 and 30% poorly differentiated journals in the JCR categories. Social science categories are more affected than science categories. However, this global result conceals important variation, and we also provide a detailed account of poor quartile boundary differentiation by constructing in-depth local quartile similarity profiles for each JCR category. Further systematic analyses show that poor quartile boundary differentiation tends to follow poor overall differentiation, which naturally varies by field. In addition, in most categories the journals that experience a quartile shift are the same journals that are poorly differentiated. Our work provides sui generis documentation of the continuing phenomenon of impact factor inflation and also explains and reinforces some recent findings on the ranking stability of journals and on the JIF-based comparison of papers. Conceptually, there is a fundamental problem in the fact that JIF quartile classes artificially magnify underlying differences that can be insignificant. We argue that the singular use of JIF quartiles is in fact a second-order ecological fallacy. We recommend abandoning the reification of quartiles as an independent method for the research assessment of individual scholars.
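The boundary-differentiation problem can be made concrete by checking how close the JIF values straddling each quartile cut are. This is an illustrative sketch only, not the paper's method: the rank-based cuts at n/4, n/2 and 3n/4 and the 10% relative-gap threshold are assumptions for demonstration.

```python
def quartile_boundary_gaps(jifs, rel_threshold=0.10):
    """For the JIF values of one JCR-style category, compute the relative
    gap between the two journals straddling each quartile cut and flag
    boundaries narrower than `rel_threshold`."""
    ranked = sorted(jifs, reverse=True)
    n = len(ranked)
    if n < 8:
        raise ValueError("too few journals for a meaningful quartile split")
    gaps = {}
    for q, cut in enumerate((n // 4, n // 2, 3 * n // 4), start=1):
        hi, lo = ranked[cut - 1], ranked[cut]  # last of Qq, first of Qq+1
        rel_gap = (hi - lo) / hi if hi > 0 else 0.0
        gaps["Q%d/Q%d" % (q, q + 1)] = (rel_gap, rel_gap < rel_threshold)
    return gaps

# Example: a made-up category where the Q1/Q2 boundary is poorly differentiated.
print(quartile_boundary_gaps([5.1, 3.2, 3.1, 3.0, 2.9, 1.8, 1.2, 0.9]))
```

Boundaries flagged True are those where the straddling journals differ by less than the chosen threshold, i.e. journals that land in different quartiles despite nearly identical impact factors.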