FSP - Scientometrics
Browsing FSP - Scientometrics by Subject "H-index"
Item
Research-driven classification and ranking in higher education: an empirical appraisal of a Romanian policy experience (Springer Science and Business Media LLC, 2016)
Vîiu, Gabriel Alexandru; Păunescu, Mihai; Miroiu, Adrian
In this paper we investigate the problem of university classification and its relation to ranking practices in the policy context of an official evaluation of Romanian higher education institutions and their study programs. We first discuss the importance of research in the government-endorsed assessment process and analyze the evaluation methodology and the results it produced. Based on official documents and data, we show that the Romanian classification of universities was implicitly hierarchical in its conception and therefore also produced hierarchical results, due to its close association with the ranking of study programs and its heavy reliance on research outputs. Then, using a distinct dataset on the research performance of 1385 faculty members working in the fields of political science, sociology and marketing, we further explore the differences between university categories. We find that our alternative assessment of research productivity (measured with the aid of Hirsch's (Proc Natl Acad Sci 102(46): 16569-16572, 2005) h-index and Egghe's (Scientometrics 69(1): 131-152, 2006) g-index) only provides empirical support for a dichotomous classification of Romanian institutions.

Item
The "Black-Box" of institutional scores: analyzing the distribution of the values of the H and G indexes in medicine schools in Romania (University of Oradea Publishing House (Editura Universitatii din Oradea), 2015)
Proteasa, Viorel; Păunescu, Mihai; Miroiu, Adrian
Measuring university research performance has been an important focus of higher education policies in Romania over the past decade. In the present study we considered alternative methodologies for evaluating the quality of research in the faculties of medicine.
We set out to compare the perspectives of past official evaluations with alternatives based on the h- and g-indexes of the academics within these faculties, and on the corresponding successive indexes and averages. We analyzed the distribution of the values of the individual h- and g-indexes and rejected the universality claim hypothesis, according to which all university h- and g-index distributions follow a single functional form, proportional to the size of the universities. However, using the Characteristic Scores and Scales approach, we show that the shape of the distributions is quite similar across universities, revealing the skewness of scientific productivity. Given the high skewness of all distributions, we conclude that all three collective aggregation rules considered (averages, successive h-indexes, and successive g-indexes) fail to provide an accurate measure of the differences between individual academics within the six medical schools, and fail to provide scientific achievement incentives for the wide majority of the academic staff within the analysed faculties.
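Both abstracts rest on the same bibliometric definitions: Hirsch's h-index (the largest h such that at least h of an author's papers have at least h citations each), Egghe's g-index (the largest g such that the g most-cited papers together have at least g² citations), and the institution-level "successive" index (the h-index computed over the members' individual h-indexes). As a minimal illustrative sketch (not the authors' code; the citation counts below are made up for demonstration):

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def g_index(citations):
    """Largest g such that the top g papers jointly have >= g^2 citations."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c
        if total >= rank * rank:
            g = rank
    return g

def successive_h(individual_h_values):
    """Institution-level successive h-index: the h-index of its members'
    individual h-indexes (the collective aggregation rule discussed above)."""
    return h_index(individual_h_values)

# Hypothetical citation record of one academic:
papers = [10, 8, 5, 4, 3]
print(h_index(papers))  # 4: the top 4 papers each have >= 4 citations
print(g_index(papers))  # 5: the top 5 papers total 30 >= 25 citations
```

Because a few highly cited individuals dominate such skewed distributions, averages and successive indexes computed this way compress most of the staff into the same low score, which is the measurement failure the second item's abstract describes.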