The ranking of multi-metric scientific achievements is a challenging task. For example, the scientific ranking of researchers utilizes two major types of indicators, namely the numbers of publications and citations. Existing approaches focus on how to select proper indicators, considering either a single indicator or a combination of them. The majority of ranking methods combine several indicators, but these methods face a challenging concern: the assignment of suitable/optimal weights to the targeted indicators. Pareto optimality is defined as a measure of efficiency in multi-objective optimization, which seeks optimal solutions by considering multiple criteria/objectives simultaneously. The performance of the basic Pareto dominance depth ranking strategy degrades as the number of criteria increases (generally speaking, beyond three criteria). In this paper, a new, modified Pareto dominance depth ranking strategy is proposed which uses dominance metrics obtained from the basic Pareto dominance depth ranking, together with sorted statistical metrics, to rank scientific achievements. It attempts to find clusters of compared data by using all indicators simultaneously. Furthermore, we apply the proposed method to address the multi-source ranking resolution problem, which is very common these days; for example, several worldwide institutions rank the world’s universities every year, but their rankings are not consistent. As our case studies, the proposed method was used to rank several scientific datasets (i.e., researchers, universities, and countries) as a proof of concept.
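The basic Pareto dominance depth ranking that the abstract builds on can be sketched as follows: items are repeatedly partitioned into "fronts", where front 0 contains items dominated by no other item, front 1 contains items dominated only by front 0, and so on. This is a minimal illustrative sketch, not the paper's modified strategy (which additionally uses dominance metrics and sorted statistical metrics); the function names and the sample data are assumptions for illustration only.

```python
def dominates(a, b):
    """True if metric vector a is at least as good as b on every
    metric and strictly better on at least one (maximization)."""
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

def dominance_depth_ranking(items):
    """Partition metric vectors into Pareto fronts.

    Returns a list of fronts, each a list of indices into `items`;
    depth 0 is the best (non-dominated) front.
    """
    remaining = list(range(len(items)))
    fronts = []
    while remaining:
        # Items in `remaining` that no other remaining item dominates.
        front = [i for i in remaining
                 if not any(dominates(items[j], items[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Hypothetical example: (publications, citations) per researcher.
researchers = [(50, 1200), (30, 1500), (20, 300), (45, 1100)]
print(dominance_depth_ranking(researchers))  # → [[0, 1], [3], [2]]
```

Note that with only two criteria the fronts are well separated; as the number of criteria grows, more and more items become mutually non-dominated and collapse into the first front, which is precisely the degradation the proposed modified strategy aims to address.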