Emmanuelle Tognoli, in her kind comment, suggested: As we develop computational literacy in the decades to come, perhaps we will adopt “personalized rankings” just as we now have “personalized medicine”: each and every one of us will be able to weight the factors (rank = 30% teaching + …) and write our own equations (or have a website write them for us with sliders) to see our unique, customized rankings depending on our own priorities.
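To make the slider idea concrete, here is a minimal sketch of such a user-weighted ranking. The factor names, weights, and scores are purely hypothetical placeholders, not anything proposed in the comment itself:

```python
# Toy sketch of a "personalized ranking" where each user sets their own weights
# (e.g. via sliders). All factor names and numbers below are hypothetical.

def personalized_rank(scores, weights):
    """Combine per-factor scores into one score using user-chosen weights."""
    total = sum(weights.values())
    return sum(scores[factor] * w / total for factor, w in weights.items())

# One user's priorities (hypothetical slider settings).
my_weights = {"teaching": 0.30, "publications": 0.50, "outreach": 0.20}

# Hypothetical per-candidate scores on a 0-1 scale.
candidates = {
    "A": {"teaching": 0.9, "publications": 0.4, "outreach": 0.7},
    "B": {"teaching": 0.5, "publications": 0.8, "outreach": 0.6},
}

ranking = sorted(candidates,
                 key=lambda c: personalized_rank(candidates[c], my_weights),
                 reverse=True)
print(ranking)  # the ordering depends entirely on the chosen weights
```

The point of the sketch is simply that with different weights, different users get different orderings of the same candidates.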
I find that her suggestion might lead to a method for setting the parameters of an objective framework subjectively. As I see it, two Korean scientists, Gae-won You and Seung-won Hwang, published a paper on this already ten years ago, with the title Personalized ranking: a contextual ranking approach. I take the liberty of copying the abstract for further common thinking:
“As data of an unprecedented scale are becoming accessible on the Web, personalization, of narrowing down the retrieval to meet the user-specific information needs, is becoming more and more critical. For instance, in the context of text retrieval, in contrast to traditional web search engines retrieving the same results for all users, major commercial search engines are starting to support personalization, improving the search quality by adapting to the user-specific retrieval contexts, e.g., prior search history or other application contexts. This paper studies how to enable such personalization in the context of structured data retrieval. In particular, we adopt context-sensitive ranking model to formalize personalization as a cost-based optimization over context-sensitive rankings collected. With this formalism, personalization is essentially retrieving the context-sensitive ranking matching the specific user’s retrieval context and generating a personalized ranking accordingly. In particular, we adopt a machine learning approach, to effectively and efficiently identify the ideal personalized ranked results for this specific user. Our empirical evaluations over real-life data validate both the effectiveness and efficiency of our framework.”
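To give a rough feel for the idea in the abstract (and only as a toy illustration, not You and Hwang's actual cost-based, machine-learning model), one can imagine keeping rankings collected under different retrieval contexts and, for a new user, returning the ranking whose context is closest to theirs. Everything in the sketch below is hypothetical:

```python
# Hypothetical illustration of context-sensitive ranking retrieval: pick the
# stored ranking whose context vector is nearest to the user's context.
# This is NOT the authors' method, only a minimal sketch of the notion.

def closest_context(user_ctx, stored_contexts):
    """Return the stored context nearest to the user's context (squared L2 distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(stored_contexts, key=lambda ctx: dist(user_ctx, ctx))

# Hypothetical context-sensitive rankings: context vector -> ranked results.
stored_rankings = {
    (1.0, 0.0): ["paper_A", "paper_B", "paper_C"],  # e.g. a "theory" context
    (0.0, 1.0): ["paper_C", "paper_A", "paper_B"],  # e.g. an "applied" context
}

user_context = (0.8, 0.2)  # this user's (hypothetical) retrieval context
best = closest_context(user_context, stored_rankings.keys())
print(stored_rankings[best])  # the personalized ranking served to this user
```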
My 2 cents on (science, i.e. publication- and citation-based) ranking is that it MUST fulfill peer expectations. The way I like to put it is that there is nothing ABOVE (or beyond) the community's judgment. The community (of, say, ventriloquists – I love the word 🙂 ) knows exactly who is what. Any ranking must reflect that, or it is a bad ranking. I realized this when a colleague started to personalize (!) his ranking by adding edited volumes (despite the university policy of ignoring them). His aim was to IMPROVE his ranking, whereas all his peers knew exactly who he was….