Breitinger, Corinna
Publications (2 of 2)
Beel, J., Gipp, B., Langer, S. & Breitinger, C. (2016). Research-paper recommender systems: a literature survey. International Journal on Digital Libraries, 17(4), 305-338
Research-paper recommender systems: a literature survey
2016 (English). In: International Journal on Digital Libraries, ISSN 1432-5012, E-ISSN 1432-1300, Vol. 17, no 4, p. 305-338. Article in journal (Refereed). Published.
Abstract [en]

In the last 16 years, more than 200 research articles were published about research-paper recommender systems. We reviewed these articles and present some descriptive statistics in this paper, as well as a discussion about the major advancements and shortcomings and an overview of the most common recommendation concepts and approaches. We found that more than half of the recommendation approaches applied content-based filtering (55%). Collaborative filtering was applied by only 18% of the reviewed approaches, and graph-based recommendations by 16%. Other recommendation concepts included stereotyping, item-centric recommendations, and hybrid recommendations. The content-based filtering approaches mainly utilized papers that the users had authored, tagged, browsed, or downloaded. TF-IDF was the most frequently applied weighting scheme. In addition to simple terms, n-grams, topics, and citations were utilized to model users' information needs. Our review revealed some shortcomings of the current research. First, it remains unclear which recommendation concepts and approaches are the most promising. For instance, researchers reported different results on the performance of content-based and collaborative filtering: sometimes content-based filtering performed better than collaborative filtering, and sometimes it performed worse. We identified three potential reasons for the ambiguity of the results. (A) Several evaluations had limitations: they were based on strongly pruned datasets, used few participants in user studies, or lacked appropriate baselines. (B) Some authors provided little information about their algorithms, which makes it difficult to re-implement the approaches; consequently, researchers use different implementations of the same recommendation approaches, which might lead to variations in the results. (C) We speculated that minor variations in datasets, algorithms, or user populations inevitably lead to strong variations in the performance of the approaches. Hence, finding the most promising approaches is a challenge. As a second limitation, we noted that many authors neglected to take into account factors other than accuracy, for example overall user satisfaction. In addition, most approaches (81%) neglected the user-modeling process and did not infer information automatically, but let users provide keywords, text snippets, or a single paper as input. Information on runtime was provided for only 10% of the approaches. Finally, few research papers had an impact on research-paper recommender systems in practice. We also identified a lack of authority and long-term research interest in the field: 73% of the authors published no more than one paper on research-paper recommender systems, and there was little cooperation among different co-author groups. We concluded that several actions could improve the research landscape: developing a common evaluation framework, agreeing on the information to include in research papers, focusing more strongly on non-accuracy aspects and user modeling, providing a platform for researchers to exchange information, and building an open-source framework that bundles the available recommendation approaches.
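The abstract identifies content-based filtering with TF-IDF weighting and cosine similarity over papers the user authored or tagged as the dominant recommendation approach in the surveyed literature. A minimal, self-contained sketch of that idea (the toy corpus, tokenization, and user model below are illustrative assumptions, not data from the survey):

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Compute a sparse TF-IDF weight dict for each tokenized document."""
    n = len(docs)
    # Document frequency: number of documents containing each term.
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        # Length-normalized term frequency times inverse document frequency.
        vectors.append({term: (count / len(doc)) * math.log(n / df[term])
                        for term, count in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse term-weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy corpus of candidate papers, plus a "user model" built from text the
# user has authored (titles reduced to token lists for simplicity).
papers = [
    "citation graph analysis for paper ranking".split(),
    "content based filtering with tf idf term weights".split(),
    "neural collaborative filtering for movie ratings".split(),
]
user_model = "tf idf weighting for content based recommendation".split()

# Weight all documents together so IDF is computed over one shared corpus,
# then rank candidate papers by similarity to the user model.
vecs = tf_idf_vectors(papers + [user_model])
scores = [cosine(vecs[-1], v) for v in vecs[:-1]]
best = max(range(len(papers)), key=scores.__getitem__)
```

Sparse dicts keep the example dependency-free; a real system would typically use an inverted index or a vectorizer from an IR library, and would also incorporate the n-grams, topics, or citations the survey mentions as richer user-model features.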

Place, publisher, year, edition, pages
Springer, 2016
Keywords
Recommender system, User modeling, Research paper recommender systems, Content based filtering, Review, Survey
National Category
Media and Communication Technology
Research subject
Computer and Information Sciences Computer Science, Media Technology
urn:nbn:se:lnu:diva-72747 (URN)
10.1007/s00799-015-0156-0 (DOI)
000406745500003 (ISI)
2-s2.0-84937865069 (Scopus ID)
Available from: 2018-04-18. Created: 2018-04-18. Last updated: 2019-08-29. Bibliographically approved.
Beel, J., Breitinger, C., Langer, S., Lommatzsch, A. & Gipp, B. (2016). Towards reproducibility in recommender-systems research. User modeling and user-adapted interaction, 26(1), 69-101
Towards reproducibility in recommender-systems research
2016 (English). In: User modeling and user-adapted interaction, ISSN 0924-1868, E-ISSN 1573-1391, Vol. 26, no 1, p. 69-101. Article in journal (Refereed). Published.
Abstract [en]

Numerous recommendation approaches are in use today. However, comparing their effectiveness is a challenging task because evaluation results are rarely reproducible. In this article, we examine the challenge of reproducibility in recommender-system research. We conduct experiments using Plista's news recommender system and Docear's research-paper recommender system. The experiments show that there are large discrepancies in the effectiveness of identical recommendation approaches in only slightly different scenarios, as well as large discrepancies for slightly different approaches in identical scenarios. For example, in one news-recommendation scenario, the performance of a content-based filtering approach was twice as high as that of the second-best approach, while in another scenario the same content-based filtering approach was the worst-performing approach. We found several determinants that may contribute to the large discrepancies observed in recommendation effectiveness. Determinants we examined include user characteristics (gender and age), datasets, weighting schemes, the time at which recommendations were shown, and user-model size. Some of the determinants have interdependencies. For instance, the optimal size of an algorithm's user model depended on users' age. Since minor variations in approaches and scenarios can lead to significant changes in a recommendation approach's performance, ensuring reproducibility of experimental results is difficult. We discuss these findings and conclude that to ensure reproducibility, the recommender-system community needs to (1) survey other research fields and learn from them, (2) find a common understanding of reproducibility, (3) identify and understand the determinants that affect reproducibility, (4) conduct more comprehensive experiments, (5) modernize publication practices, (6) foster the development and use of recommendation frameworks, and (7) establish best-practice guidelines for recommender-systems research.
© 2016, Springer Science+Business Media Dordrecht.

Keywords
Evaluation, Experimentation, Recommender systems, Reproducibility
National Category
Computer Systems
Research subject
Computer and Information Sciences Computer Science, Computer Science
urn:nbn:se:lnu:diva-56225 (URN)
10.1007/s11257-016-9174-x (DOI)
000373021900003 (ISI)
2-s2.0-84960395171 (Scopus ID)
Available from: 2016-09-01. Created: 2016-08-31. Last updated: 2017-11-28. Bibliographically approved.
