Research-paper recommender systems: a literature survey
Docear, Germany. ORCID iD: 0000-0002-4537-5573
Univ Konstanz, Germany.
Otto von Guericke Univ, Germany.
Linnaeus University, Faculty of Technology, Department of Computer Science and Media Technology (CM), Department of Media Technology. ORCID iD: 0000-0001-6586-0392
2016 (English). In: International Journal on Digital Libraries, ISSN 1432-5012, E-ISSN 1432-1300, Vol. 17, no 4, p. 305-338. Article in journal (Refereed). Published.
Abstract [en]

In the last 16 years, more than 200 research articles have been published about research-paper recommender systems. We reviewed these articles and present some descriptive statistics in this paper, as well as a discussion about the major advancements and shortcomings and an overview of the most common recommendation concepts and approaches. We found that more than half of the recommendation approaches applied content-based filtering (55%). Collaborative filtering was applied by only 18% of the reviewed approaches, and graph-based recommendations by 16%. Other recommendation concepts included stereotyping, item-centric recommendations, and hybrid recommendations. The content-based filtering approaches mainly utilized papers that the users had authored, tagged, browsed, or downloaded. TF-IDF was the most frequently applied weighting scheme. In addition to simple terms, n-grams, topics, and citations were utilized to model users' information needs. Our review revealed some shortcomings of the current research. First, it remains unclear which recommendation concepts and approaches are the most promising. For instance, researchers reported different results on the performance of content-based and collaborative filtering. Sometimes content-based filtering performed better than collaborative filtering and sometimes it performed worse. We identified three potential reasons for the ambiguity of the results. (A) Several evaluations had limitations. They were based on strongly pruned datasets, few participants in user studies, or did not use appropriate baselines. (B) Some authors provided little information about their algorithms, which makes it difficult to re-implement the approaches. Consequently, researchers use different implementations of the same recommendation approaches, which might lead to variations in the results. 
(C) We speculated that minor variations in datasets, algorithms, or user populations inevitably lead to strong variations in the performance of the approaches. Hence, finding the most promising approaches is a challenge. As a second limitation, we noted that many authors neglected to take into account factors other than accuracy, for example overall user satisfaction. In addition, most approaches (81%) neglected the user-modeling process and did not infer information automatically but let users provide keywords, text snippets, or a single paper as input. Information on runtime was provided for 10% of the approaches. Finally, few research papers had an impact on research-paper recommender systems in practice. We also identified a lack of authority and long-term research interest in the field: 73% of the authors published no more than one paper on research-paper recommender systems, and there was little cooperation among different co-author groups. We concluded that several actions could improve the research landscape: developing a common evaluation framework, agreement on the information to include in research papers, a stronger focus on non-accuracy aspects and user modeling, a platform for researchers to exchange information, and an open-source framework that bundles the available recommendation approaches.
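As the abstract notes, most of the surveyed content-based filtering approaches weight terms with TF-IDF and match a profile of the user's papers (authored, tagged, browsed, or downloaded) against candidate papers. A minimal sketch of that idea follows; the toy paper titles, term lists, and function names are invented for illustration and are not taken from the survey:

```python
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Compute sparse TF-IDF weight vectors for a list of tokenized documents."""
    n = len(docs)
    df = Counter()                      # document frequency of each term
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (tf[t] / len(doc)) * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Candidate papers, represented by toy term lists (hypothetical data).
papers = {
    "paper_a": "recommender systems collaborative filtering".split(),
    "paper_b": "digital libraries citation analysis".split(),
    "paper_c": "content based filtering recommender user".split(),
}
# User profile built from terms of papers the user authored or downloaded.
profile = "recommender filtering user modeling".split()

titles = list(papers)
# Design choice: the profile is included in the IDF corpus so shared terms
# get consistent weights; a real system would use a large background corpus.
vecs = tf_idf_vectors([papers[t] for t in titles] + [profile])
paper_vecs, profile_vec = vecs[:-1], vecs[-1]
ranked = sorted(titles,
                key=lambda t: -cosine(profile_vec, paper_vecs[titles.index(t)]))
print(ranked[0])  # paper_c shares the most informative terms with the profile
```

This captures only the term-matching core; the surveyed systems differ mainly in how the profile is inferred (the user-modeling step the review found most approaches neglected) and in which features beyond plain terms, such as n-grams, topics, or citations, feed the vectors.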

Place, publisher, year, edition, pages
Springer, 2016. Vol. 17, no 4, p. 305-338
Keywords [en]
Recommender system, User modeling, Research paper recommender systems, Content based filtering, Review, Survey
National Category
Media and Communication Technology
Research subject
Computer and Information Sciences Computer Science, Media Technology
Identifiers
URN: urn:nbn:se:lnu:diva-72747
DOI: 10.1007/s00799-015-0156-0
ISI: 000406745500003
OAI: oai:DiVA.org:lnu-72747
DiVA, id: diva2:1198600
Available from: 2018-04-18. Created: 2018-04-18. Last updated: 2018-04-18. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text (read only)

Authority records BETA

Breitinger, Corinna

Search in DiVA

By author/editor
Beel, Joeran; Breitinger, Corinna
By organisation
Department of Media Technology
In the same journal
International Journal on Digital Libraries
Media and Communication Technology
