Evaluation and analysis of term scoring methods for term extraction

 

  • Suzan Verberne
  • Maya Sappelli
  • Djoerd Hiemstra
  • Wessel Kraaij
  • University of Twente, Enschede, The Netherlands

Open Access Article

DOI: 10.1007/s10791-016-9286-2

Cite this article as:
Verberne, S., Sappelli, M., Hiemstra, D. et al. Inf Retrieval J (2016) 19: 510. doi:10.1007/s10791-016-9286-2

Abstract

We evaluate five term scoring methods for automatic term extraction on four different types of text collections: personal document collections, news articles, scientific articles, and medical discharge summaries. Each collection has its own use case: author profiling, Boolean query term suggestion, personalized query suggestion, and patient query expansion. The term scoring methods proposed in the literature were each designed with a specific goal in mind. However, it is as yet unclear how these methods perform on collections with characteristics different from those they were designed for, and which method is most suitable for a given (new) collection. In a series of experiments, we evaluate, compare and analyse the output of the five term scoring methods for the collections at hand. We found that the most important factors in the success of a term scoring method are the size of the collection and the importance of multi-word terms in the domain. Larger collections lead to better terms; all methods are hindered by small collection sizes (below 1000 words). The most flexible method for the extraction of single-word and multi-word terms is pointwise Kullback–Leibler divergence for informativeness and phraseness. Overall, we have shown that extracting relevant terms using unsupervised term scoring methods is possible in diverse use cases, and that the methods are applicable in more contexts than those they were originally designed for.
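To make the pointwise Kullback–Leibler approach mentioned above concrete, the sketch below scores candidate bigrams by combining an informativeness term (foreground vs. background language model) with a phraseness term (bigram vs. independent unigrams), in the spirit of the Tomokiyo and Hurst formulation. This is a minimal illustration, not the paper's exact implementation: the smoothing scheme, the restriction to bigrams, and the simple additive combination of the two scores are assumptions made here for brevity.

```python
import math
from collections import Counter


def pointwise_kl_scores(fg_tokens, bg_tokens):
    """Score candidate bigrams from a foreground collection against a
    background corpus using pointwise KL divergence for informativeness
    and phraseness (illustrative sketch, not the paper's exact setup)."""
    fg_uni = Counter(fg_tokens)                      # foreground unigram counts
    bg_uni = Counter(bg_tokens)                      # background unigram counts
    fg_bi = Counter(zip(fg_tokens, fg_tokens[1:]))   # foreground bigram counts

    n_fg_uni = sum(fg_uni.values())
    n_bg_uni = sum(bg_uni.values())
    n_fg_bi = sum(fg_bi.values())
    bg_vocab = len(bg_uni)

    scores = {}
    for (w1, w2), count in fg_bi.items():
        # Probability of the phrase in the foreground collection.
        p_fg = count / n_fg_bi
        # Background probability, approximated by the product of smoothed
        # unigram probabilities (add-one smoothing avoids zero counts).
        p_bg = ((bg_uni[w1] + 1) / (n_bg_uni + bg_vocab)) * \
               ((bg_uni[w2] + 1) / (n_bg_uni + bg_vocab))
        # Independence assumption within the foreground itself.
        p_ind = (fg_uni[w1] / n_fg_uni) * (fg_uni[w2] / n_fg_uni)

        informativeness = p_fg * math.log(p_fg / p_bg)
        phraseness = p_fg * math.log(p_fg / p_ind)
        scores[(w1, w2)] = informativeness + phraseness
    return scores


# Toy usage: rank bigrams from a small "foreground" text against a background.
fg = "term extraction scores candidate terms in a document collection".split()
bg = "a general background corpus mentions many common words in a document".split()
for bigram, score in sorted(pointwise_kl_scores(fg, bg).items(),
                            key=lambda kv: kv[1], reverse=True)[:3]:
    print(bigram, round(score, 4))
```

Because informativeness and phraseness are both pointwise KL contributions on the same probability scale, they can be combined or weighted per use case, which is one reason this family of methods adapts well to both single-word and multi-word term extraction.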