One of the challenges in Multimedia Event Retrieval is the integration of data from multiple modalities. A modality is defined as a single channel of sensory input, such as visual or audio; we also refer to this as a data source. Previous research has shown that integrating different data sources can improve performance compared to using only one source, but clear insight into the success factors of alternative fusion methods is still lacking. We introduce several new blind late fusion methods based on inversions and ratios of the state-of-the-art blind fusion methods, and compare their performance both in simulations and on an international benchmark data set for multimedia event retrieval, TRECVID MED. The results show that five of the proposed methods outperform the state-of-the-art methods in a case with sufficient training examples (100 examples). The novel fusion method named JRER is not only the best method with dependent data sources, but also a robust method in all simulations with sufficient training examples.
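The blind late fusion idea can be sketched as follows. The average and product rules are classic combination rules; the ratio-based variant is a hypothetical illustration in the spirit of the proposed inversion/ratio methods, not the exact JRER formula from the paper:

```python
import numpy as np

def late_fusion_average(scores):
    """Average rule: mean of per-modality posterior scores."""
    return np.mean(scores, axis=0)

def late_fusion_product(scores):
    """Product rule: assumes conditional independence of modalities."""
    return np.prod(scores, axis=0)

def late_fusion_joint_ratio(scores, eps=1e-9):
    """Hypothetical ratio-based fusion (illustration only): weigh the
    joint "event" evidence against the joint "non-event" evidence."""
    pos = np.prod(scores, axis=0)
    neg = np.prod(1.0 - np.asarray(scores), axis=0)
    return pos / (pos + neg + eps)

# Scores from two modalities (e.g. visual and audio) for three videos:
visual = np.array([0.9, 0.2, 0.6])
audio = np.array([0.8, 0.3, 0.4])
fused = late_fusion_joint_ratio([visual, audio])
```

Because it is blind, no fusion model is trained: the combination rule is applied directly to the per-source classifier outputs.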
In this article we evaluate context-aware recommendation systems for information re-finding by knowledge workers. We identify four criteria that are relevant for evaluating the quality of knowledge worker support: context relevance, document relevance, prediction of user action, and diversity of the suggestions. We compare three different context-aware recommendation methods for information re-finding in a writing support task. The first method uses contextual prefiltering and content-based recommendation (CBR), the second uses the just-in-time information retrieval paradigm (JITIR), and the third is a novel network-based recommendation system in which context is part of the recommendation model (CIA). We found that each method has its own strengths: CBR is strong at context relevance, JITIR captures document relevance well, and CIA achieves the best results at predicting user action. Weaknesses include that CBR depends on a manual source to determine the context, and that the JITIR context query can fail when the textual content is insufficient. We conclude that to truly support a knowledge worker, all four evaluation criteria are important. In light of that conclusion, we argue that the network-based approach offered by CIA has the highest robustness and flexibility for context-aware information recommendation.
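The first method (contextual prefiltering followed by content-based recommendation) can be sketched minimally as below; the documents, context labels, and term-frequency cosine ranking are illustrative assumptions, not the paper's actual implementation:

```python
import math
from collections import Counter

# Hypothetical document store: name -> (context label, text content).
DOCUMENTS = {
    "report_q3.txt": ("project-alpha", "quarterly revenue forecast and figures"),
    "notes_meet.txt": ("project-alpha", "meeting notes revenue discussion"),
    "recipe.txt": ("personal", "pancake recipe flour eggs"),
}

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(active_text, context, k=1):
    """Prefilter documents by context label, then rank the remainder by
    similarity to the text the knowledge worker is currently writing."""
    query = Counter(active_text.lower().split())
    candidates = [(name, Counter(text.lower().split()))
                  for name, (ctx, text) in DOCUMENTS.items() if ctx == context]
    ranked = sorted(candidates, key=lambda nc: cosine(query, nc[1]), reverse=True)
    return [name for name, _ in ranked[:k]]

recs = recommend("drafting the revenue forecast", "project-alpha")
```

The prefiltering step is what makes the recommendation context-aware: documents outside the active context are never considered, regardless of textual similarity.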
We evaluate five term scoring methods for automatic term extraction on four different types of text collections: personal document collections, news articles, scientific articles, and medical discharge summaries. Each collection has its own use case: author profiling, boolean query term suggestion, personalized query suggestion, and patient query expansion. The term scoring methods that have been proposed in the literature were designed with a specific goal in mind. However, it is as yet unclear how these methods perform on collections with characteristics different from what they were designed for, and which method is most suitable for a given (new) collection. In a series of experiments, we evaluate, compare and analyse the output of the five term scoring methods for the collections at hand. We found that the most important factors in the success of a term scoring method are the size of the collection and the importance of multi-word terms in the domain. Larger collections lead to better terms; all methods are hindered by small collection sizes (below 1000 words). The most flexible method for the extraction of single-word and multi-word terms is pointwise Kullback–Leibler divergence for informativeness and phraseness. Overall, we have shown that extracting relevant terms using unsupervised term scoring methods is possible in diverse use cases, and that the methods are applicable in more contexts than their original design purpose.
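The pointwise Kullback–Leibler method mentioned above can be sketched as follows. Informativeness compares a term's probability in the foreground collection against a background corpus; phraseness compares a multi-word term against the product of its unigram probabilities. The smoothing and the gamma mix of the two scores are illustrative assumptions; the paper's exact variant may differ:

```python
import math
from collections import Counter

def kl_term_score(term, fg_counts, bg_counts, gamma=0.5):
    """Pointwise KL divergence score combining informativeness
    (foreground vs. background) and phraseness (term vs. unigrams)."""
    fg_total = sum(fg_counts.values())
    bg_total = sum(bg_counts.values())
    words = term.split()

    p_fg = fg_counts[term] / fg_total            # term prob. in the collection
    p_bg = max(bg_counts[term], 1) / bg_total    # add-one floor for unseen terms
    informativeness = p_fg * math.log2(p_fg / p_bg)

    # Phraseness: how far the phrase deviates from independent unigrams.
    p_unigrams = 1.0
    for w in words:
        p_unigrams *= fg_counts[w] / fg_total
    phraseness = p_fg * math.log2(p_fg / p_unigrams) if len(words) > 1 else 0.0

    return gamma * phraseness + (1 - gamma) * informativeness

# Toy counts: the domain term is frequent in the foreground, rare in the
# background; the stopword shows the opposite pattern.
fg = Counter({"query expansion": 5, "query": 8, "expansion": 6, "the": 50})
bg = Counter({"query expansion": 1, "query": 20, "expansion": 15, "the": 5000})

score_term = kl_term_score("query expansion", fg, bg)
score_stop = kl_term_score("the", fg, bg)
```

A domain-specific multi-word term receives a high score through both components, while a stopword is penalized by its high background probability.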
Saskia Koldijk, Mark A. Neerincx, and Wessel Kraaij. Detecting work stress in offices by combining unobtrusive sensors. IEEE Transactions on Affective Computing.
Employees often report the experience of stress at work. In the SWELL project we investigate how new context-aware pervasive systems can support knowledge workers to diminish stress. The focus of this paper is on developing automatic classifiers to infer working conditions and stress-related mental states from a multimodal set of sensor data (computer logging, facial expressions, posture and physiology). We address two methodological and applied machine learning challenges: 1) detecting work stress using several (physically) unobtrusive sensors, and 2) taking into account individual differences. A comparison of several classification approaches showed that, for our SWELL-KW dataset, neutral and stressful working conditions can be distinguished with 90% accuracy by means of an SVM. Posture yields the most valuable information, followed by facial expressions. Furthermore, we found that the subjective variable 'mental effort' can be better predicted from sensor data than e.g. 'perceived stress'. A comparison of several regression approaches showed that mental effort can be predicted best by a decision tree (correlation of 0.82). Facial expressions yield the most valuable information, followed by posture. We find that especially for estimating mental states it makes sense to address individual differences. When we train models on particular subgroups of similar users, (in almost all cases) a specialized model performs equally well or better than a generic model.
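The classification setup can be sketched with scikit-learn on synthetic data. The feature names and values below are invented for illustration; the real features come from the SWELL-KW sensor streams (computer logging, facial expressions, posture, physiology):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic multimodal features: [facial_valence, posture_lean, heart_rate].
# Stressed samples show lower valence, more forward lean, higher heart rate.
neutral = rng.normal([0.6, 0.1, 70.0], [0.1, 0.05, 5.0], size=(100, 3))
stressed = rng.normal([0.3, 0.4, 85.0], [0.1, 0.05, 5.0], size=(100, 3))
X = np.vstack([neutral, stressed])
y = np.array([0] * 100 + [1] * 100)  # 0 = neutral, 1 = stressful condition

# Scale features (sensor modalities have very different ranges), then
# train an SVM to distinguish the two working conditions.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

Training such a model per subgroup of similar users, instead of one generic model, is how the individual-differences finding would be operationalized.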
de Boer, M., Schutte, K. & Kraaij, W. Multimed Tools Appl (2016) 75: 9025. doi:10.1007/s11042-015-2757-4
A common approach in content-based video information retrieval is to perform automatic shot annotation with semantic labels using pre-trained classifiers. The visual vocabulary of state-of-the-art automatic annotation systems is limited to a few thousand concepts, which creates a semantic gap between the semantic labels and the natural language query. One of the methods to bridge this semantic gap is to expand the original user query using knowledge bases. Both common knowledge bases such as Wikipedia and expert knowledge bases such as a manually created ontology can be used. Expert knowledge bases achieve the highest performance, but are only available in closed domains; only in closed domains can all necessary information, including structure and disambiguation, be made available in a knowledge base. Common knowledge bases are often used in open domains, because they cover a lot of general information. In this research, query expansion using the common knowledge bases ConceptNet and Wikipedia is compared to an expert description of the topic, applied to content-based retrieval of complex events. We run experiments on the Test Set of TRECVID MED 2014. Results show that 1) query expansion can improve performance compared to using no query expansion in cases where the main noun of the query cannot be matched to a concept detector; 2) query expansion using expert knowledge is not necessarily better than query expansion using common knowledge; 3) ConceptNet performs slightly better than Wikipedia; 4) late fusion can slightly improve performance. To conclude, query expansion has potential in complex event detection.
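The query-expansion step can be illustrated with a toy sketch. The detector vocabulary and the expansion table below are hypothetical stand-ins for the few thousand pre-trained concept detectors and the ConceptNet/Wikipedia lookups:

```python
# Hypothetical vocabulary of pre-trained concept detectors.
DETECTOR_VOCABULARY = {"dog", "bicycle", "wheel", "helmet", "road", "person"}

# Hypothetical common-knowledge expansions for query terms (stand-in for
# a ConceptNet or Wikipedia lookup).
RELATED_TERMS = {
    "bike": ["bicycle", "wheel", "cycling"],
    "repair": ["tool", "fix", "wheel"],
}

def expand_query(query):
    """Map each query word to detector-vocabulary concepts, falling back
    to knowledge-base related terms when the word itself has no detector."""
    selected = set()
    for word in query.lower().split():
        if word in DETECTOR_VOCABULARY:
            selected.add(word)
        else:
            selected.update(t for t in RELATED_TERMS.get(word, [])
                            if t in DETECTOR_VOCABULARY)
    return sorted(selected)

# "bike" has no detector of its own, so expansion recovers usable concepts:
concepts = expand_query("bike repair")
```

This mirrors the abstract's first finding: expansion helps precisely when a query noun cannot be matched directly to a concept detector.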