Abstract
Our strategy for the TREC KBA CCR track is to first retrieve as many vital or useful documents as possible, and then apply more sophisticated classification and ranking methods to differentiate vital from useful documents. We submitted 10 runs generated by 3 approaches: query expansion, classification, and learning to rank. Query expansion is an unsupervised baseline, in which we combine entities' names and their related entities' names as phrase queries to retrieve relevant documents. This baseline outperforms the overall median and mean submissions. System performance is further improved by the supervised classification and learning-to-rank methods. We mainly exploit three kinds of external resources to construct the features used in supervised learning: (i) the entry pages of Wikipedia entities or the profile pages of Twitter entities, (ii) existing citations in an entity's Wikipedia page, and (iii) bursts in an entity's Wikipedia page views. In the vital + useful task, one of our ranking-based methods achieves the best result among all participants. In the vital-only task, one of our classification-based methods achieves the overall best result.
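The query-expansion baseline above can be illustrated with a minimal sketch: the entity's name and each related entity's name are paired into phrase queries. The entity names, the pairing scheme, and the `build_phrase_queries` helper here are illustrative assumptions, not the authors' actual implementation.

```python
def build_phrase_queries(entity_name, related_names):
    """Combine an entity's name with its related entities' names
    into phrase queries (a hypothetical rendering of the baseline).

    The entity name alone is one phrase query; each related entity's
    name is appended to form an expanded phrase query.
    """
    queries = [f'"{entity_name}"']
    for related in related_names:
        queries.append(f'"{entity_name}" "{related}"')
    return queries


# Hypothetical example entity with two related entities:
queries = build_phrase_queries("Alice Smith", ["Acme Corp", "Bob Jones"])
for q in queries:
    print(q)
```

Each resulting phrase query would then be issued against the document stream, and any matching document is retained as a candidate for the later vital/useful classification stage.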
| Original language | English |
| --- | --- |
| Publication status | Published - 2013 |
| Event | 22nd Text REtrieval Conference, TREC 2013 - Gaithersburg, United States |
| Duration | 19 Nov 2013 → 22 Nov 2013 |
Conference
| Conference | 22nd Text REtrieval Conference, TREC 2013 |
| --- | --- |
| Country/Territory | United States |
| City | Gaithersburg |
| Period | 19/11/13 → 22/11/13 |