This repository contains the dataset used to analyze user preferences for podcast summaries. The study is described in the paper cited below.

We provide all the releasable data.

We also release a software package to download the copyrighted content.

Abstract

We address the challenge of extracting "query biased audio summaries" from podcasts to support users in making relevance decisions in spoken document search via an audio-only communication channel. We performed a crowdsourced experiment that demonstrates that transcripts of spoken documents created using Automated Speech Recognition (ASR), even with significant errors, are effective sources of document summaries or "snippets" for supporting users in making relevance judgments against a query. In particular, results show that summaries generated from ASR transcripts are comparable, in utility and user-judged preference, to spoken summaries generated from error-free manual transcripts of the same collection. We also observed that content-based audio summaries are at least as preferred as synthesized summaries obtained from manually curated metadata, such as title and description. We describe a methodology for constructing a new test collection which we have made publicly available.

Citation

Please cite the article below if you use this resource in your research:

Damiano Spina, Johanne R. Trippas, Lawrence Cavedon, Mark Sanderson.
Extracting Audio Summaries to Support Effective Spoken Document Search.
Journal of the Association for Information Science and Technology, 68(9): 2101--2115, 2017.

BibTeX

@article{spina2017extracting,
author = {Spina, Damiano and Trippas, Johanne R. and Cavedon, Lawrence and Sanderson, Mark},
title = {Extracting audio summaries to support effective spoken document search},
journal = {Journal of the Association for Information Science and Technology},
volume = {68},
number = {9},
issn = {2330-1643},
url = {http://dx.doi.org/10.1002/asi.23831},
doi = {10.1002/asi.23831},
pages = {2101--2115},
year = {2017}
}