Investigating perception of spoken dialogue acceptability through surprisal

Abstract

Surprisal is used throughout computational psycholinguistics to model a wide range of language processing behaviour. There is growing evidence that language model (LM) estimates of surprisal correlate with human performance on many written language comprehension tasks.
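For reference, surprisal has a standard information-theoretic definition (not specific to this paper): the surprisal of a word is its negative log-probability given the preceding context,

$$s(w_i) = -\log p(w_i \mid w_1, \ldots, w_{i-1}),$$

so any LM that assigns conditional probabilities to words yields surprisal estimates directly.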

Although communicative interaction is arguably the primary form of language use, most studies of surprisal are based on monological, written data. Towards the goal of understanding perception in spontaneous, natural language, we present an exploratory investigation into whether the relationship between human comprehension behaviour and LM-estimated surprisal holds when applied to dialogue, considering both written dialogue and the lexical component of spoken dialogue. We use a novel judgement task of dialogue utterance acceptability to ask two questions: “How well can people make predictions about written dialogue and transcripts of spoken dialogue?” and “Does surprisal correlate with these acceptability judgements?”.

We demonstrate that people can make accurate predictions about upcoming dialogue and that their ability differs between spoken transcripts and written conversation. We investigate the relationship between global and local operationalisations of surprisal and human acceptability judgements, finding that a combination of both provides the most predictive power.
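As an illustrative sketch (not the paper's implementation), the snippet below estimates per-token surprisal for a dialogue utterance with an off-the-shelf autoregressive LM, then aggregates it in two simple ways: a mean over the utterance as a global-style score and a per-token maximum as a local-style score. The choice of GPT-2, the example utterance, and these particular aggregations are assumptions for illustration; it requires the `torch` and `transformers` packages.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

# Load a small pretrained LM; any autoregressive LM would work here.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(text: str) -> list[float]:
    """Per-token surprisal, -log2 p(w_i | w_<i), under the LM (in bits)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Log-probabilities of each token given its left context;
    # the first token has no prediction, so it is skipped.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    nll = -log_probs[torch.arange(targets.size(0)), targets]
    return (nll / torch.log(torch.tensor(2.0))).tolist()  # nats -> bits

utterance = "Do you want to grab a coffee after the meeting?"
s = token_surprisals(utterance)
print(f"mean surprisal (global-style): {sum(s) / len(s):.2f} bits")
print(f"max surprisal  (local-style):  {max(s):.2f} bits")
```

Averaging smooths over the whole utterance, while the maximum picks out the single most unexpected token; comparing such scores against human acceptability judgements is one simple way to probe which view of surprisal tracks perception.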

Publication
In Interspeech 2022; Incheon, Korea

Awarded ISCA Best Student Paper

Check out our stimuli here.

Sarenne Wallbridge
Machine Learning PhD Student

My research interests include machine learning, psycholinguistics, and information theory.
