Aspectual Processing Shifts Visual Event Apprehension

Uğurcan Vurgun*, Yue Ji, Anna Papafragou

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

What is the relationship between language and event cognition? Past work has suggested that linguistic/aspectual distinctions encoding the internal temporal profile of events map onto nonlinguistic event representations. Here, we use a novel visual detection task to directly test the hypothesis that processing telic versus atelic sentences (e.g., “Ebony folded a napkin in 10 seconds” vs. “Ebony did some folding for 10 seconds”) can influence whether the very same visual event is processed as containing distinct temporal stages including a well-defined endpoint or lacking such structure, respectively. In two experiments, we show that processing (a)telicity in language shifts how people later construe the temporal structure of identical visual stimuli. We conclude that event construals are malleable representations that can align with the linguistic framing of events.

Original language: English
Article number: e13476
Journal: Cognitive Science
Volume: 48
Issue number: 6
DOI: https://doi.org/10.1111/cogs.13476
Publication status: Published - Jun 2024

Keywords

  • Aspect
  • Boundedness
  • Endpoints
  • Event
  • Telicity

Cite this

Vurgun, U., Ji, Y., & Papafragou, A. (2024). Aspectual Processing Shifts Visual Event Apprehension. Cognitive Science, 48(6), Article e13476. https://doi.org/10.1111/cogs.13476