A VibV Dataset Integrating Vibration and Vision for Enhanced Safety in Self-Driving Tasks

  • Yang Shen
  • Xinyu Zhang*
  • Lei Yang
  • Zhengxian Chen
  • Xiaofei Zhang
  • Li Wang
  • Bo Su
  • Dan Yin
  • Wenhao Yu
  • Jun Li

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Due to the complexity of real-world traffic scenarios, autonomous driving systems still face safety challenges and uncontrolled threats in blind spots. Current systems rely primarily on cameras, LiDAR, radar, and their fusion to perceive the environment. However, under special road conditions or extreme weather, these sensors may exhibit defects, resulting in false or missed detections that can lead to safety accidents. This paper proposes the VibV dataset, which introduces vehicle vibration signals into the perception system. By using vibration information as supervisory signals for the detection system, it enhances perception accuracy and thereby improves safety. The dataset records vibration signals and vision data simultaneously in scenes such as rumble strips and speed bumps. A total of 39 experiments were performed over two months, yielding 39 segments of vibration data and 22,677 original video frames. The vibration signals underwent preliminary processing, and the images were manually annotated and classified. Technical evaluations have demonstrated the dataset's usability and reliability. It can be applied to various autonomous driving tasks to enhance safety and robustness.

Original language: English
Article number: 74
Journal: Scientific Data
Volume: 13
Issue number: 1
Publication status: Published - Dec 2026
Externally published: Yes
