Hadoop Perfect File: A fast and memory-efficient metadata access archive file to face small files problem in HDFS

Yanlong Zhai*, Jude Tchaye-Kondi, Kwei Jay Lin, Liehuang Zhu, Wenjun Tao, Xiaojiang Du, Mohsen Guizani

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

20 Citations (Scopus)

Abstract

HDFS faces several issues when handling a large number of small files. These issues are well addressed by archive systems, which combine small files into larger ones and use index files to hold the information needed to retrieve a small file's content from the big archive file. However, existing archive-based solutions incur significant overheads when retrieving a file's content, since additional processing and I/Os are needed to acquire the retrieval information before the actual content can be accessed, which deteriorates access efficiency. This paper presents a new archive file named Hadoop Perfect File (HPF). HPF minimizes access overheads by reading metadata directly from the part of the index file that contains it, thereby reducing the additional processing and I/Os and improving the access efficiency of archive files. Our index system uses two hash functions: a dynamic hash function distributes metadata records across index files, and an order-preserving perfect hash function memorizes the position of a small file's metadata record within its index file.
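To make the two-level lookup path described in the abstract concrete, the Java sketch below illustrates the general idea, not the authors' implementation: the class name, the fixed 64-byte record size, the extendible-hashing-style directory depth, and the HashMap stand-in for a true order-preserving perfect hash (which would in practice be built with a minimal-perfect-hash scheme such as CHD over the archived file names) are all illustrative assumptions.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Illustrative two-level metadata lookup in the spirit of the abstract:
 * a dynamic hash selects the index file, and an order-preserving perfect
 * hash gives the record's position inside that file, so the metadata
 * record can be read with a single seek.
 */
public class TwoLevelIndexSketch {

    static final int RECORD_SIZE = 64; // assumed fixed-size metadata record
    static int globalDepth = 2;        // extendible-hashing-style directory depth

    /** Level 1: dynamic hash — the low `globalDepth` bits of the file
     *  name's hash select one of 2^globalDepth index files. */
    static int indexFileFor(String fileName) {
        return fileName.hashCode() & ((1 << globalDepth) - 1);
    }

    /** Level 2: stand-in for an order-preserving perfect hash. A real
     *  implementation would build a minimal perfect hash over the archived
     *  file names; a precomputed map keeps this sketch self-contained.
     *  Insertion order of the names is preserved. */
    static Map<String, Integer> buildOrderPreservingPhf(List<String> names) {
        Map<String, Integer> slotOf = new HashMap<>();
        for (int i = 0; i < names.size(); i++) {
            slotOf.put(names.get(i), i); // i-th name -> i-th slot
        }
        return slotOf;
    }

    /** Byte offset of a file's metadata record inside its index file. */
    static long recordOffset(Map<String, Integer> phf, String fileName) {
        return (long) phf.get(fileName) * RECORD_SIZE;
    }

    public static void main(String[] args) {
        List<String> archived = List.of("a.txt", "b.txt", "c.txt");
        Map<String, Integer> phf = buildOrderPreservingPhf(archived);
        String target = "b.txt";
        System.out.printf("index file #%d, seek to offset %d%n",
                indexFileFor(target), recordOffset(phf, target));
    }
}
```

Because the perfect hash yields the record's slot without probing, fetching a small file's metadata costs one seek and one read in the selected index file, which is the access-overhead reduction the abstract claims.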

Original language: English
Pages (from-to): 119-130
Number of pages: 12
Journal: Journal of Parallel and Distributed Computing
Volume: 156
DOIs
Publication status: Published - Oct 2021

Keywords

  • Distributed file system
  • Fast access
  • HDFS
  • Massive small files
