Abstract
Objective The luminance variation in natural scenes is extremely wide, whereas traditional cameras have a limited dynamic range. High dynamic range (HDR) imaging technology overcomes this limitation by preserving scene details more accurately: it maintains highlight information while enhancing shadow details, thus retaining scene information comprehensively. Consequently, HDR technology has been widely applied in digital photography, medical imaging, satellite remote sensing, and video production. Extensive research has been conducted globally on HDR fusion algorithms, with representative methods including entropy-based block fusion, tri-segment linear fitting, multi-exposure nonlinear fusion, and deep learning approaches. However, existing methods still suffer from high algorithmic complexity, poor real-time performance, and low resolution. We propose a fast HDR fusion method based on a large-area dual-channel scientific complementary metal oxide semiconductor (sCMOS) sensor. Using a field-programmable gate array (FPGA) as the core controller, we design a high-dynamic-range large-area real-time imaging system, along with HDR application strategies tailored to different scenarios. The system consumes minimal hardware resources, features a simple algorithm, and achieves superior imaging performance. It integrates the advantages of high-resolution and high-dynamic-range imaging, providing valuable insights for the design of similar instruments.

Methods We focus on the principle of fast HDR fusion and the integrated design of an FPGA-based imaging system. The dual-channel sCMOS sensor incorporates two amplifiers per pixel column, enabling simultaneous output of high-gain (HG) and low-gain (LG) images in a single exposure.
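The dual-gain fusion principle can be sketched in a few lines of NumPy. This is an illustrative model only: the saturation threshold and HG/LG gain ratio below are hypothetical placeholders, whereas the actual system derives them by calibrating the sensor's photoresponse curves.

```python
import numpy as np

# Hypothetical calibration parameters -- illustrative values only; the real
# system obtains them from the measured HG/LG photoresponse curves.
SAT_LEVEL = 3900     # HG grayscale above which a pixel is treated as saturated
GAIN_RATIO = 16.0    # assumed HG/LG responsivity ratio

def fuse_hdr(hg: np.ndarray, lg: np.ndarray) -> np.ndarray:
    """Fuse 12-bit high-gain and low-gain frames into one 16-bit HDR frame.

    Where the HG pixel is unsaturated it is used directly (best shadow
    detail); where it saturates, the LG pixel, rescaled by the calibrated
    gain ratio, supplies the highlight information.
    """
    hg = hg.astype(np.float64)
    lg = lg.astype(np.float64)
    fused = np.where(hg < SAT_LEVEL, hg, lg * GAIN_RATIO)
    # Grayscale range extends from 4095 (12 bit) toward 65535 (16 bit).
    return np.clip(fused, 0, 65535).astype(np.uint16)
```

A shadow pixel thus keeps its low-noise HG value, while a saturated highlight pixel is replaced by its rescaled LG value, which is what extends the usable dynamic range in a single exposure.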
The core principle of the HDR fusion method lies in exploiting the different photoresponse characteristics of the two channels: under high-intensity illumination, the HG image becomes overexposed and saturated, so LG data must be used to preserve highlight details; conversely, under low-intensity illumination, the LG image exhibits generally low grayscale values, so the HG image is required to capture shadow details. First, the HG and LG photoresponse curves are precisely calibrated and parameterized to derive a linear fusion function. Subsequently, an FPGA with a DDR3-based hardware architecture is utilized to integrate on-chip image acquisition, fast HDR fusion, image storage, and transmission. To address the challenges of large-area, high-bit-width data transmission, a shift-based segmented transmission mechanism and an alternating-frame transmission strategy are proposed. Finally, the system is developed and evaluated through experimental tests, demonstrating the effectiveness of HDR fusion. Application strategies for different lighting conditions are proposed: a high-gain-dominant fusion method is adopted for low-light scenarios, whereas a low-gain-dominant fusion method is employed for high-light-intensity environments.

Results and Discussions The designed high-dynamic-range large-area real-time imaging system consumes minimal hardware resources (Table 1), achieves an imaging resolution of 4096 pixel×4096 pixel with 16 bit depth, and extends the grayscale range from 4095 to 65535 (Fig. 8), combining the advantages of large-area cameras and high-dynamic-range imaging. Experimental results show that the proposed fast HDR fusion method requires the shortest processing time, only 35 ms, compared with the three-segment curve fitting and information entropy block fusion algorithms, while maintaining superior HDR imaging performance (Fig. 10), with 8.5% and 14.1% increases in image entropy, respectively (Table 2). Furthermore, scenario-specific HDR strategies are proposed (Fig. 13): for low-light conditions (Fig. 14), a high-gain-dominant fusion method is employed, whereas for high-light conditions (Fig. 15), a low-gain-dominant approach is applied to achieve optimal HDR effects across diverse environments.

Conclusions This work integrates the advantages of large-area cameras and high-dynamic-range imaging through the design of a high-dynamic-range large-area real-time imaging system. Based on the large-area dual-channel sCMOS sensor GSENSE4040BSI, a fast HDR fusion method is proposed alongside scenario-specific HDR strategies. To address the challenges of large-area image data transmission, a shift-based segmented transmission method and a dual-channel alternating-frame transmission mechanism are introduced. A complete real-time imaging system is developed: first, the HG and LG photoresponse curves of the sCMOS sensor are tested and calibrated to derive a linear fusion function. Then, on the FPGA hardware platform, on-chip integration of image acquisition, fast HDR fusion, image storage, and transmission is achieved. Ultimately, a 4096 pixel×4096 pixel HDR real-time system with a maximum frame rate of 24 frame/s is implemented. The proposed algorithm is simple yet effective, requiring only 35 ms for processing, demonstrating high real-time performance, and extending the pixel bit depth from 12 bit to 16 bit. It successfully preserves both highlight and shadow details, improving image entropy by 11.3% compared with other algorithms. The system satisfies real-time imaging demands for high dynamic range and high resolution. Further development may target higher frame rates, miniaturization, and self-adaptive HDR mechanisms.
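The image entropy figure of merit used to compare the fusion algorithms is commonly defined as the Shannon entropy of the image's grayscale histogram; the exact formulation used in the paper is not restated here, so the following is a minimal sketch under that standard definition, assuming 16-bit images:

```python
import numpy as np

def image_entropy(img: np.ndarray, levels: int = 65536) -> float:
    """Shannon entropy (in bits) of an image's grayscale histogram.

    A richer, more evenly spread grayscale distribution yields higher
    entropy, which is the usual criterion for comparing HDR fusion results.
    """
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()      # normalize counts to probabilities
    p = p[p > 0]               # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())
```

For example, a constant image has zero entropy, while an image split evenly between two gray levels has exactly 1 bit, so a fused 16-bit frame that spreads detail across more levels scores higher than either saturated source frame.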
| Translated title of the contribution | Design of High Dynamic Large-Area Scientific CMOS Real-Time System |
| --- | --- |
| Original language | Chinese (Traditional) |
| Article number | 0911001 |
| Journal | Guangxue Xuebao/Acta Optica Sinica |
| Volume | 45 |
| Issue number | 9 |
| DOIs | |
| Publication status | Published - May 2025 |
| Externally published | Yes |