TY - JOUR
T1 - Review of depth perception research in virtual-real fusion scenes
AU - Ping, Jiamin
AU - Liu, Yue
AU - Weng, Dongdong
N1 - Publisher Copyright:
© 2021, Editorial Office of Journal of Image and Graphics. All rights reserved.
PY - 2021/6/16
Y1 - 2021/6/16
AB - Mixed reality systems provide a virtual-real fusion environment in which virtual objects are added to the real world in real time. They are widely used in education, training, heritage preservation, military simulation, equipment manufacturing, surgery, and exhibitions. A mixed reality system uses calibration data to build a virtual camera model, renders virtual content in real time from head-tracking data and the virtual camera's position, and finally superimposes the virtual content on the real environment. The user perceives a virtual object's depth by integrating graphical cues with the rendered features of the virtual object. When the user observes the virtual-real fusion scene presented by a mixed reality system, three processes take place: 1) Different distance information is converted into respective distance signals; the key factor here is the presentation of the scene through rendering technology, and the user judges distance from the inherent characteristics of the virtual object. 2) The user recognizes the other visual stimulus variables in the scene and converts the respective distance signals into a final distance signal; the key factor is the depth cues provided by the virtual-real fusion scene, which the user needs in order to locate the object. 3) The user determines the distance relationships among the objects in the scene and converts the final distance signal into the corresponding indicated distance; the key factor is the visual laws of the human eye when viewing the virtual-real scene. However, three problems remain: the visual principles and perception theories that could guide the rendering of virtual-real fusion scenes are lacking, the absolute depth information that graphical cues can provide is missing, and the rendering dimensions and characteristic indicators of virtual objects are insufficient. First, studies of the visual laws and perception theories that can guide the rendering of virtual-real scenes are limited. The visual model and perceptual laws of the human eye when viewing virtual-real fusion scenes should be studied to form effective application guidance, so that visual laws can be applied in the design and development of such scenes and the accuracy of depth perception can be increased; an improved rendering effect in turn improves the interaction efficiency and user experience of mixed reality applications. Second, graphical cues that provide effective absolute depth information should be generated, the characteristics of different graphical cues should be extracted, and their effects on depth perception should be quantified to help users perceive the depth of the target object; this improves user performance in depth perception and provides a basis for rendering virtual-real scenes.
Third, reasonable parameter indicators and effective object rendering methods should be studied, interaction models among different features should be built, and the role of each rendering characteristic in depth perception should be clarified to determine which characteristics dominate the rendering of virtual objects in virtual-real scenes, providing a further basis for rendering the fusion scene. This review first analyzes the visual principles underlying the rendering of virtual-real fusion environments, then summarizes the rendering of graphical cues and virtual objects in such scenes, and finally discusses research trends in depth perception. When viewing virtual-real scenes, humans perceive objects through the visual system; the visual function factors related to the perception mechanism and the guiding effect of visual laws on depth perception should therefore be studied to optimize scene rendering. With the development and application of perception technology in mixed reality, many researchers have in recent years studied ground contact theory, the anisotropy of human visual perception, and the distribution of gaze points in depth perception. The background environment and virtual objects in a virtual-real fusion scene can provide users with depth cues, and most existing studies add various depth cues to the scene and explore experimentally the relationship between the added depth information and depth perception. With the rapid development of computer graphics, a growing number of graphics techniques have been applied to the creation of virtual-real fusion scenes to strengthen the depth cues of virtual objects, including linear perspective, graphical techniques that indicate position information, and graphics techniques that create X-ray vision. The virtual objects presented by a mixed reality system are an important part of the virtual-real fusion environment; to study the role of their inherent characteristics in depth perception, researchers have experimentally quantified the size, color, brightness, transparency, texture, and surface lighting of virtual objects. These rendering-based characteristics were derived from 17th-century painting techniques, but they differ from traditional pictorial depth cues.
KW - Depth cues
KW - Depth perception
KW - Mixed reality
KW - Perceptual matching
KW - Real and virtual fusion environment
KW - Scene rendering
KW - Visual law
UR - http://www.scopus.com/inward/record.url?scp=85109027248&partnerID=8YFLogxK
U2 - 10.11834/jig.210027
DO - 10.11834/jig.210027
M3 - Literature review
AN - SCOPUS:85109027248
SN - 1006-8961
VL - 26
SP - 1503
EP - 1520
JO - Journal of Image and Graphics
JF - Journal of Image and Graphics
IS - 6
ER -