Paper Title
Combined Hapto-Visual and Auditory Rendering of Cultural Heritage Objects
Paper Authors
Paper Abstract
In this work, we develop a multi-modal rendering framework comprising hapto-visual and auditory data. The prime focus is to haptically render point cloud data representing virtual 3-D models of cultural significance and to handle their affine transformations. Cultural heritage objects can be very large, and one may need to render an object at various scales of detail. Further, surface effects such as texture and friction are incorporated to provide users with a realistic haptic perception. Moreover, the proposed framework includes appropriate sound synthesis to bring out the acoustic properties of the object. It also includes a graphical user interface with varied options, such as choosing the desired orientation of a 3-D object and adaptively selecting the desired level of spatial resolution at runtime. A fast, point proxy-based haptic rendering technique is proposed, with the proxy update loop running 100 times faster than the required haptic update frequency of 1 kHz. Surface properties are integrated into the system by applying a bilateral filter to the depth data of the virtual 3-D models. Position-dependent sound synthesis is achieved by incorporating appropriate audio clips.
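The abstract mentions applying a bilateral filter to the depth data of the 3-D models to integrate surface properties. The following is a minimal sketch of such an edge-preserving bilateral filter on a 2-D depth map; the function name, window radius, and sigma values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def bilateral_filter_depth(depth, radius=2, sigma_s=1.0, sigma_r=0.1):
    """Edge-preserving smoothing of a 2-D depth map.

    Illustrative sketch only: parameters (radius, sigma_s, sigma_r)
    are assumed, not the paper's actual settings.
    """
    h, w = depth.shape
    out = np.zeros((h, w), dtype=float)
    # Precompute the spatial Gaussian weights over the filter window.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    # Replicate border depths so the window is defined at the edges.
    pad = np.pad(depth.astype(float), radius, mode="edge")
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range weight: penalize large depth differences so that
            # sharp surface features (edges) are preserved.
            rng = np.exp(-((patch - depth[i, j])**2) / (2.0 * sigma_r**2))
            weight = spatial * rng
            out[i, j] = np.sum(weight * patch) / np.sum(weight)
    return out
```

On a noise-free step edge the filter leaves the edge intact (the range weight suppresses contributions from across the depth discontinuity), while in noisy flat regions it averages neighboring depths, which is the behavior a haptic rendering pipeline needs to avoid smoothing away surface detail.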