Similar Articles
20 similar articles found (search time: 31 ms)
1.
In the last decade a new family of methods, namely Image‐Based Rendering, has appeared. These techniques rely on precomputed images to totally or partially substitute the geometric representation of the scene, which makes it possible to obtain realistic renderings even with modest resources. The main problem is the amount of data needed, due to the high redundancy of the images and the high computational cost of capturing them. In this paper we present a new method to automatically determine camera placements that yield a minimal set of views for Image‐Based Rendering. The input is a 3D polyhedral model including textures, and the output is a set of viewpoints that covers every visible polygon and samples it at an adequate rate. This avoids the excessive data redundancy present in several other approaches and reduces the cost of the capturing process, since fewer reference views actually have to be computed. Interesting viewpoints are located with the aid of an information theory‐based measure, dubbed viewpoint entropy, which quantifies the amount of information seen from a viewpoint. We then develop a greedy algorithm to minimize the number of images needed to represent the scene. In contrast to other approaches, our system applies a special preprocess to textures to avoid artifacts in partially occluded textured polygons, so no visible detail of these images is lost. ACM CCS: I.3.7 Computer Graphics—Three‐Dimensional Graphics and Realism
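To make the two key steps concrete, here is a minimal, illustrative Python sketch of (a) viewpoint entropy computed from the relative projected areas of the visible faces and (b) a greedy cover over candidate viewpoints. The `projected_areas` and `visible_sets` inputs are assumed to come from an external renderer or visibility pass, and the paper's sampling-rate criterion is reduced here to simple set coverage.

```python
import numpy as np

def viewpoint_entropy(projected_areas):
    """Shannon entropy of the relative projected areas of the faces visible
    from one viewpoint (the background can be included as one extra term)."""
    a = np.asarray(projected_areas, dtype=float)
    a = a[a > 0]
    p = a / a.sum()
    return -np.sum(p * np.log2(p))

def greedy_view_selection(visible_sets, n_polygons):
    """Greedily pick viewpoints until every polygon is covered at least once.
    visible_sets[i] is the set of polygon ids adequately sampled from view i."""
    uncovered = set(range(n_polygons))
    chosen = []
    while uncovered:
        # pick the candidate view that covers the most still-uncovered polygons
        best = max(range(len(visible_sets)),
                   key=lambda i: len(visible_sets[i] & uncovered))
        gain = visible_sets[best] & uncovered
        if not gain:  # remaining polygons are not visible from any candidate
            break
        chosen.append(best)
        uncovered -= gain
    return chosen
```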

2.
A specific technique, viewpoint resolution, is proposed as a means of providing early validation of the requirements for a complex system, and some initial empirical evidence of the effectiveness of a semi-automated implementation of the technique is provided. The technique is based on the observation that software requirements can and should be elicited from different viewpoints, and that examining the differences between them can assist in the early validation of requirements. A language for expressing views from different viewpoints and a set of analogy heuristics for performing a syntactically oriented analysis of views are proposed. This analysis of views is capable of differentiating between missing information and conflicting information, thus providing support for viewpoint resolution.

3.
While many measures of viewpoint goodness have been proposed in computer graphics, none have been evaluated for ribbon representations of protein secondary structure. To fill this gap, we conducted a user study on Amazon's Mechanical Turk platform, collecting human viewpoint preferences from 65 participants for 4 representative superfamilies of protein domains. In particular, we evaluated viewpoint entropy, which was previously shown to be a good predictor for human viewpoint preference of other, mostly non‐abstract objects. In a second study, we asked 7 experts in molecular biology to find the best viewpoint of the same protein domains and compared their choices with viewpoint entropy. Our results indicate that viewpoint entropy overall is a significant predictor of human viewpoint preference for ribbon representations of protein secondary structure. However, the accuracy depends on the type and composition of the structure: while most participants agree on good viewpoints for structures with mainly beta sheets, viewpoint preference varies considerably for complex arrangements of alpha helices. Finally, experts tend to choose viewpoints of both low and high viewpoint entropy to emphasize different aspects of the respective structure.

4.
5.
This paper is dedicated to virtual world exploration techniques that help humans to understand a 3D scene. The paper presents a technique to calculate the quality of a viewpoint for a scene, and describes how this information can be used. A two-step method for an automatic real-time scene exploration is introduced. In the first step, a minimal set of “good” points of view is determined; in the second step, these viewpoints are used to compute a camera path around the scene. The proposed method enables one to get a good comprehension of a single virtual artifact or of the scene structure.

6.
Free-viewpoint video (FVV) is a promising approach that allows users to control their viewpoint and generate virtual views from any desired perspective. The individual user viewpoints are synthesized from two or more camera streams and corresponding depth sequences. When the viewpoint changes continuously, the camera inputs of the view-synthesis process must be switched seamlessly to avoid starving the viewpoint synthesizer. Starvation occurs when the desired user viewpoint cannot be synthesized from the currently streamed camera views, and the FVV playout therefore interrupts. In this paper, we propose three camera handover schemes (TCC, MA, and SA) based on viewpoint prediction that minimize the probability of playout stalls and balance image quality against camera handover frequency. Our simulation results show that the introduced camera switching methods reduce the handover frequency by more than 40%, so viewpoint-synthesis starvation and playout interruption can be minimized. By providing seamless viewpoint changes, the quality of experience can be significantly improved, making the new FVV service more attractive in the future.
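As a rough illustration of prediction-driven handover (not the paper's TCC/MA/SA schemes), the sketch below extrapolates a one-dimensional user viewpoint coordinate and requests a camera switch before the predicted viewpoint leaves the range that the currently streamed camera pair can synthesize. The coordinate, the margin, and the linear predictor are all assumptions made for the example.

```python
def predict_viewpoint(history, lookahead=1.0):
    """Linear extrapolation of a one-dimensional user viewpoint coordinate."""
    if len(history) < 2:
        return history[-1]
    velocity = history[-1] - history[-2]
    return history[-1] + velocity * lookahead

def needs_handover(predicted_vp, left_cam, right_cam, margin=0.1):
    """Request a camera switch before the predicted viewpoint leaves the
    interval that the currently streamed camera pair can synthesize."""
    return not (left_cam + margin <= predicted_vp <= right_cam - margin)
```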

7.
This paper presents a new and general solution to the problem of range view integration. The integration problem consists in computing a connected surface model from a set of registered range images acquired from different viewpoints. The proposed method does not impose constraints on the topology of the observed surfaces, the position of the viewpoints, or the number of views that can be merged. The integrated surface model is piecewise estimated by a set of triangulations modeling each canonical subset of the Venn diagram of the set of range views. Connecting these local models by constrained Delaunay triangulations yields a non-redundant surface triangulation describing all surface elements sampled by the set of range views. Experimental results show that the integration technique can be used to build connected surface models of free-form objects; no integrated models built from objects of such complexity had previously been reported in the literature. It is assumed that accurate range views are available and that frame transformations between all pairs of views can be reliably computed.

8.
The ability to recognize human actions using a single viewpoint is affected by phenomena such as self-occlusions or occlusions by other objects. Incorporating multiple cameras can help overcome these issues. However, the question remains how to efficiently use information from all viewpoints to increase performance. Researchers have reconstructed a 3D model from multiple views to reduce dependency on viewpoint, but this 3D approach is often computationally expensive. Moreover, the quality of each view influences the overall model, and the reconstruction is limited to volumes where the views overlap. In this paper, we propose a novel method to efficiently combine 2D data from different viewpoints. Spatio-temporal features are extracted from each viewpoint and then used in a bag-of-words framework to form histograms. Two different codebook sizes are exploited. The similarity between the obtained histograms is represented via the Histogram Intersection kernel as well as the RBF kernel with \(\chi^2\) distance. Lastly, we combine all the basic kernels generated by selection of different viewpoints, feature types, codebook sizes and kernel types. The final kernel is a linear combination of basic kernels that are properly weighted based on an optimization process. For higher accuracy, the sets of kernel weights are computed separately for each binary SVM classifier. Our method not only combines the information from multiple viewpoints efficiently, but also improves the performance by mapping features into various kernel spaces. The efficiency of the proposed method is demonstrated by testing on two commonly used multi-view human action datasets. Moreover, several experiments indicate the contribution of each part of the method to the overall performance.
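The two basic kernels named in the abstract can be sketched as follows; the inputs are assumed to be rows of L1-normalized bag-of-words histograms stored as NumPy arrays, and the combination shown uses fixed weights, whereas the paper optimizes per-classifier weights.

```python
import numpy as np

def histogram_intersection_kernel(X, Y):
    """K[i, j] = sum_k min(X[i, k], Y[j, k]) for L1-normalized BoW histograms."""
    return np.array([[np.minimum(x, y).sum() for y in Y] for x in X])

def chi2_rbf_kernel(X, Y, gamma=1.0):
    """K[i, j] = exp(-gamma * chi2(X[i], Y[j])),
    with chi2(x, y) = sum_k (x_k - y_k)^2 / (x_k + y_k)."""
    K = np.zeros((len(X), len(Y)))
    for i, x in enumerate(X):
        for j, y in enumerate(Y):
            denom = x + y
            mask = denom > 0
            K[i, j] = np.sum((x[mask] - y[mask]) ** 2 / denom[mask])
    return np.exp(-gamma * K)

def combined_kernel(kernels, weights):
    """Weighted sum of base kernels (one per viewpoint/feature/codebook/kernel choice)."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(w * K for w, K in zip(weights, kernels))
```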

9.
10.
Direct replay of the experience of a user in a virtual environment is difficult for others to watch due to unnatural camera motions. We present methods for replaying and summarizing these egocentric experiences that effectively communicate the user's observations while reducing unwanted camera movements. Our approach summarizes the viewpoint path as a concise sequence of viewpoints that cover the same parts of the scene. The core of our approach is a novel content-dependent metric that can be used to identify similarities between viewpoints. This enables viewpoints to be grouped by similar contextual view information and provides a means to generate novel viewpoints that can encapsulate a series of views. These resulting encapsulated viewpoints are used to synthesize new camera paths that convey the content of the original viewer's experience. Projecting the initial movement of the user back onto the scene can be used to convey the details of their observations, and the extracted viewpoints can serve as bookmarks for control or analysis. Finally, we present a performance analysis along with two forms of validation to test whether the extracted viewpoints are representative of the viewer's original observations and to test the overall effectiveness of the presented replay methods.
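The paper's content-dependent viewpoint metric is not spelled out in the abstract; as one plausible stand-in (an assumption, not the authors' metric), the sketch below groups consecutive viewpoints whose sets of visible scene elements overlap strongly (Jaccard similarity), illustrating how a recorded path can be segmented into runs that see similar content and then summarized by one representative per run.

```python
def jaccard(a, b):
    """Overlap of the scene elements visible from two viewpoints."""
    return len(a & b) / len(a | b) if (a or b) else 1.0

def segment_viewpoint_path(visible_sets, threshold=0.6):
    """Split a recorded viewpoint path into runs that see similar content;
    one representative per run can then serve as a summary keyframe/bookmark."""
    segments, current = [], [0]
    for i in range(1, len(visible_sets)):
        if jaccard(visible_sets[i], visible_sets[current[0]]) >= threshold:
            current.append(i)
        else:
            segments.append(current)
            current = [i]
    segments.append(current)
    return segments
```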

11.
Non-Single Viewpoint Catadioptric Cameras: Geometry and Analysis   (Cited: 1; self-citations: 0, by others: 1)
Conventional vision systems and algorithms assume the imaging system to have a single viewpoint. However, these imaging systems need not always maintain a single viewpoint. For instance, an incorrectly aligned catadioptric system could cause non-single viewpoints. Moreover, a lot of flexibility in imaging system design can be achieved by relaxing the single-viewpoint requirement. Thus, imaging systems with non-single viewpoints can be designed for specific imaging tasks, or for image characteristics such as field of view and resolution. The viewpoint locus of such imaging systems is called a caustic. In this paper, we present an in-depth analysis of caustics of catadioptric cameras with conic reflectors. We use a simple parametric model for both the reflector and the imaging system to derive an analytic solution for the caustic surface. This model completely describes the imaging system and provides a map from pixels in the image to their corresponding viewpoints and viewing directions. We use the model to analyze the imaging system's properties such as field of view, resolution and other geometric properties of the caustic itself. In addition, we present a simple technique to calibrate the class of conic catadioptric cameras and estimate their caustics from known camera motion. The analysis and results we present in this paper are general and can be applied to any catadioptric imaging system whose reflector has a parametric form.
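For background on how a caustic arises, here is a textbook-style sketch (not the paper's derivation), taken in a 2D cross-section of a rotationally symmetric reflector whose curve is parameterized by $t$; the notation is illustrative only.

```latex
% Mirror reflection at reflector point x(t) with unit normal n(t) and
% incoming unit direction d(t):
\[
  \mathbf{d}_r(t) = \mathbf{d}(t) - 2\bigl(\mathbf{d}(t)\cdot\mathbf{n}(t)\bigr)\,\mathbf{n}(t)
\]
% The reflected rays form a one-parameter family in the cross-section plane:
\[
  \mathbf{r}(t,s) = \mathbf{x}(t) + s\,\mathbf{d}_r(t), \qquad s \ge 0
\]
% The caustic (viewpoint locus) is the envelope of this family, i.e. the set of
% points where the map (t, s) -> r(t, s) becomes singular:
\[
  \det\!\begin{bmatrix}
    \dfrac{\partial \mathbf{r}}{\partial t} & \dfrac{\partial \mathbf{r}}{\partial s}
  \end{bmatrix} = 0
\]
```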

12.
13.
Existing view measures cannot describe both the global features and the local details of a 3D model at the same time, so it is difficult to obtain an ideal best view. This paper proposes a best-view extraction algorithm that combines statistical classification with edge detail features of the views. First, AdaBoost is used for example-based learning, and a candidate view set is obtained from the geometric feature similarity between best views. Then, an edge distribution entropy is defined to analyze the local features of the candidate views and extract the best view, so that the extracted best view effectively describes both the structural features and the intrinsic details of the 3D model and agrees with human visual perception. Finally, the algorithm is statistically evaluated on a 3D model database. Experimental results show that the proposed algorithm outperforms similar best-view algorithms.
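The abstract does not define the edge distribution entropy precisely; the sketch below shows one plausible reading (an assumption, not the paper's formula): compute an edge-magnitude map of the rendered view, accumulate edge energy over a regular grid of cells, and take the entropy of that distribution.

```python
import numpy as np
from scipy import ndimage

def edge_distribution_entropy(view_gray, grid=(8, 8)):
    """Entropy of how edge energy is distributed over a grid of cells in one
    rendered view (higher values = edges spread over more of the image)."""
    img = np.asarray(view_gray, dtype=float)
    mag = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
    h, w = mag.shape
    cells = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cells.append(mag[i * h // grid[0]:(i + 1) * h // grid[0],
                             j * w // grid[1]:(j + 1) * w // grid[1]].sum())
    p = np.asarray(cells)
    p = p[p > 0]
    p /= p.sum()
    return -np.sum(p * np.log2(p))
```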

14.
In 3D visualization, the best viewpoint is usually selected by manual trial and error, which leads to many repeated attempts and low efficiency. To address this problem, a particle swarm based viewpoint optimization method is proposed. The method represents viewpoints with a multi-resolution hierarchy and introduces image information entropy to evaluate the quality of the 3D image rendered from each viewpoint; the entropy value serves both as the basis for viewpoint optimization and as the fitness function of the particle swarm. The particle swarm algorithm then optimizes viewpoints intelligently and automatically within the 3D visualization, achieving best-viewpoint selection. Experimental results show that the method converges quickly, effectively reduces the number of evaluations, and improves both the quality of the rendered images and the rendering efficiency of the 3D visualization.
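A minimal sketch of the optimization loop described above: particles move over (azimuth, elevation), and the fitness of a particle is the gray-level information entropy of the image rendered from its viewpoint. The `render_view` callback and all PSO constants are placeholders, and the paper's multi-resolution viewpoint hierarchy is omitted.

```python
import numpy as np

def image_entropy(gray_image, bins=256):
    """Shannon entropy of the gray-level histogram of a rendered frame."""
    hist, _ = np.histogram(gray_image, bins=bins, range=(0, 255))
    p = hist[hist > 0] / hist.sum()
    return -np.sum(p * np.log2(p))

def pso_best_viewpoint(render_view, n_particles=20, iters=30,
                       w=0.7, c1=1.5, c2=1.5, seed=0):
    """Particle swarm over (azimuth, elevation); the fitness of a particle is
    the entropy of the image rendered from its viewpoint.
    render_view(azimuth, elevation) -> 2-D gray image must be supplied."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform([0.0, -np.pi / 2], [2 * np.pi, np.pi / 2],
                      size=(n_particles, 2))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([image_entropy(render_view(*p)) for p in pos])
    gbest = pbest[np.argmax(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        pos[:, 0] %= 2 * np.pi                                 # wrap azimuth
        pos[:, 1] = np.clip(pos[:, 1], -np.pi / 2, np.pi / 2)  # clamp elevation
        f = np.array([image_entropy(render_view(*p)) for p in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[np.argmax(pbest_f)].copy()
    return gbest  # (azimuth, elevation) of the best viewpoint found
```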

15.
Liu Zhi, Li Jiangchuan. Computer Science (《计算机科学》), 2019, 46(1): 278-284
To exploit 3D model datasets more effectively for autonomous feature learning, a 3D model retrieval method is proposed that takes natural images as the input source, builds on a set of better views of the 3D models, and trains a deep convolutional neural network to extract deep features for retrieval. First, views of each 3D model are extracted from multiple viewpoints, and the better views are selected by ranking their gray-level entropy. Then, a deep convolutional neural network is trained on the view set to extract deep features of the better views, whose dimensionality is then reduced; at the same time, an edge contour map is extracted from the input natural image, and a group of 3D models is obtained by similarity matching. Finally, the retrieval list is re-ranked according to the proportion of models of the same class within the list, yielding the final retrieval result. Experimental results show that the algorithm effectively uses a deep convolutional neural network to extract deep features from views of 3D models, lowers the difficulty of obtaining the input source, and effectively improves retrieval performance.
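The final re-ranking step can be sketched as follows, assuming each retrieved model carries a class label; an item's matching score is scaled by the share of the retrieval list occupied by its class, which is one literal reading of the rule in the abstract rather than the paper's exact implementation.

```python
from collections import Counter

def rerank_by_class_proportion(retrieved, similarities):
    """retrieved: list of (model_id, class_label) from the similarity matching;
    similarities: the corresponding matching scores.  Models whose class
    occupies a larger share of the retrieval list are moved up."""
    counts = Counter(label for _, label in retrieved)
    n = len(retrieved)
    scored = [(sim * counts[label] / n, model_id)
              for (model_id, label), sim in zip(retrieved, similarities)]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [model_id for _, model_id in scored]
```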

16.
Traditional multi-viewpoint methods have many shortcomings. Building on them, this paper proposes a behavior-based multi-viewpoint requirements description method (BVORA). It describes requirements from a behavioral perspective, proposes a scenario-based partitioning of viewpoints, gives the definition and representation of viewpoints, and defines the relationships between viewpoints.

17.
Machine vision, and especially deep learning methods, has become a hot topic for product surface inspection. In practice, capturing high-quality images is the basis for defect detection. This is challenging for complex products, as image quality suffers from occlusion, illumination, and other issues. Multiple images from different viewpoints are often required in this scenario to cover all the important areas of the product. Reducing the number of viewpoints while ensuring coverage is the key to making the inspection system more efficient in production. This paper proposes a highly efficient view planning method based on deep reinforcement learning to solve this problem. First, a visibility estimation method is developed so that the visible areas can be quickly identified for a given viewpoint. Then, a new reward function is designed, and the Asynchronous Advantage Actor-Critic method is applied to solve the view planning problem. The effectiveness and efficiency of the proposed method are verified with a set of experiments. The proposed method could also be applied to other similar vision-based tasks.
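As an illustration of a coverage-style reward of the kind the abstract describes (newly visible area gained by a viewpoint, minus a per-view cost, plus a terminal bonus), consider the sketch below; the patch sets, the cost, and the bonus are assumptions, and the paper's actual reward function and A3C training loop are not reproduced.

```python
def view_planning_reward(newly_visible, covered, required,
                         view_cost=0.05, done_bonus=1.0):
    """newly_visible / covered / required are sets of surface-patch ids.
    Reward = normalized coverage gain - a fixed cost per extra viewpoint,
    plus a bonus once the required area is fully covered."""
    gain = len(newly_visible - covered) / max(len(required), 1)
    covered_after = covered | newly_visible
    done = required <= covered_after
    reward = gain - view_cost + (done_bonus if done else 0.0)
    return reward, covered_after, done
```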

18.
Viewpoint selection based on feature computation on the view plane   (Cited: 3; self-citations: 2, by others: 1)
A viewpoint selection method that allows 3D models to be observed efficiently is proposed. Based on the view plane, the method defines a curvature feature that measures how the 3D geometric features of the model are distributed and represented on the view plane, and from this computes a viewpoint-dependent entropy value; comparing this entropy across candidate viewpoints yields the viewpoint with the maximum entropy. Experimental results show that the viewpoints found by this method reveal as many salient features of the model as possible and are close to human viewing habits. Compared with existing methods, the proposed method is computationally simple, requires no semantic computation, and is highly efficient.

19.
An execution view is an important asset for developing large and complex systems. An execution view helps practitioners to describe, analyze, and communicate what a software system does at runtime and how it does it. In this paper, we present an approach to define and document viewpoints that guide the construction and use of execution views for an existing large and complex software-intensive system. This approach includes the elicitation of the organization's requirements for execution views, the initial definition and validation of a set of execution viewpoints, and the documentation of the execution viewpoints. The validation and application of the approach have helped us to produce mature viewpoints that are being used to support the construction and use of execution views of the Philips Healthcare MRI scanner, a representative large software-intensive system in the healthcare domain.

20.
LiveSync: deformed viewing spheres for knowledge-based navigation   (Cited: 1; self-citations: 0, by others: 1)
Although real-time interactive volume rendering is available even for very large data sets, this visualization method is used quite rarely in clinical practice. We suspect this is because it is very complicated and time-consuming to adjust the parameters to achieve meaningful results. The clinician has to take care of the appropriate viewpoint, zooming, transfer function setup, clipping planes and other parameters. Because of this, most often only 2D slices of the data set are examined. Our work introduces LiveSync, a new concept for synchronizing 2D slice views and volumetric views of medical data sets. Through intuitive picking actions on the slice, the users define the anatomical structures they are interested in. The 3D volumetric view is updated automatically with the goal of providing the users with expressive result images. To achieve this live synchronization we use a minimal set of derived information, without the need for segmented data sets or data-specific pre-computations. The components we consider are the picked point, slice view zoom, patient orientation, viewpoint history, local object shape and visibility. We introduce deformed viewing spheres which encode the viewpoint quality for these components. A combination of these deformed viewing spheres is used to estimate a good viewpoint. Our system provides the physician with synchronized views which help to gain deeper insight into the medical data with minimal user interaction.
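A rough sketch of how several per-direction quality maps ("deformed viewing spheres") might be combined into a single viewpoint estimate; the sphere sampling, the weights, and the multiplicative combination are assumptions made for illustration, not LiveSync's actual formulation.

```python
import numpy as np

def sample_sphere_directions(n=500):
    """Roughly uniform viewing directions on the unit sphere (Fibonacci spiral)."""
    i = np.arange(n) + 0.5
    phi = np.arccos(1 - 2 * i / n)           # polar angle
    theta = np.pi * (1 + 5 ** 0.5) * i       # golden-angle azimuth
    return np.stack([np.sin(phi) * np.cos(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(phi)], axis=1)

def best_viewpoint(component_scores, weights):
    """component_scores: one quality value per direction for each component
    (picked point, zoom, patient orientation, history, shape, visibility).
    The maps are combined multiplicatively with per-component weights and the
    index of the direction with the highest combined quality is returned."""
    combined = np.ones(component_scores[0].shape, dtype=float)
    for scores, w in zip(component_scores, weights):
        combined *= np.asarray(scores, dtype=float) ** w
    return int(np.argmax(combined)), combined
```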
