Similar Documents
20 similar documents found
1.
A fast hidden-surface-removal rendering algorithm combining hierarchical occlusion maps with image caching is proposed. The algorithm first exploits spatial coherence to perform fast, conservative occlusion culling of the scene; potentially visible near and middle-range geometry is rendered geometrically, while potentially visible distant geometry is rendered with an accelerated hybrid of image-based and geometric rendering. Experiments show that, by fully exploiting spatial coherence and image simplification techniques, the algorithm performs well and is suitable for fast rendering of scenes of arbitrary complexity.

2.
As virtual reality applications become increasingly widespread, scenes keep growing in size, and real-time display of the virtual scene has become the key technology determining the success of the whole VR system. A fast hidden-surface-removal rendering algorithm combining hierarchical occlusion maps with image caching is proposed. The algorithm not only strikes a good trade-off between rendering speed and image quality, but is also suitable for fast rendering of scenes of arbitrary complexity. Potentially visible near and middle-range geometry is rendered geometrically, while potentially visible distant geometry is rendered with an accelerated hybrid of image-based and geometric rendering. Experiments show that the algorithm performs well in practice.
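As an illustration of the near/middle/far split described above, the following is a minimal, hypothetical Python sketch; the single-level coverage map, the distance threshold MID and all names are assumptions for illustration, and the paper's hierarchical occlusion map and image cache are more elaborate:

```python
import numpy as np
from dataclasses import dataclass

MID = 500.0          # assumed distance threshold separating mid-range and far geometry
MAP_RES = 64         # resolution of the (single-level) coverage map used here

@dataclass
class Object3D:
    distance: float                 # distance from the viewpoint
    screen_rect: tuple              # (x0, y0, x1, y1), normalized screen coordinates

def possibly_visible(occ_map, rect, threshold=0.99):
    """Conservative test: an object is culled only if its screen rectangle
    is already fully covered by previously drawn occluders."""
    x0, y0, x1, y1 = (int(c * (MAP_RES - 1)) for c in rect)
    return occ_map[y0:y1 + 1, x0:x1 + 1].min() < threshold

def classify_and_draw(objects):
    """Front-to-back pass: cull occluded objects, then split the rest into a
    geometric pass (near/mid) and an image-based impostor pass (far)."""
    occ_map = np.zeros((MAP_RES, MAP_RES))
    geometry_pass, impostor_pass = [], []
    for obj in sorted(objects, key=lambda o: o.distance):
        if not possibly_visible(occ_map, obj.screen_rect):
            continue                                    # conservatively culled
        (geometry_pass if obj.distance < MID else impostor_pass).append(obj)
        x0, y0, x1, y1 = (int(c * (MAP_RES - 1)) for c in obj.screen_rect)
        occ_map[y0:y1 + 1, x0:x1 + 1] = 1.0             # treat the drawn object as an occluder
    return geometry_pass, impostor_pass
```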

3.
Within a research project on key techniques for rendering large complex scenes, multi-level rendering methods for real-time accelerated rendering of large scenes — complex-scene rendering and scheduling techniques such as slicing, MMI, hierarchical image caching, and basic billboards — are integrated into non-photorealistic rendering to improve the system's ability to render complex scenes in real time. Pen-and-ink rendering of large scenes such as forests is implemented: using image-space generation methods from non-photorealistic rendering, strokes of various sizes and shapes are used to achieve real-time interactive rendering of artistically stylized forest scenes. A multi-level pen-and-ink stylized rendering algorithm for plants is proposed.

4.
顾耀林  朱丽华  王华 《计算机应用》2007,27(7):1626-1628
Rendering algorithms for virtual image scenes are studied, and a spherical-harmonic lighting representation algorithm based on a low-dimensional linear subspace is proposed. A subspace is obtained from the object model and its low-dimensional linear part is extracted; Lambertian reflectance is treated as a convolution with spherical harmonic functions and thereby approximated, and diffuse shadowed inter-reflection transfer is added. Experimental results show that the algorithm approximates well, renders with high quality, and has practical value.
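For the "Lambertian reflectance as a convolution" step, a minimal sketch is shown below; the per-band factors are the standard analytic clamped-cosine coefficients from the spherical-harmonics lighting literature, not values taken from this paper, and the low-dimensional subspace extraction is omitted:

```python
import math

# Standard clamped-cosine (Lambertian) convolution factors per SH band l = 0..4.
A = [math.pi, 2.0 * math.pi / 3.0, math.pi / 4.0, 0.0, -math.pi / 24.0]

def lambertian_convolve(light_sh):
    """Approximate Lambertian shading by scaling the spherical-harmonic lighting
    coefficients band by band; light_sh is a list of per-band coefficient lists,
    e.g. [[L00], [L1m1, L10, L11], ...]."""
    out = []
    for l, band in enumerate(light_sh):
        a = A[l] if l < len(A) else 0.0    # higher bands contribute almost nothing
        out.append([a * c for c in band])
    return out

# Example: a lighting environment truncated to the first three SH bands.
print(lambertian_convolve([[1.0], [0.2, 0.1, 0.0], [0.05, 0.0, 0.0, 0.0, 0.02]]))
```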

5.
周杨  徐青  肖勇辉 《计算机工程》2007,33(12):214-216
Collision detection is an important means of enhancing the realism and immersion of virtual environments. Existing collision detection algorithms are computationally expensive and consume substantial system resources when rendering large, complex 3D scenes. To address these drawbacks, a fast collision detection algorithm based on Z-buffer depth values is proposed. The algorithm makes full use of the transformation matrices and depth information already available during scene rendering to achieve fast collision detection and response while the user roams the virtual scene in first person. Experiments show that the algorithm is computationally simple, fast, and independent of scene complexity.
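A minimal sketch of the first-person depth test implied by this abstract is given below; the window size, the clearance, and the linearization formula for a standard perspective projection are assumptions for illustration, and the depth buffer would come from the already rendered frame (e.g. via a depth read-back):

```python
import numpy as np

def depth_to_distance(z, near=0.1, far=1000.0):
    """Convert a normalized Z-buffer value in [0, 1] (standard perspective
    projection, default depth range) back to eye-space distance."""
    return (near * far) / (far - z * (far - near))

def can_move_forward(depth_buffer, step=0.5, clearance=0.8, near=0.1, far=1000.0):
    """Inspect a small window around the screen centre: if the nearest geometry
    along the view direction is closer than step + clearance, report a collision."""
    h, w = depth_buffer.shape
    window = depth_buffer[h // 2 - 2:h // 2 + 3, w // 2 - 2:w // 2 + 3]
    nearest = depth_to_distance(window.min(), near, far)
    return nearest > step + clearance

# Stand-in depth buffer (everything at the far plane); a real one is read back after rendering.
depth = np.ones((480, 640), dtype=np.float32)
print(can_move_forward(depth))      # True: nothing in the way
```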

6.
The shadow map is a classic algorithm for real-time shadow rendering. Because it operates in image space, when a shadow map of limited resolution is projected onto a large scene, undersampling produces jagged aliasing artifacts. A real-time anti-aliased shadow map algorithm is proposed: it first determines the extent of the scene visible from the current viewpoint, then renders the shadow map only for that region and projects it onto the scene to generate real-time shadows. Compared with the classic shadow map algorithm, this method avoids unnecessary shadow rendering, improves shadow map utilization, and anti-aliases effectively. Moreover, it needs to render only one or two shadow maps, so it runs in real time and satisfies the needs of real-time shadow rendering in large scenes of over a million polygons.
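The "render the shadow map only for the visible range" idea can be sketched as fitting a tight orthographic light projection around the visible region. The sketch below is a simplified, assumption-laden illustration, not the paper's exact procedure; in practice the visible points would be the view-frustum corners clipped against the scene bounds:

```python
import numpy as np

def fit_light_projection(visible_points_world, light_view):
    """Fit an orthographic projection, in light space, tightly around the
    world-space points of the currently visible region, so shadow-map texels
    are spent only where the camera can actually see."""
    pts = np.asarray(visible_points_world, dtype=float)       # shape (N, 3)
    pts_h = np.c_[pts, np.ones(len(pts))]                     # homogeneous coordinates
    in_light = (light_view @ pts_h.T).T[:, :3]                # world -> light space
    lo, hi = in_light.min(axis=0), in_light.max(axis=0)
    scale = 2.0 / (hi - lo)                                   # map [lo, hi] -> [-1, 1]
    proj = np.diag([scale[0], scale[1], scale[2], 1.0])
    proj[:3, 3] = -(hi + lo) / (hi - lo)
    return proj

# Example: eight corners of a visible box, with the light frame set to identity.
corners = [(x, y, z) for x in (0, 10) for y in (0, 10) for z in (0, 10)]
print(fit_light_projection(corners, np.eye(4)))
```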

7.
A new out-of-core scheduling algorithm for 3D model data oriented toward interactive operations is proposed; it solves the problem that out-of-core 3D model data are difficult to edit interactively with operations such as insertion, deletion, and translation. A two-level BSP spatial partitioning structure is also proposed: during interactive operations, the BSP tree of each object is kept unchanged while the scene-level BSP rendering acceleration structure is updated adaptively, so that interactive editing does not reduce the efficiency of the scene's spatial-partitioning acceleration structure.
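A structural sketch of the two-level idea is shown below, with hypothetical names and the adaptive scene-level update left as a placeholder (the paper's acceleration structure is more involved): each object keeps its prebuilt BSP tree untouched, and editing operations only change the object's placement at the scene level.

```python
class TwoLevelScene:
    """Objects carry their own, never-modified BSP trees; interactive edits
    (insert / delete / translate) only touch the scene-level bookkeeping."""

    def __init__(self):
        self.objects = {}            # object id -> (bounding box, object BSP tree)

    def insert(self, oid, bbox, object_bsp):
        self.objects[oid] = (bbox, object_bsp)
        self._update_scene_structure()

    def translate(self, oid, offset):
        (x0, y0, z0, x1, y1, z1), tree = self.objects[oid]
        dx, dy, dz = offset
        # Only the placement changes; the per-object BSP tree is reused as-is.
        self.objects[oid] = ((x0 + dx, y0 + dy, z0 + dz,
                              x1 + dx, y1 + dy, z1 + dz), tree)
        self._update_scene_structure()

    def remove(self, oid):
        del self.objects[oid]
        self._update_scene_structure()

    def _update_scene_structure(self):
        # Placeholder: a real system would adaptively re-insert only the changed
        # object into the scene-level BSP instead of rebuilding anything here.
        self.scene_index = sorted(self.objects)
```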

8.
Research on visualization techniques for large-area terrain (cited 28 times, 0 self-citations)
Real-time rendering of terrain scenes has attracted increasingly broad attention in recent years. A series of accelerated scene rendering algorithms has been proposed; although they achieve some success in particular applications, all have limitations and cannot yet meet the requirement of real-time, high-speed rendering of large-area terrain environments. The closely related techniques mainly involve multi-resolution terrain representation, paged management of massive terrain and texture data, LOD control of terrain and texture data, and fast access and updating of terrain and texture data. To render terrain scenes in real time, and based on research and experiments on large-area terrain data management and real-time rendering, the method for constructing view-dependent dynamic multi-resolution models is improved, organically combining multi-resolution terrain representation with view dependence, and an efficient scene data access method is proposed. On this basis, a 3D terrain visualization system is implemented that integrates adaptive triangulation, paged management of terrain scene data, and dynamic updating. Experimental results show that the algorithm renders terrain scenes in real time with good quality.
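The view-dependent LOD control mentioned above can be illustrated with a simple quadtree-style refinement test that selects a tile when its geometric error projected to the screen falls below a tolerance; the sketch below uses hypothetical names and thresholds and omits paging and dynamic updating:

```python
def select_terrain_tiles(tile, camera_pos, error_limit=2.0):
    """Recursively select tiles whose projected geometric error is small enough;
    otherwise descend into the children (finer resolution).
    tile = {"center": (x, y, z), "geometric_error": float, "children": [...]}."""
    cx, cy, cz = tile["center"]
    px, py, pz = camera_pos
    distance = max(1e-6, ((cx - px) ** 2 + (cy - py) ** 2 + (cz - pz) ** 2) ** 0.5)
    projected_error = tile["geometric_error"] / distance    # crude screen-space error
    if projected_error <= error_limit or not tile["children"]:
        return [tile]                                        # coarse enough: draw this tile
    selected = []
    for child in tile["children"]:
        selected += select_terrain_tiles(child, camera_pos, error_limit)
    return selected

# Example: a coarse root tile with one finer child.
root = {"center": (0, 0, 0), "geometric_error": 10.0,
        "children": [{"center": (0, 0, 0), "geometric_error": 1.0, "children": []}]}
print(len(select_terrain_tiles(root, camera_pos=(0, 0, 3))))   # 1 tile selected
```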

9.
梁晓辉  任威  于卓  梁爱民 《软件学报》2009,20(6):1685-1693
Efficient visibility culling of complex dynamic scenes is an important problem in real-time rendering research. This work addresses that problem and improves on coherent occlusion culling algorithms. To deal with the redundant and unnecessary occlusion queries in coherent hierarchical occlusion culling, a probabilistic computation model is given. By comparing the expected time cost of an occlusion query with the expected rendering cost, the query strategy of coherent occlusion culling is improved, further shrinking the query set and making occlusion queries more reasonable. Experimental results show that the algorithm culls complex dynamic scenes with high depth complexity and large polygon counts efficiently and satisfies the requirements of real-time rendering.
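The cost comparison described here can be reduced to a one-line expected-value test; the sketch below is a simplified illustration with assumed timings, not the paper's full probabilistic model:

```python
def worth_querying(p_occluded, t_query, t_render):
    """Skipping the query always costs t_render. Issuing it costs t_query, plus
    t_render if the node turns out to be visible (probability 1 - p_occluded).
    Query only when the expected cost is lower, i.e. when p_occluded > t_query / t_render."""
    return t_query + (1.0 - p_occluded) * t_render < t_render

# Examples with assumed timings: a likely-occluded, expensive node is worth querying.
print(worth_querying(p_occluded=0.6, t_query=0.2, t_render=1.0))   # True
print(worth_querying(p_occluded=0.1, t_query=0.2, t_render=1.0))   # False
```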

10.
Application of the GPU to shadow rendering of complex scenes (cited 4 times, 0 self-citations)
By effectively exploiting the computing power and programmability of the graphics processing unit (GPU), a large amount of computation is offloaded from the CPU. Vertex and fragment programs perform the shadow computation on the GPU, accelerating shadow rendering of complex scenes. An image-space shadow algorithm is chosen for GPU-accelerated rendering. The rendering process is implemented with the Cg shading language and OpenGL; it meets the needs of general complex 3D scene applications and achieves satisfactory real-time rendering performance.

11.
Abstract This paper describes an approach to the design of interactive multimedia materials being developed in a European Community project. The developmental process is seen as a dialogue between technologists and teachers. This dialogue is often problematic because of the differences in training, experience and culture between them. Conditions needed for fruitful dialogue are described and the generic model for learning design used in the project is explained.

12.
This paper deals with the question: what are the criteria that an adequate theory of computation has to meet? (1) Smith's answer: it has to meet the empirical criterion (i.e. doing justice to computational practice), the conceptual criterion (i.e. explaining all the underlying concepts) and the cognitive criterion (i.e. providing solid grounds for computationalism). (2) Piccinini's answer: it has to meet the objectivity criterion (i.e. identifying computation as a matter of fact), the explanation criterion (i.e. explaining the computer's behaviour), the "right things compute" criterion, the miscomputation criterion (i.e. accounting for malfunctions), the taxonomy criterion (i.e. distinguishing between different classes of computers) and the empirical criterion. (3) Von Neumann's answer: it has to meet the precision and reliability of computers criterion, the single error criterion (i.e. addressing the impact of errors) and the criterion of distinguishing between analogue and digital computers. (4) The "everything computes" answer: it has to meet the implementation theory criterion by properly explaining the notion of implementation.

13.
Query-focused multi-document summarization faces two difficulties. First, ensuring that the summary is closely related to the query tends to make its content repetitive and insufficiently comprehensive. Second, the original query rarely describes the query intent completely, so query expansion is needed, yet existing query expansion methods mostly depend on external semantic resources. To address these problems, this paper proposes a query-focused multi-document summarization method that uses topic analysis to identify the sub-topics of the current topic and selects summary sentences by jointly considering the relevance of a sentence's sub-topic to the query and the importance of that sub-topic; query expansion is performed using word co-occurrence information across sub-topics, without using any external knowledge. Experimental results on the DUC 2006 evaluation corpus show that, compared with the baseline system, the proposed system achieves higher ROUGE scores, and the sub-topic-based query expansion further improves summary quality.
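The two ingredients, sub-topic-weighted sentence scoring and co-occurrence-based query expansion, can be sketched as follows; this is a simplified illustration with hypothetical scoring functions, not the paper's exact model:

```python
from collections import Counter

def expand_query(query_terms, subtopic_term_sets, top_k=5):
    """Expand the query with terms that co-occur with query terms inside
    sub-topics, without any external semantic resources."""
    scores = Counter()
    for terms in subtopic_term_sets:                 # one term set per sub-topic
        if any(q in terms for q in query_terms):
            for t in terms:
                if t not in query_terms:
                    scores[t] += 1                   # simple co-occurrence count
    return list(query_terms) + [t for t, _ in scores.most_common(top_k)]

def score_sentence(sentence_terms, query_terms, subtopic_terms, subtopic_weight):
    """Combine the relevance of the sentence's sub-topic to the query with the
    sub-topic's own importance weight."""
    relevance = len(set(query_terms) & set(subtopic_terms)) / max(1, len(query_terms))
    coverage = len(set(sentence_terms) & set(subtopic_terms)) / max(1, len(sentence_terms))
    return subtopic_weight * relevance * coverage

# Example with toy sub-topics.
subtopics = [{"flood", "rescue", "damage"}, {"flood", "insurance", "claims"}]
print(expand_query(["flood"], subtopics, top_k=2))
```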

14.
Although there are many arguments that logic is an appropriate tool for artificial intelligence, there has been a perceived problem with the monotonicity of classical logic. This paper elaborates on the idea that reasoning should be viewed as theory formation where logic tells us the consequences of our assumptions. The two activities of predicting what is expected to be true and explaining observations are considered in a simple theory formation framework. Properties of each activity are discussed, along with a number of proposals as to what should be predicted or accepted as reasonable explanations. An architecture is proposed to combine explanation and prediction into one coherent framework. Algorithms used to implement the system as well as examples from a running implementation are given.

15.
A new method for defuzzifying the output parameters of the fuzzy rule base of a Mamdani fuzzy controller is presented in the paper. The distinctive feature of the method is the use of a universal equation for computing the area of the geometric shapes involved: during fuzzy inference the linguistic terms change from a triangular to a trapezoidal shape, which is why a universal equation is needed. The method is limited to triangular and trapezoidal membership functions; Gaussian functions can also be handled by modifying the proposed method. Traditional defuzzification models such as Middle of Maxima (MoM), First of Maxima (FoM), Last of Maxima (LoM), First of Support (FoS), Last of Support (LoS), Middle of Support (MoS), Center of Sums (CoS) and Model of Height (MoH) suffer from a number of systematic errors: the curse of dimensionality, violation of the partition-of-unity condition and absence of additivity. These methods can be viewed as variants of Center of Gravity (CoG), which has the same errors. The errors reduce the accuracy of fuzzy systems, because the root mean square error grows during training. One cause of the errors is that some of the activated fuzzy rules are excluded from the fuzzy inference. Accuracy can also be increased through continuity properties: the proposed method guarantees continuity, since the intersection point of adjacent linguistic terms equals 0.5 when a parametrized membership function is used. The causes of the errors and a way to eliminate them are reviewed in the paper, and the proposed method avoids the errors inherent to traditional and non-traditional defuzzification models. A comparative analysis of the proposed method with traditional and non-traditional models shows its effectiveness.
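To illustrate the "universal area equation" point, the sketch below clips a triangular term at its activation level (which turns it into a trapezoid), computes the area with the common (b1 + b2) * h / 2 formula, and defuzzifies an aggregated output by centre of gravity; this is a minimal, assumption-based illustration, not the paper's exact method:

```python
import numpy as np

def clipped_triangle(x, a, m, b, alpha):
    """Triangular membership function (a, m, b) clipped at activation level alpha;
    the clipped shape is a trapezoid (alpha = 1 recovers the triangle)."""
    mu = np.clip(np.minimum((x - a) / (m - a + 1e-12),
                            (b - x) / (b - m + 1e-12)), 0.0, None)
    return np.minimum(mu, alpha)

def clipped_area(a, b, alpha):
    """'Universal' trapezoid formula (b1 + b2) * h / 2: bottom width b - a,
    top width (b - a) * (1 - alpha), height alpha; valid for any clip level."""
    top = (b - a) * (1.0 - alpha)
    return (b - a + top) * alpha / 2.0

# Aggregate two activated terms and defuzzify by centre of gravity.
x = np.linspace(0.0, 10.0, 2001)
agg = np.maximum(clipped_triangle(x, 1, 3, 5, 0.7),
                 clipped_triangle(x, 4, 6, 8, 0.4))
print(round(clipped_area(1, 5, 0.7), 3))                    # analytic area of the first term
print(round(np.trapz(agg * x, x) / np.trapz(agg, x), 3))     # crisp output value
```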

16.
This paper provides the author's personal views and perspectives on software process improvement. Starting with his first work on technology assessment in IBM over 20 years ago, Watts Humphrey describes the process improvement work he has been directly involved in. This includes the development of the early process assessment methods, the original design of the CMM, and the introduction of the Personal Software Process (PSP)SM and Team Software Process (TSP)SM. In addition to describing the original motivation for this work, the author also reviews many of the problems he and his associates encountered and why they solved them the way they did. He also comments on the outstanding issues and likely directions for future work. Finally, this work has built on the experiences and contributions of many people. Mr. Humphrey only describes work that he was personally involved in and he names many of the key contributors. However, so many people have been involved in this work that a full list of the important participants would be impractical.

17.
Impact of cognitive theory on the practice of courseware authoring (cited 1 time, 0 self-citations)
Abstract The cognitive revolution has yielded unprecedented progress in our understanding of higher cognitive processes such as remembering and learning. It is natural to expect this scientific breakthrough to inform and guide the design of instruction in general and computer-based instruction in particular. In this paper I survey the different ways in which recent advances in cognitive theory might influence the design of computer-based instruction and spell out their implications for the design of authoring tools and tutoring system shells. The discussion will be divided into four main sections. The first two sections deal with the design and the delivery of instruction. The third section analyzes the consequences for authoring systems. In the last section I propose a different way of thinking about this topic.

18.
Possibilistic distributions admit both measures of uncertainty and (metric) distances defining their information closeness. For general pairs of distributions these measures and metrics were first introduced in the form of integral expressions. Particularly important are pairs of distributions p and q which have consonant ordering: for any two events x and y in the domain of discourse, p(x) ≤ p(y) if and only if q(x) ≤ q(y). We call such distributions confluent and study their information distances.

This paper presents a discrete sum form of uncertainty measures of arbitrary distributions, and uses it to obtain similar representations of metrics on the space of confluent distributions. Using these representations, a number of properties such as additivity, monotonicity and a form of distributivity are proven. Finally, a branching property is introduced, which will serve (in a separate paper) to characterize possibilistic information distances axiomatically.


19.
CCD camera modeling and simulation (cited 2 times, 0 self-citations)
In this paper we propose a model of an acquisition chain made up of a CCD camera, a lens and a frame grabber card. The purpose of this modeling is to simulate the acquisition process in order to obtain images of virtual objects. The response time has to be short enough to permit interactive simulation. All the stages are modeled. In the first phase, we present a geometric model which supplies a point-to-point transformation that provides, for a space point in the camera field, the corresponding point on the plane of the CCD sensor. The second phase consists of modeling the discrete space, which implies passing from the continuous known object view to a discrete image, in accordance with the different origins of contrast loss. In the third phase, the video signal is reconstituted in order to be sampled by the frame grabber card. The practical results are close to reality when compared with real image processing. This tool makes it possible to obtain a short-computation-time simulation of a vision sensor. This enables interactivity either with the user or with software for the design and simulation of an industrial workshop equipped with a vision system. It makes testing possible and validates the choice of sensor placement and of image processing and analysis. Thanks to this simulation tool, we can control perfectly the position of the object image placed under the camera and in this way characterise the performance of subpixel-accuracy methods for object positioning.
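The first (geometric) stage can be illustrated with a plain pinhole projection from a world point to a pixel on the CCD; the sketch below uses an assumed focal length and pixel pitch and ignores lens distortion and the later discretization and video-signal stages:

```python
import numpy as np

def project_point(p_world, R, t, focal_mm, pixel_pitch_mm, center_px):
    """Rigidly transform a world point into the camera frame, project it onto the
    sensor plane (pinhole model), and convert millimetres on the CCD to pixel indices."""
    p_cam = R @ np.asarray(p_world, dtype=float) + t       # world -> camera frame
    if p_cam[2] <= 0:
        return None                                        # point is behind the camera
    x_mm = focal_mm * p_cam[0] / p_cam[2]                  # image-plane coordinates (mm)
    y_mm = focal_mm * p_cam[1] / p_cam[2]
    u = center_px[0] + x_mm / pixel_pitch_mm               # to pixel indices
    v = center_px[1] + y_mm / pixel_pitch_mm
    return u, v

# Example: a point 1 m in front of an axis-aligned camera (8 mm lens, 4.7 µm pixels).
print(project_point([0.05, 0.0, 1.0], np.eye(3), np.zeros(3),
                    focal_mm=8.0, pixel_pitch_mm=0.0047, center_px=(320, 240)))
```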

20.
This paper is concerned with the problem of gain-scheduled H∞ filter design for a class of parameter-varying discrete-time systems. A new LMI-based design approach is proposed by using parameter-dependent Lyapunov functions. Recommended by Editorial Board member Huanshui Zhang under the direction of Editor Jae Weon Choi. This work was supported in part by the National Natural Science Foundation of P. R. China under Grant 60874058, by the 973 program No. 2009CB320600, by the Natural Science Foundation of Zhejiang Province under Grant Y107056, and in part by a Research Grant from the Australian Research Council. Shaosheng Zhou received the B.S. degree in Applied Mathematics and the M.Sc. and Ph.D. degrees in Electrical Engineering, in January 1992, July 1996 and October 2001, from Qufu Normal University and Southeast University. His research interests include nonlinear control and stochastic systems. Baoyong Zhang received the B.S. and M.Sc. degrees in Applied Mathematics, in July 2003 and July 2006, both from Qufu Normal University. His research interests include nonlinear systems, robust control and filtering. Wei Xing Zheng received the B.Sc. degree in Applied Mathematics and the M.Sc. and Ph.D. degrees in Electrical Engineering, in January 1982, July 1984 and February 1989, respectively, all from Southeast University, Nanjing, China. His research interests include signal processing and system identification.
