Similar Articles
20 similar articles found.
1.
Today, data storage capabilities as well as computational power are rapidly increasing. On the one hand, this improvement makes it possible to generate and store a great amount of temporal (time-oriented) data for future query, analysis, and discovery of new knowledge. On the other hand, systems and experts are encountering new problems in processing this increased amount of data. The rapid growth in stored time-oriented data necessitates the development of new methods for handling, processing, and interpreting large amounts of temporal data. One approach is to use an automatic summarization process based on predefined knowledge, such as the Knowledge-Based Temporal-Abstraction (KBTA) method. This method enables one to summarize and reduce the amount of raw data by creating higher-level interpretations based on predefined domain knowledge. Unfortunately, the task of temporal abstraction is inherently computationally expensive, especially when an enormous volume of multivariate data has to be handled and when complex patterns need to be considered. In this research, we address the scalability problem of a temporal-abstraction task that involves processing significantly large amounts of raw data. We propose a new computational framework, the Distributed KBTA (DKBTA), which efficiently distributes the abstraction process among several parallel computational nodes in order to achieve an acceptable computation time. The DKBTA framework distributes the temporal-abstraction process along one or more computational axes, each of which enables parallelization of one or more temporal-abstraction tasks into which the main temporal-abstraction task is decomposed, such as by different subject groups, concept types, or abstraction types. We have implemented the DKBTA framework and evaluated it in a preliminary fashion in the medical and information security domains, with encouraging results. In our small-scale evaluation, only distribution along the subjects axis, and sometimes along the concept-type axis, seemed to consistently enhance performance, and only for computations involving individual subjects and not functions of sets of subjects; this observation might depend, however, on the number of processing units. Additionally, since the communication between the processing units was based on the TCP protocol, we could not observe any speedup even when using two processing units on the same machine. In further evaluations we plan to use a shared-memory architecture to exchange data between processing units.
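The subjects-axis distribution lends itself to a simple data-parallel implementation. Below is a minimal sketch of that idea, not the authors' implementation: each worker in a process pool derives coarse state abstractions for one subject's time series. The function name, the threshold rule, and the HIGH/NORMAL states are illustrative assumptions standing in for the KBTA domain knowledge.

```python
# A minimal sketch (not the DKBTA implementation) of distributing a
# temporal-abstraction task along the "subjects" axis with a process pool.
from multiprocessing import Pool

def abstract_subject(item):
    """Summarize one subject's raw time series into state intervals."""
    subject_id, samples = item            # samples: list of (timestamp, value)
    intervals, state, start = [], None, None
    for t, v in samples:
        s = "HIGH" if v > 100 else "NORMAL"   # stand-in for domain knowledge
        if s != state:
            if state is not None:
                intervals.append((subject_id, state, start, t))
            state, start = s, t
    intervals.append((subject_id, state, start, samples[-1][0]))
    return intervals

if __name__ == "__main__":
    raw = {
        "s1": [(0, 90), (1, 120), (2, 130), (3, 80)],
        "s2": [(0, 110), (1, 95), (2, 97), (3, 140)],
    }
    with Pool(processes=2) as pool:       # one worker per computational node
        for intervals in pool.map(abstract_subject, raw.items()):
            print(intervals)
```

Because each subject is processed independently, this kind of distribution avoids cross-node communication entirely, which matches the abstract's observation that per-subject computations parallelize well while functions over sets of subjects do not.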

2.
The paper presents a knowledge-based analysis approach that generates first-order predicate logic annotations of loops. A classification of loops according to their complexity levels is presented. Based on this taxonomy, variations on the basic analysis approach that best fit each of the different classes are described. In general, mechanical annotation of loops is performed by first decomposing them using data-flow analysis. This decomposition encapsulates closely related statements in events that can be analyzed individually. Specifications of the resulting loop events are then obtained by utilizing patterns, called plans, stored in a knowledge base. Finally, a consistent and rigorous functional abstraction of the whole loop is synthesized from the specifications of its individual events. To test the analysis techniques and to assess their effectiveness, a case study was performed on an existing program of reasonable size. Results concerning the analyzed loops and the plans designed for them are given.
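A minimal sketch of the plan-matching step, under heavily simplified assumptions: loop events (here already extracted; the data-flow decomposition is not shown) are looked up in a hypothetical knowledge base of plans, and the matched specifications are conjoined into an annotation. The event encoding and the PLANS table are invented for illustration.

```python
# Hypothetical plan knowledge base: (event kind, operator) -> specification.
PLANS = {
    ("acc", "+"): "post: {var} = sum(a[0..n-1])",
    ("acc", "*"): "post: {var} = product(a[0..n-1])",
    ("cmp", "max"): "post: {var} = max(a[0..n-1])",
}

def annotate(events):
    """Map each loop event to its plan's specification, then conjoin them."""
    specs = []
    for var, kind, op in events:
        plan = PLANS.get((kind, op))
        specs.append(plan.format(var=var) if plan else f"unknown event on {var}")
    return " AND ".join(specs)

# Events extracted (by data-flow analysis, not shown) from:
#   s = 0; m = a[0]
#   for i in range(n): s += a[i]; m = max(m, a[i])
print(annotate([("s", "acc", "+"), ("m", "cmp", "max")]))
```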

3.
Context: Face-to-face (F2F) interaction is a strong means to foster social relationships and effective knowledge sharing within a team. However, communication in Global Software Development (GSD) teams is usually restricted to computer-mediated conversation, which is perceived to be less effective and interpersonal. Temporary collocation of dispersed members of a software development team is a well-known practice in GSD. Despite broad realization of the benefits of visits, there is a lack of empirical evidence exploring how temporary F2F interactions are organized in practice and how they can impact knowledge sharing between sites. Objective: This study aimed at empirically investigating activities that take place during temporary collocation of dispersed members and analyzing the outcomes of the visit for supporting and improving knowledge sharing. Method: We report a longitudinal case study of a GSD team distributed between Denmark and Pakistan. We explored a particular visit organized for a group of offshore team members visiting the onshore site for two weeks. Our findings are based on a systematic and rigorous analysis of the visitors' calendar entries during the studied visit, several observations of a selected set of the team members' activities during the visit, and 13 semi-structured interviews. Results: Looking through the lens of the knowledge-based theory of the firm, we found that the social and professional activities organized during the visit facilitated knowledge sharing between team members from both sites. The findings are expected to contribute to building a common knowledge and understanding about the role and usefulness of site visits for supporting and improving knowledge sharing in GSD teams by establishing and sustaining social and professional ties.

4.
A new Monte Carlo ray-tracing method is proposed. The method orthogonalizes incident radiance using hemispherical harmonic basis functions and defines the bidirectional reflectance distribution function (BRDF) of object surfaces as a Cartesian product over two hemispheres. Radiance on glossy surfaces is sampled and stored in a cache, and values at other points are then interpolated from the cached records. The interpolation uses directional gradients, and a simple method is used to compute the gradient at a point. The method greatly accelerates global illumination computation and has broad application prospects in lighting engineering, high-quality film animation, game production, and virtual reality.

5.
Radiance caching for efficient global illumination computation
In this paper, we present a ray-tracing-based method for accelerated global illumination computation in scenes with low-frequency glossy BRDFs. The method is based on sparse sampling, caching, and interpolating radiance on glossy surfaces. In particular, we extend the irradiance caching scheme proposed by Ward et al. (1988) to cache and interpolate directional incoming radiance instead of irradiance. The incoming radiance at a point is represented by a vector of coefficients with respect to a hemispherical or spherical basis. The surfaces suitable for interpolation are selected automatically according to the roughness of their BRDF. We also propose a novel method for computing the translational radiance gradient at a point.
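A minimal sketch of the caching and interpolation scheme described above (and in entries 4 and 12): each cache record stores a coefficient vector over some hemispherical basis, and a query point blends nearby records with a Ward-style weight. The basis projection, gradients, and automatic surface selection are omitted; the constants and class design are illustrative assumptions, not the paper's code.

```python
# A minimal sketch of radiance caching: cached coefficient vectors are
# blended with Ward-style weights; gradients and basis projection omitted.
import numpy as np

class RadianceCache:
    def __init__(self, alpha=0.3):
        self.alpha = alpha        # interpolation error tolerance (assumed)
        self.records = []         # (position, normal, coeffs, harmonic-mean dist)

    def add(self, p, n, coeffs, r_harmonic):
        self.records.append((np.asarray(p, float), np.asarray(n, float),
                             np.asarray(coeffs, float), r_harmonic))

    def interpolate(self, p, n):
        p, n = np.asarray(p, float), np.asarray(n, float)
        num, den = 0.0, 0.0
        for pi, ni, ci, ri in self.records:
            # Ward-style weight: penalize positional and normal deviation.
            err = np.linalg.norm(p - pi) / ri + np.sqrt(max(0.0, 1.0 - float(n @ ni)))
            w = 1.0 / max(err, 1e-9)
            if w > 1.0 / self.alpha:
                num, den = num + w * ci, den + w
        return num / den if den > 0.0 else None   # None: sample rays, then add()

cache = RadianceCache()
cache.add([0.0, 0, 0], [0, 0, 1], coeffs=[1.0, 0.20, 0.10], r_harmonic=1.0)
cache.add([0.1, 0, 0], [0, 0, 1], coeffs=[0.9, 0.25, 0.05], r_harmonic=1.0)
print(cache.interpolate([0.05, 0, 0], [0, 0, 1]))  # blended coefficient vector
```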

6.
Classical computation is essentially local in time, yet some formulations of physics are global in time. Here, I examine these differences and suggest that certain forms of unconventional computation are needed to model physical processes and complex systems. These include certain forms of analogue computing, massively parallel field computing and self-modifying computations.

7.
In a distributed system, many algorithms need repeated computation of a global function. These algorithms generally use a static hierarchy for gathering the necessary data from all processes. As a result, they are unfair to processes at higher levels of the hierarchy, which have to perform more work than processes at lower levels do. In this paper, we present a new revolving hierarchical scheme in which the position of a process in the hierarchy changes with time. This reorganization of the hierarchy is achieved concurrently with its use. It results in algorithms that are not only fair to all processes but also less expensive in terms of messages. The reduction in the number of messages is achieved by reusing messages for more than one computation of the global function. The technique is illustrated for a distributed branch-and-bound problem and for asynchronous computation of fixed points.
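A minimal sketch of the fairness argument: with a static hierarchy (flattened here to a star for brevity) the same coordinator pays for every round, while rotating the coordinator spreads the message-handling cost evenly. The paper's message-reuse optimization is not modeled.

```python
# Compare message load per process under a static vs. a revolving coordinator.
def simulate(rounds, n, rotating):
    msgs = [0] * n                          # messages handled per process
    for r in range(rounds):
        root = r % n if rotating else 0     # hierarchy position changes with time
        for p in range(n):
            if p != root:
                msgs[p] += 1                # each process sends its value
                msgs[root] += 1             # the root receives and combines it
    return msgs

print("static   :", simulate(rounds=8, n=4, rotating=False))  # [24, 8, 8, 8]
print("revolving:", simulate(rounds=8, n=4, rotating=True))   # [12, 12, 12, 12]
```

With 8 rounds and 4 processes, the static scheme loads process 0 with 24 messages against 8 for the others, while the revolving scheme balances every process at 12.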

8.
The power industry around the world has been facing several changes since deregulation, with constant pressure to improve the security, reliability, and quality of the power supply. Computational fault analysis and diagnosis of power networks have been active research topics, with several theories and algorithms proposed. This paper proposes a distributed diagnostic algorithm for fault analysis in power networks. The Distributed Architecture for Power Network Fault Analysis (DAPFA) is an intelligent, model-based diagnostic algorithm that incorporates a hierarchical power network representation and model. The architecture is based on the industry's substation-automation implementation standards. The structural and functional model is a multi-level representation, with each level depicting a more complex grouping of components than its predecessor in the hierarchy. The distributed functional representation contains the behavioral knowledge related to the components of that level in the structural model. The diagnostic algorithm of DAPFA is designed to perform fault analysis at pre-diagnostic and diagnostic levels. The pre-diagnostic phase provides real-time analysis, while the diagnostic phase provides the final diagnostic analysis. The diagnostic algorithm incorporates knowledge-based and model-based reasoning mechanisms, with one of the model levels represented as a network of neural nets. The relevant algorithms and techniques are discussed. The resulting system has been implemented on a New Zealand subsystem and the results are analyzed.
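A minimal sketch of the multi-level, model-based idea: diagnosis starts at the coarsest structural level and descends only into groups whose behavioral check fails. The component tree, the stubbed check, and the fault set are hypothetical and merely stand in for DAPFA's substation models and neural-net level.

```python
# Hypothetical hierarchical structural model of a small power network.
TREE = {
    "network": ["substation_A", "substation_B"],
    "substation_A": ["breaker_A1", "transformer_A2"],
    "substation_B": ["breaker_B1"],
}
FAULTY = {"transformer_A2"}         # ground truth used by the stubbed check

def behaves_ok(unit):
    """Stub for a level's behavioral model (rules, or a neural net in DAPFA)."""
    if unit in TREE:
        return all(behaves_ok(c) for c in TREE[unit])
    return unit not in FAULTY

def diagnose(unit):
    """Descend the hierarchy only where the behavioral check fails."""
    if behaves_ok(unit):
        return []
    children = TREE.get(unit)
    if not children:
        return [unit]               # a faulty leaf component
    return [f for c in children for f in diagnose(c)]

print(diagnose("network"))          # -> ['transformer_A2']
```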

9.
A variety of belief maintenance schemes for image analysis have been suggested and used to date. In the recent past, several researchers have suggested the use of the Dempster-Shafer theory of evidence for representation of belief. This approach appears to be particularly suited for knowledge-based image analysis systems because of its intuitively convincing ways of representing beliefs, support, plausibility, ignorance, dubiety, and a host of other measures that can be used for the purpose of decision making. It also provides a very attractive technique to combine these measures obtained from disparate knowledge sources. In this article, we show how the Dempster-Shafer theoretic concepts of refinement and coarsening can be used to aggregate and propagate evidence in a multi-resolution image analysis system based on a hierarchical knowledge base.
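A minimal sketch of combining evidence from disparate sources on a small frame of discernment, using Dempster's rule; the refinement/coarsening machinery for moving between resolution levels is not reproduced, and the masses and hypotheses are invented for illustration.

```python
# Dempster's rule of combination for two basic belief assignments.
def combine(m1, m2):
    """Combine two mass functions (frozenset -> mass), renormalizing conflict."""
    joint, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                joint[inter] = joint.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb          # mass assigned to the empty set
    k = 1.0 - conflict
    return {s: w / k for s, w in joint.items()}

ROAD, BUILDING = frozenset({"road"}), frozenset({"building"})
THETA = ROAD | BUILDING                      # full frame: total ignorance
edge_evidence   = {ROAD: 0.6, THETA: 0.4}    # e.g., from an edge detector
region_evidence = {ROAD: 0.3, BUILDING: 0.5, THETA: 0.2}
print(combine(edge_evidence, region_evidence))
# -> road: 0.6, building: ~0.286, theta: ~0.114
```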

10.
In this paper the notion of conceptual cohesiveness is made precise and used to group objects semantically, based on a knowledge structure called a 'cohesion forest'. A set of axioms that should be satisfied to make the generated clusters meaningful is proposed.

11.
Two efficient algorithms for computing the incomplete beta function are considered. They have comparable performance over a wide range of parameter and variable values.
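The abstract does not name the two algorithms, but one standard way to compute the regularized incomplete beta function, sketched below, is a continued-fraction expansion evaluated with the modified Lentz method (as popularized by Numerical Recipes), switched at the symmetry point for fast convergence.

```python
import math

def betacf(a, b, x, max_iter=200, eps=3.0e-12, tiny=1.0e-30):
    """Continued fraction for the incomplete beta (modified Lentz's method)."""
    qab, qap, qam = a + b, a + 1.0, a - 1.0
    c, d = 1.0, 1.0 - qab * x / qap
    if abs(d) < tiny:
        d = tiny
    d = 1.0 / d
    h = d
    for m in range(1, max_iter + 1):
        m2 = 2 * m
        # One even step and one odd step of the continued fraction.
        for aa in (m * (b - m) * x / ((qam + m2) * (a + m2)),
                   -(a + m) * (qab + m) * x / ((a + m2) * (qap + m2))):
            d = 1.0 + aa * d
            if abs(d) < tiny:
                d = tiny
            c = 1.0 + aa / c
            if abs(c) < tiny:
                c = tiny
            d = 1.0 / d
            delta = d * c
            h *= delta
        if abs(delta - 1.0) < eps:
            return h
    raise RuntimeError("continued fraction failed to converge")

def betainc(a, b, x):
    """Regularized incomplete beta function I_x(a, b)."""
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    ln_front = (math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
                + a * math.log(x) + b * math.log1p(-x))
    front = math.exp(ln_front)
    if x < (a + 1.0) / (a + b + 2.0):        # pick the fast-converging branch
        return front * betacf(a, b, x) / a
    return 1.0 - front * betacf(b, a, 1.0 - x) / b

print(betainc(2.0, 3.0, 0.5))                # -> 0.6875 (exact for these values)
```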

12.
A fast global illumination method for glossy surfaces is proposed, based on Monte Carlo integration and hemispherical harmonics. The method samples radiance on glossy surfaces, stores the samples in a cache, and interpolates values at other points from the cached records. To speed up computation, the incident radiance at object surfaces is projected onto hemispherical harmonics, and the surface BRDF is defined as a Cartesian product over two hemispheres. The interpolation uses directional gradients, and a simple method is used to compute the gradient at a point. The method greatly accelerates global illumination computation and has broad application prospects in lighting engineering, high-quality animation, and virtual reality.

13.
We present an algorithm for computing the global penetration depth between an articulated model and an obstacle, or between the distinct links of an articulated model. In so doing, we use a formulation of penetration depth derived in configuration space. We first compute an approximation of the boundary of the obstacle regions using a support vector machine in a learning stage. Then, we employ a nearest-neighbor search to perform a runtime query for penetration depth. The computational complexity of the runtime query depends on the number of support vectors, and its computation time varies from 0.03 to 3 milliseconds in our benchmarks. We can guarantee that the configuration realizing the penetration depth is penetration-free, and the algorithm can handle general articulated models. We tested our algorithm in robot motion planning and grasping simulations using many high-degree-of-freedom (DOF) articulated models. Our algorithm is the first to efficiently compute global penetration depth for high-DOF articulated models.
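A minimal sketch of the two-stage structure, under toy assumptions: an SVM learns an approximate boundary of the in-collision region of a 3-DOF configuration space offline, and a runtime query measures the distance to the nearest collision-free support vector. The ball-shaped obstacle, the sampling, and the Euclidean metric are simplistic stand-ins for the paper's machinery, and scikit-learn here replaces whatever SVM implementation the authors used.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

def in_collision(q):
    """Stub collision checker: configurations inside the unit ball collide."""
    return np.linalg.norm(q) < 1.0

# Learning stage: label random configurations and fit the boundary.
Q = rng.uniform(-2, 2, size=(2000, 3))            # toy 3-DOF config space
y = np.array([in_collision(q) for q in Q])
svm = SVC(kernel="rbf", gamma=2.0).fit(Q, y)

# Keep the collision-free support vectors as boundary witnesses.
sv = svm.support_vectors_
free_sv = sv[~np.array([in_collision(q) for q in sv])]
nn = NearestNeighbors(n_neighbors=1).fit(free_sv)

# Runtime query: penetration depth ~ distance to nearest free boundary sample.
q_colliding = np.array([[0.2, 0.1, 0.0]])
dist, idx = nn.kneighbors(q_colliding)
print("approx. penetration depth:", float(dist[0, 0]))
print("realizing configuration is free:", not in_collision(free_sv[idx[0, 0]]))
```

Because the answer is always a stored collision-free sample, the configuration realizing the (approximate) depth is penetration-free by construction, mirroring the guarantee claimed in the abstract.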

14.
A knowledge-based thinning algorithm
One common defect of thinning algorithms is deformation at crossing points. To solve this problem, a new thinning method, called the knowledge-based thinning algorithm (KBTA), is proposed. It first represents a binary pattern by coded run lengths of the horizontal line segments. The relationship between line segments is then described quantitatively by another new algorithm that makes use of both forward and backward derivatives. It then identifies the regions where branches of the pattern meet, extracts their shape features, and thins all of them. Once these regions are identified, perfect skeletons can be obtained. Other regions are thinned by an existing algorithm based on contour generation. Experiments with a wide variety of binary patterns show that this new technique generates better skeletons than several other well-known algorithms.
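A minimal sketch of the first two steps described above: rows of a binary pattern are coded as run lengths of horizontal segments, and runs in consecutive rows are related by a column-overlap test. The derivative-based analysis, the region classification, and the knowledge base are not reproduced.

```python
def row_runs(row):
    """Return (start, length) runs of 1-pixels in one image row."""
    runs, start = [], None
    for x, v in enumerate(row + [0]):          # sentinel closes a trailing run
        if v and start is None:
            start = x
        elif not v and start is not None:
            runs.append((start, x - start))
            start = None
    return runs

def overlaps(r1, r2):
    """Do two runs from consecutive rows share a column (4-connectivity)?"""
    (s1, l1), (s2, l2) = r1, r2
    return s1 < s2 + l2 and s2 < s1 + l1

pattern = [
    [0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
]
coded = [row_runs(r) for r in pattern]
print(coded)                                   # [[(1, 3)], [(2, 1)], [(0, 5)]]
for y in range(len(coded) - 1):
    for a in coded[y]:
        for b in coded[y + 1]:
            if overlaps(a, b):
                print(f"row {y} run {a} connects to row {y + 1} run {b}")
```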

15.
Pattern Recognition Letters, 1999, 20(11-13): 1415-1422
This paper focuses on the problem of cost-based feature subset selection in the framework of verifying and updating land-cover maps with imagery from different space-borne sensors. The concept of objects is applied: prior knowledge about an object (the polygon under analysis) and its direct neighbors is combined with knowledge about the sensor and utilized in a feature subset selection step.
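A minimal sketch of one plausible reading of cost-based subset selection: greedily add the feature with the best score-gain-to-cost ratio until no candidate improves the score. The evaluation function, the feature names, and the costs are hypothetical placeholders, not the paper's.

```python
def select_features(features, costs, score):
    """Greedy forward selection; score(subset) -> quality in [0, 1]."""
    chosen, best = [], score([])
    while True:
        candidates = [f for f in features if f not in chosen]
        ranked = sorted(
            ((score(chosen + [f]) - best) / costs[f], f) for f in candidates
        )
        if not ranked or ranked[-1][0] <= 0:   # no candidate pays for its cost
            return chosen
        _, f = ranked[-1]
        chosen.append(f)
        best = score(chosen)

# Toy example: quality saturates once 'ndvi' and 'texture' are in the subset.
QUALITY = {frozenset(): 0.5, frozenset({"ndvi"}): 0.7,
           frozenset({"texture"}): 0.65, frozenset({"ndvi", "texture"}): 0.8}
score = lambda s: QUALITY.get(frozenset(s), max(
    v for k, v in QUALITY.items() if k <= frozenset(s)))
costs = {"ndvi": 1.0, "texture": 2.0, "height": 5.0}
print(select_features(list(costs), costs, score))   # -> ['ndvi', 'texture']
```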

16.
Consistent global checkpoints have many uses in distributed computations. A central question in applications that use consistent global checkpoints is to determine whether a consistent global checkpoint that includes a given set of local checkpoints can exist. Netzer and Xu (1995) presented the necessary and sufficient conditions under which such a consistent global checkpoint can exist, but they did not explore which checkpoints could be constructed. In this paper, we prove exactly which local checkpoints can be used for constructing such consistent global checkpoints. We illustrate the use of our results with a simple and elegant algorithm to enumerate all such consistent global checkpoints.
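As a related minimal sketch: a candidate global checkpoint (one local checkpoint per process, each tagged with a vector clock) is causally consistent iff no checkpoint happened-before another. Note that this causal test is weaker than the Netzer-Xu zigzag-path condition the paper builds on; zigzag paths are not modeled here.

```python
def happened_before(v, w):
    """Vector-clock order: v -> w iff v <= w componentwise and v != w."""
    return all(a <= b for a, b in zip(v, w)) and v != w

def consistent(checkpoints):
    """checkpoints[i] = vector clock at process i's chosen local checkpoint."""
    return not any(
        happened_before(checkpoints[i], checkpoints[j])
        for i in range(len(checkpoints))
        for j in range(len(checkpoints))
        if i != j
    )

# p0's checkpoint causally precedes p1's -> inconsistent cut.
print(consistent([(2, 0), (3, 1)]))   # False
# Concurrent checkpoints -> consistent cut.
print(consistent([(2, 0), (1, 1)]))   # True
```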

17.
The overall aim of this paper is to provide a general setting for quantitative quality measures of knowledge-based system behaviour that is widely applicable to many knowledge-based systems. We propose a general approach that we call degradation studies: an analysis of how system output changes as a function of degrading system input, such as incomplete or incorrect data or knowledge. To show the feasibility of our approach, we have applied it in a case study. We took a large and realistic vegetation-classification system and analysed its behaviour under several varieties of incomplete and incorrect input. This case study shows that degradation studies can reveal interesting and surprising properties of the system under study.
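A minimal sketch of a degradation study: feed a system progressively more corrupted input and record how output quality falls off. The toy classifier and the missing-attribute corruption model stand in for the vegetation-classification system and the paper's degradation varieties.

```python
import random

random.seed(1)

def classify(sample):
    """Toy knowledge-based system: label from two attributes (None = missing)."""
    soil, rain = sample
    if soil == "wet" and rain == "high":
        return "marsh"
    if soil == "dry":
        return "steppe"
    return "unknown"

def degrade(sample, p):
    """Drop each attribute independently with probability p (incompleteness)."""
    return tuple(v if random.random() > p else None for v in sample)

data = [(("wet", "high"), "marsh"), (("dry", "low"), "steppe")] * 50
for p in (0.0, 0.2, 0.5, 0.8):
    correct = sum(classify(degrade(x, p)) == y for x, y in data)
    print(f"degradation p={p:.1f}: accuracy={correct / len(data):.2f}")
```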

18.
We formulate the rudiments of a method for assessing the difficulty of dividing a computational problem into "independent simpler parts". This work illustrates measures of complexity which attempt to capture the distinction between "local" and "global" computational problems. One such measure is the covering multiplicity, or average number of partial computations which take account of a given piece of data. Another measure reflects the intuitive notion of a "highly interconnected" computational problem, for which subsets of the data cannot be processed "in isolation". These ideas are applied in the setting of computational geometry to show that the connectivity predicate has unbounded covering multiplicity and is highly interconnected; and in the setting of numerical computations to measure the complexity of evaluating polynomials and solving systems of linear equations.

19.
A global algorithm that uses interval analysis techniques to recognize live variables is presented.
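A minimal sketch of live-variable analysis as a backward data-flow fixpoint over a tiny control-flow graph; this uses naive iteration rather than the paper's interval-analysis technique, and the CFG and use/def sets are invented for illustration.

```python
CFG = {"B1": ["B2"], "B2": ["B2", "B3"], "B3": []}       # tiny flow graph
USE = {"B1": set(), "B2": {"i", "n"}, "B3": {"s"}}
DEF = {"B1": {"i", "s", "n"}, "B2": {"i", "s"}, "B3": set()}

def live_variables(cfg, use, defs):
    """Backward fixpoint: live_in[b] = use[b] | (live_out[b] - defs[b])."""
    live_in = {b: set() for b in cfg}
    changed = True
    while changed:
        changed = False
        for b in cfg:
            live_out = set().union(*(live_in[s] for s in cfg[b]))
            new_in = use[b] | (live_out - defs[b])
            if new_in != live_in[b]:
                live_in[b], changed = new_in, True
    return live_in

print(live_variables(CFG, USE, DEF))
# -> {'B1': set(), 'B2': {'i', 'n'}, 'B3': {'s'}}
```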

20.