Similar Documents
Found 20 similar documents (search time: 0 ms)
1.
Annals of Mathematics and Artificial Intelligence - Knowledge computation tasks, such as computing a base of valid implications, are often infeasible for large data sets. This is in particular true...

2.
Topological fisheye views for visualizing large graphs
Graph drawing is a basic visualization tool that works well for graphs having up to hundreds of nodes and edges. At greater scale, data density and occlusion problems often negate its effectiveness. Conventional pan-and-zoom, multiscale, and geometric fisheye views are not fully satisfactory solutions to this problem. As an alternative, we propose a topological zooming method. It precomputes a hierarchy of coarsened graphs that are combined on-the-fly into renderings, with the level of detail dependent on distance from one or more foci. A related geometric distortion method yields constant information density displays from these renderings.
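For illustration only, the sketch below shows one way the distance-based level-of-detail selection described above could look; it is not the authors' implementation, and the adjacency representation, the function names, and the `radius_per_level` parameter are assumptions made for this example. Each node is assigned a coarsening level that grows with its graph distance from a focus node.

```python
from collections import deque

def bfs_distances(adj, source):
    """Unweighted graph distances from the focus node (plain BFS)."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def pick_levels(adj, focus, num_levels, radius_per_level=2):
    """Assign each node a coarsening level that increases with its distance
    from the focus: nearby nodes keep full detail (level 0), distant ones
    are drawn from the coarsest graph in the precomputed hierarchy."""
    dist = bfs_distances(adj, focus)
    far = max(dist.values()) + 1  # treat unreachable nodes as far away
    return {u: min(num_levels - 1, dist.get(u, far) // radius_per_level)
            for u in adj}

# Toy usage: a path graph 0-1-2-...-9 with the focus at node 0.
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j <= 9] for i in range(10)}
print(pick_levels(adj, focus=0, num_levels=3))
```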

3.
In this paper, we present efficient strategies for realizing composite finite elements for the discretization of PDEs on domains containing small geometric details. In contrast to standard finite elements, the minimal dimension of this new class of finite element spaces is completely independent of the number of geometric details of the physical domains. Hence, it allows coarse-level discretization of PDEs, which can be used, for example, for multi-grid methods and for the homogenization of PDEs in non-periodic situations. Received: 23 September 1996 / Accepted: 23 January 1997

4.
A new algorithm is presented for the modeling and simulation of multi-flexible-body systems. This algorithm is built upon a divide-and-conquer-based multibody dynamics framework, and it is capable of handling arbitrarily large rotations and deformations in articulated flexible bodies. As such, this work extends the current capabilities of the flexible divide-and-conquer algorithm (Mukherjee and Anderson in Comput. Nonlinear Dyn. 2(1):10–21, 2007), which is limited to the use of assumed modes in a floating frame of reference configuration. The present algorithm utilizes existing finite element modeling techniques to construct the equations of motion at the element level, as well as at the body level. It is demonstrated that these equations can be assembled and solved using a divide-and-conquer type methodology. In this respect, the new algorithm is applied using the absolute nodal coordinate formulation (ANCF) (Shabana, 1996). The ANCF is selected because of its straightforward implementation and effectiveness in modeling large deformations. It is demonstrated that the present algorithm provides an efficient and robust method for modeling multi-flexible-body systems that employ highly deformable bodies. The new algorithm is tested using three example systems employing deformable bodies in two and three spatial dimensions. The current examples are limited to the ANCF line or cable elements, but the approach may be extended to higher order elements. In its basic form, the divide-and-conquer algorithm is time and processor optimal, yielding logarithmic complexity O(log(N_b)) when implemented using O(N_b) processors, where N_b is the number of bodies in the system.
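The sketch below only illustrates the recursive structure behind the quoted O(log(N_b)) depth; the actual body-level equations of motion (ANCF elements) and the joint coupling step are the substance of the paper and are replaced here by hypothetical placeholders (`form_body_equations`, `couple_at_joint`).

```python
def form_body_equations(body):
    # Placeholder for the element/body-level equations of motion of one body.
    return {"bodies": [body], "depth": 0}

def couple_at_joint(left, right):
    # Placeholder for coupling two sub-assemblies at their connecting joint.
    return {"bodies": left["bodies"] + right["bodies"],
            "depth": 1 + max(left["depth"], right["depth"])}

def assemble(bodies):
    """Recursive pairwise assembly: with a balanced split, the recursion depth
    (and hence the parallel time with one processor per body) grows as
    O(log N_b) for N_b bodies."""
    if len(bodies) == 1:
        return form_body_equations(bodies[0])
    mid = len(bodies) // 2
    return couple_at_joint(assemble(bodies[:mid]), assemble(bodies[mid:]))

print(assemble(list(range(8)))["depth"])  # 3 levels of coupling for 8 bodies
```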

5.
In this paper we present a model and a fully implicit algorithm for large strain anisotropic elasto-plasticity with mixed hardening in which the elastic anisotropy is taken into account. The formulation is developed using hyperelasticity in terms of logarithmic strains, the multiplicative decomposition of the deformation gradient into an elastic and a plastic part, and the exponential mapping. The novelty in the computational procedure is that it retains the conceptual simplicity of the large strain isotropic elasto-plastic algorithms based on the same ingredients. The plastic correction is performed using a standard small strain procedure in which the stresses are interpreted as generalized Kirchhoff stresses and the strains as logarithmic strains, and the large strain kinematics is reduced to a geometric pre- and post-processor. The procedure is independent of the specified yield function and type of hardening used, and for isotropic elasticity, the algorithm of Eterović and Bathe is automatically recovered as a special case. The results of some illustrative finite element solutions are given in order to demonstrate the capabilities of the algorithm.
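In generic notation (not reproduced from the paper), the ingredients named above are the multiplicative split of the deformation gradient, the logarithmic elastic strain used in the hyperelastic law, and the exponential-map update of the plastic part, where N denotes the plastic flow direction and Δγ the plastic multiplier:

```latex
% Multiplicative split, logarithmic elastic strain, exponential-map plastic update
F = F^{e} F^{p}, \qquad
E^{e} = \tfrac{1}{2}\,\ln C^{e}, \qquad C^{e} = (F^{e})^{\mathsf{T}} F^{e}, \qquad
F^{p}_{n+1} = \exp\!\left(\Delta\gamma\, N\right) F^{p}_{n}.
```

As the abstract describes, the return mapping is then carried out on the logarithmic strains with a standard small-strain procedure, and the large-strain kinematics enters only as a geometric pre- and post-processor.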

6.
This paper presents an algorithm for image completion based on large displacement views. Distinct from most existing image completion methods, which exploit only the target image's own information to complete the damaged regions, our algorithm makes full use of a large displacement view (LDV) of the same scene, which introduces enough information to resolve the otherwise ill-posed problem. To eliminate perspective distortion during the warping of the LDV image, we first decompose both the target image and the LDV image into several corresponding planar scene regions (PSRs) and transform the candidate PSRs of the LDV image onto their correspondences in the target image. Then, using the transformed PSRs, we develop a new image repairing algorithm, coupled with graph-cut-based image stitching, texture-synthesis-based image inpainting, and image-fusion-based hole filling, to complete the missing regions seamlessly. Finally, the ghost effect between the repaired region and its surroundings is eliminated by Poisson image blending. Our algorithm effectively preserves the structure information in the missing area of the target image and produces a repaired result comparable to its original appearance. Experiments show the effectiveness of our method.

7.
Great success in scene parsing (also known as semantic segmentation) has been achieved with the pipeline of fully convolutional networks (FCNs). Nevertheless, the...

8.
The main new result reported in this paper is a computer-based approach for training strategic competences in practical jobs such as those of salesmen or waiters. The environment represents job situations in which the students may act from basic principles. The interaction is similar to that in simulations or microworlds, but there is no underlying model of the activity. In order to provide feedback to the learner, a model has been built based on the analysis of a large set of cases provided by experts. This model uses formal knowledge and contextual knowledge to build an explanation mechanism providing textual comments and multimedia illustrations.

9.
This paper presents joint contexts optimization in mobile grids. It describes device context information for context-aware services in mobile device collaboration. The objective is to dynamically deliver services to mobile grid users according to the current context of the mobile grid environment. A utility function that expresses values for the current contexts serves as the objective function. The optimization is carried out by a joint context parameter optimizer with respect to this objective function. A joint contexts optimization algorithm is proposed that decomposes the mobile grid system optimization problem into sub-problems. Experiments evaluate the performance of the joint contexts optimization algorithm.
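A minimal sketch of the kind of utility-driven selection described above; the context parameters, weights, and candidate configurations below are hypothetical and are not taken from the paper.

```python
# Weighted-sum utility over context parameters; the optimizer simply picks the
# service configuration with the highest utility for the current context.
WEIGHTS = {"battery": 0.4, "bandwidth": 0.4, "latency": 0.2}

def utility(context):
    """Score a candidate configuration's predicted context values in [0, 1]."""
    return sum(WEIGHTS[k] * context[k] for k in WEIGHTS)

candidates = {
    "local_execution": {"battery": 0.3, "bandwidth": 1.0, "latency": 0.9},
    "offload_to_grid": {"battery": 0.9, "bandwidth": 0.5, "latency": 0.6},
}
best = max(candidates, key=lambda name: utility(candidates[name]))
print(best, utility(candidates[best]))
```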

10.
A new axiomatic system OST of operational set theory is introduced in which the usual language of set theory is expanded to allow us to talk about (possibly partial) operations applicable both to sets and to operations. OST is equivalent in strength to admissible set theory, and a natural extension of OST is equivalent in strength to ZFC. The language of OST provides a framework in which to express “small” large cardinal notions—such as those of being an inaccessible cardinal, a Mahlo cardinal, and a weakly compact cardinal—in terms of operational closure conditions that specialize to the analogue notions on admissible sets. This illustrates a wider program whose aim is to provide a common framework for analogues of large cardinal notions that have appeared in admissible set theory, admissible recursion theory, constructive set theory, constructive type theory, explicit mathematics, and systems of recursive ordinal notations that have been used in proof theory.

11.
As the user base for ubiquitous technology expands to developing regions, the likelihood of disparity between the lived experience of design team members (developers, designers, researchers, etc.) and end users has increased. Human-centered design (HCD) provides a toolkit of research methods aimed at helping bridge the distance between technology design teams and end users. However, we have found that traditional approaches to HCD research methods are difficult to deploy in developing regions. In this paper, we share our experiences of adapting HCD research methodologies to the Central Asia context and some lessons we have learned. While our lessons are many, reconsidering the unit of analysis from the individual to larger social units was an early discovery that provided a frame for later research activities that focused on ubicomp development. We argue that lessons and challenges derived from our experience will generalize to other research investigations in which researchers are trying to adapt common HCD data collection methods to create ubiquitous technologies for and/or with distant audiences in developing regions.

12.
In next-generation classrooms and educational environments, interactive technologies such as surface computing, natural gesture interfaces, and mobile devices will enable new means of motivating and engaging students in active learning. Our foundational studies provide a corpus of over 10,000 touch interactions and nearly 7,000 gestures collected from nearly 70 adults and children aged 7 years and above, which can help us understand the characteristics of children’s interactions in these modalities and how they differ from adults. Based on these data, we identify key design and implementation challenges of supporting children’s touch and gesture interactions, and we suggest ways to address them. For example, we find children have more trouble successfully acquiring onscreen targets and having their gestures recognized than do adults, especially the youngest age group (7–10 years old). The contributions of this work provide a foundation that will enable touch-based interactive educational apps that increase student success.

13.
The problem of the logarithmic discretization of an arbitrary positive function (such as the density of states) is studied in general terms. Logarithmic discretization has arbitrary high resolution around some chosen point (such as Fermi level) and it finds application, for example, in the numerical renormalization group (NRG) approach to quantum impurity problems (Kondo model), where the continuum of the conduction band states needs to be reduced to a finite number of levels with good sampling near the Fermi level. The discretization schemes under discussion are required to reproduce the original function after averaging over different interleaved discretization meshes, thus systematic deviations which appear in the conventional logarithmic discretization are eliminated. An improved scheme is proposed in which the discretization-mesh points themselves are determined in an adaptive way; they are denser in the regions where the function has higher values. Such schemes help in reducing the residual numeric artefacts in NRG calculations in situations where the density of states approaches zero over extended intervals. A reference implementation of the solver for the differential equations which determine the full set of discretization coefficients is also described.
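For concreteness, the conventional (non-adaptive) logarithmic mesh that this work improves upon can be sketched as below, with Λ the usual NRG discretization parameter and z the interleaving ("twist") parameter that is averaged over to remove systematic deviations. The paper's adaptive scheme, in which the mesh points solve a set of differential equations, is not reproduced here.

```python
import numpy as np

def log_mesh(Lambda=2.0, z=0.0, n_points=30):
    """Conventional logarithmic mesh on (0, 1]: x_n = Lambda**-(n + z).
    Points accumulate toward x = 0 (e.g. the Fermi level), giving the
    arbitrarily high resolution there that is mentioned above."""
    n = np.arange(n_points)
    return Lambda ** -(n + z)

# Interleaved ("z-averaged") meshes: averaging results obtained on these
# shifted meshes is what removes the systematic discretization deviations.
meshes = [log_mesh(z=z) for z in (0.0, 0.25, 0.5, 0.75)]
for m in meshes:
    print(m[:4])
```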

14.
Temporal abstraction is the task of abstracting higher‐level concepts from time‐stamped data in a context‐sensitive manner. We have developed and implemented a formal knowledge‐based framework for decomposing and solving that task that supports acquisition, maintenance, reuse, and sharing of temporal‐abstraction knowledge. We present the logical model underlying the representation and runtime formation of interpretation contexts. Interpretation contexts are relevant for abstraction of time‐oriented data and are induced by input data, concluded abstractions, external events, goals of the temporal‐abstraction process, and certain combinations of interpretation contexts. Knowledge about interpretation contexts is represented as a context ontology and as a dynamic induction relation over interpretation contexts and other proposition types. Induced interpretation contexts are either basic, composite, generalized, or nonconvex. We provide two examples of applying our model using an implemented system; one in the domain of clinical medicine (monitoring of diabetes patients) and one in the domain of traffic engineering (evaluation of traffic‐control actions). We discuss several distinct advantages to the explicit separation of interpretation‐context propositions from the propositions inducing them and from the abstractions created within them.

15.
16.
17.
This paper addresses the estimation of a small gallery size that can generate the optimal error estimate and its confidence on a large population (relative to the size of the gallery), which is one of the fundamental problems encountered in performance prediction for object recognition. It uses a generalized two-dimensional prediction model that combines a hypergeometric probability distribution model with a binomial model and also considers the data distortion problem in large populations. Learning is incorporated in the prediction process in order to find the optimal small gallery size and to improve the prediction. The Chernoff and Chebychev inequalities are used as a guide to obtain the small gallery size. During the prediction, the expectation–maximization (EM) algorithm is used to learn the match score and the non-match score distributions that are represented as a mixture of Gaussians. The optimal size of the small gallery is learned by comparing it with the sizes obtained by the statistical approaches and at the same time the upper and lower bounds for the prediction on large populations are obtained. Results for the prediction are presented for the NIST-4 fingerprint database.
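A toy sketch of the two statistical ingredients mentioned above, using synthetic scores assumed for illustration only (this is not the paper's prediction model): a Gaussian-mixture fit of the score distribution by EM, and a Chebyshev-style lower bound on the number of samples (standing in for the small gallery size) needed to estimate an error rate to a given precision.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic match / non-match similarity scores (assumed, for illustration).
scores = np.concatenate([rng.normal(0.8, 0.05, 500),   # match scores
                         rng.normal(0.4, 0.10, 500)])  # non-match scores

# EM fit of the pooled score distribution as a mixture of two Gaussians.
gmm = GaussianMixture(n_components=2, random_state=0).fit(scores.reshape(-1, 1))
print("means:", gmm.means_.ravel(), "weights:", gmm.weights_)

def chebyshev_sample_size(sigma2, eps, delta):
    """Chebyshev bound: n >= sigma^2 / (eps^2 * delta) guarantees that the
    sample mean deviates from the true mean by more than eps with
    probability at most delta."""
    return int(np.ceil(sigma2 / (eps ** 2 * delta)))

print("samples needed:", chebyshev_sample_size(sigma2=0.01, eps=0.02, delta=0.05))
```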

18.
Electronically mediated social contexts (EMSCs), in which interactions and activities are largely or completely computer-mediated, have become important settings for investigation by Information Systems (IS) scholars. Owing to the relative novelty and originality of EMSCs, IS researchers often lack existing theories to make sense of the processes that emerge in them. Therefore, many IS researchers have relied upon grounded theory in order to develop new theory based on empirical observations from EMSCs. This article reviews a selected set of papers concerned with grounded IS research on EMSCs. It examines how the authors of these papers handled the characteristics of EMSCs and, in particular, addresses the topics of data collection, data analysis, and theory building. The paper also draws implications and recommendations for grounded researchers interested in investigating these original and fascinating environments in their future work. For example, it calls for grounded researchers on EMSCs to reflect upon the characteristics of their domains of inquiry, to respect the logic of discovery of grounded methods, and to articulate more clearly their theoretical ambitions along the induction/abduction continuum. The paper closes by suggesting promising areas for future grounded research on EMSCs, including taking advantage of the potential for combining qualitative and quantitative analytical methods.

19.
Fan Shidi, Xu Hongji, Xiong Hailiang, Chen Min, Liu Qiang, Xing Qinghua, Li Tiankuo. Applied Intelligence, 2022, 52(1): 681-698

As the key products of ubiquitous computing, context-aware systems have been widely used in many fields such as the digital home and smart healthcare. However, in the face of the typical application environment formed by multiple sensors and intelligent devices, the inconsistency of contexts, which hinders the normal operation of these systems, has become an unavoidable and urgent problem. In this paper, we propose a new quality of context (QoC) parameter, relevance, to enrich the comprehensive assessment of context quality. On this basis, we put forward novel context inconsistency elimination algorithms that use multiple QoC parameters and Dempster-Shafer theory to solve the inconsistency problem for sensed contexts and non-sensed contexts, respectively. Experimental analyses from multiple dimensions show that the proposed algorithms have clear advantages over other algorithms in terms of accuracy, stability, and robustness.
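As an illustration of the Dempster-Shafer machinery the abstract refers to (the frame of discernment and the mass values below are hypothetical, not taken from the paper), Dempster's rule of combination for two mass functions over the same frame can be sketched as follows:

```python
from itertools import combinations

def powerset(frame):
    """All non-empty subsets of the frame of discernment."""
    s = list(frame)
    return [frozenset(c) for r in range(1, len(s) + 1)
            for c in combinations(s, r)]

def combine(m1, m2, frame):
    """Dempster's rule: combine two basic mass assignments, renormalizing by
    1 - K, where K is the total mass on conflicting (disjoint) focal elements."""
    combined = {A: 0.0 for A in powerset(frame)}
    conflict = 0.0
    for B, mB in m1.items():
        for C, mC in m2.items():
            inter = B & C
            if inter:
                combined[inter] += mB * mC
            else:
                conflict += mB * mC
    return {A: v / (1.0 - conflict) for A, v in combined.items() if v > 0}

# Hypothetical frame: two contradictory context readings from two sensors.
frame = {"present", "absent"}
m_sensor1 = {frozenset({"present"}): 0.7, frozenset(frame): 0.3}
m_sensor2 = {frozenset({"absent"}): 0.4, frozenset(frame): 0.6}
print(combine(m_sensor1, m_sensor2, frame))
```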


20.
Bidirectional texture functions, or BTFs, accurately model reflectance variation at a fine (meso-) scale as a function of lighting and viewing direction. BTFs also capture view-dependent visibility variation, also called masking or parallax, but only within surface contours. Mesostructure detail is neglected at silhouettes, so BTF-mapped objects retain the coarse shape of the underlying model. We augment BTF rendering to obtain approximate mesoscale silhouettes. Our new representation, the 4D mesostructure distance function (MDF), tabulates the displacement from a reference frame where a ray first intersects the mesoscale geometry beneath as a function of ray direction and ray position along that reference plane. Given an MDF, the mesostructure silhouette can be rendered with a per-pixel depth peeling process on graphics hardware, while shading and local parallax are handled by the BTF. Our approach allows real-time rendering, handles complex, non-height-field mesostructure, requires that no additional geometry be sent to the rasterizer other than the mesh triangles, is more compact than textured visibility representations used previously, and, for the first time, can be easily measured from physical samples. We also adapt the algorithm to capture detailed shadows cast both by and onto BTF-mapped surfaces. We demonstrate the efficiency of our algorithm on a variety of BTF data, including real data acquired using our BTF–MDF measurement system.
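A crude CPU-side sketch of querying a tabulated 4D MDF is shown below; the table resolution, the toy data, and the nearest-neighbour lookup are assumptions for illustration, whereas the paper performs the equivalent lookup during per-pixel depth peeling on graphics hardware.

```python
import numpy as np

# Hypothetical 4D mesostructure distance function (MDF): displacement of the
# first mesoscale intersection below the reference plane, tabulated over ray
# position (u, v) on the plane and ray direction (theta, phi).
RES = 16
mdf = np.full((RES, RES, RES, RES), np.inf)  # inf = ray misses the mesostructure
mdf[4:12, 4:12, :, :] = 0.1                  # a toy bump occupying the centre

def query_mdf(u, v, theta, phi):
    """Nearest-neighbour lookup; near silhouettes, a finite value means the
    mesostructure is still hit beyond the base surface contour."""
    idx = lambda x: min(RES - 1, int(x * RES))
    return mdf[idx(u), idx(v), idx(theta / np.pi), idx(phi / (2 * np.pi))]

print(query_mdf(0.5, 0.5, 0.3, 1.0), query_mdf(0.05, 0.05, 0.3, 1.0))
```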

