Similar Documents
20 similar documents retrieved.
1.
Neurons in primary visual cortex respond selectively to oriented stimuli such as edges and lines. The long-range horizontal connections between them are thought to facilitate contour integration. While many physiological and psychophysical findings suggest that collinear or association field models of good continuation dictate particular projection patterns of horizontal connections to guide this integration process, significant evidence of interactions inconsistent with these hypotheses is accumulating. We first show that natural random variations around the collinear and association field models cannot account for these inconsistencies, a fact that motivates the search for more principled explanations. We then develop a model of long-range projection fields that formalizes good continuation based on differential geometry. The analysis implicates curvature(s) in a fundamental way, and the resulting model explains both consistent data and apparent outliers. It quantitatively predicts the (typically ignored) spread in projection distribution, its nonmonotonic variance, and the differences found among individual neurons. Surprisingly, and for the first time, this model also indicates that texture (and shading) continuation can serve as an alternative and complementary functional explanation to contour integration. Because current anatomical data support both (curve and texture) integration models equally, and because both are important computationally, new testable predictions are derived to allow their differentiation and identification.
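For orientation, a compact statement of the first-order "good continuation" constraint that collinear and association-field models build on is the co-circularity relation (notation ours, not taken from the paper): two oriented edge elements whose positions are joined by a line of direction $\phi$ are compatible when

$$\theta_j \equiv 2\phi - \theta_i \pmod{\pi},$$

where $\theta_i$ and $\theta_j$ are the elements' orientations. The curvature-based projection-field model discussed above can be read as generalizing such first-order constraints by letting the compatible orientation at a neighboring position vary explicitly with the local curvature(s).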

2.
We present a new approach aimed at understanding the structure of connections in edge-bundling layouts. We combine the advantages of edge bundles with a bundle-centric simplified visual representation of a graph's structure. For this, we first compute a hierarchical edge clustering of a given graph layout which groups similar edges together. Next, we render clusters at a user-selected level of detail using a new image-based technique that combines distance-based splatting and shape skeletonization. The overall result displays a given graph as a small set of overlapping shaded edge bundles. Luminance, saturation, hue, and shading encode edge density, edge types, and edge similarity. Finally, we add brushing and a new type of semantic lens to help navigation where local structures overlap. We illustrate the proposed method on several real-world graph datasets.
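As a rough illustration of the image-based step (a toy sketch under our own assumptions, not the authors' implementation), one can splat bundled edge polylines into a density image and extract a skeleton from the thresholded result:

```python
# Toy sketch: distance-based splatting of edge polylines followed by
# skeletonization (illustrative only; not the paper's implementation).
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.morphology import skeletonize

def splat_and_skeletonize(polylines, shape=(512, 512), sigma=4.0, thresh=0.1):
    """polylines: list of (N, 2) float arrays of (x, y) points in pixel coords."""
    density = np.zeros(shape, dtype=float)
    for pts in polylines:
        # Rasterize each polyline by densely sampling its segments.
        for (x0, y0), (x1, y1) in zip(pts[:-1], pts[1:]):
            n = int(max(abs(x1 - x0), abs(y1 - y0))) + 1
            xs = np.linspace(x0, x1, n).astype(int).clip(0, shape[1] - 1)
            ys = np.linspace(y0, y1, n).astype(int).clip(0, shape[0] - 1)
            density[ys, xs] += 1.0
    # Distance-based splatting approximated by a Gaussian blur of the rasterized edges.
    density = gaussian_filter(density, sigma)
    density /= density.max() + 1e-12
    shape_mask = density > thresh          # the "bundle shape"
    skeleton = skeletonize(shape_mask)     # its centerline
    return density, shape_mask, skeleton
```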

3.
On Discriminative Bayesian Network Classifiers and Logistic Regression
Discriminative learning of the parameters in the naive Bayes model is known to be equivalent to a logistic regression problem. Here we show that the same fact holds for much more general Bayesian network models, as long as the corresponding network structure satisfies a certain graph-theoretic property. The property holds for naive Bayes but also for more complex structures such as tree-augmented naive Bayes (TAN) as well as for mixed diagnostic-discriminative structures. Our results imply that for networks satisfying our property, the conditional likelihood cannot have local maxima, so that the global maximum can be found by simple local optimization methods. We also show that if this property does not hold, then in general the conditional likelihood can have local, non-global maxima. We illustrate our theoretical results by empirical experiments with local optimization in a conditional naive Bayes model. Furthermore, we provide a heuristic strategy for pruning the number of parameters and relevant features in such models. For many data sets, we obtain good results with heavily pruned submodels containing many fewer parameters than the original naive Bayes model.
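A minimal sketch of the equivalence for the naive Bayes case (our own illustration, assuming binary features and scikit-learn's LogisticRegression; the paper's result covers far more general structures): maximizing the conditional likelihood of a naive Bayes model over binary features is the same optimization as fitting a logistic regression on those features.

```python
# Sketch: discriminative (conditional-likelihood) training of naive Bayes over
# binary features coincides with logistic regression on the same features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 10))          # binary features
w_true = rng.normal(size=10)
p = 1.0 / (1.0 + np.exp(-(X @ w_true - 1.0)))   # some conditional distribution
y = (rng.random(500) < p).astype(int)

# Fitting logistic regression maximizes the conditional log-likelihood
# sum_i log P(y_i | x_i); under the naive Bayes parameterization with binary
# features the conditional distribution has exactly this logistic form, so the
# discriminatively trained naive Bayes classifier and this model coincide.
clf = LogisticRegression(C=1e6, max_iter=1000).fit(X, y)   # large C ~ unregularized
print(clf.coef_, clf.intercept_)
```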

4.
This paper presents a new approach for the transformation and normal vector calculation algorithms of parametrically defined surfaces via dual vectors and line transformations. The surface is defined via dual points, the transformation is performed by rotations and translations based on screw theory, and normal vector calculation is used for shading based on Phong's illumination model. The main benefit of this approach lies in the compactness of the surface's representation, since geometrical characteristics, such as tangent vectors, that are necessary for shading algorithms are included within its definition. An extensive comparison is performed between the proposed approach and the traditional homogeneous model, presenting the merits of our approach. Analytical and experimental determination of the computational cost via computer implementation of 3D surface transformation and shading is presented. Point-based methods for the representation, transformation and shading of parametrically defined surfaces are compared to the introduced line-based methods (dual quaternions and dual orthogonal matrices). It is shown that the simplified rendering procedure for 3D objects is considerably faster using screw theory than the traditional point-based structures.
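As a point of reference (our own minimal sketch, using an explicit axis-angle-pitch screw parameterization rather than the paper's dual-quaternion machinery), a screw displacement rotates a point about an axis and translates it along that same axis:

```python
# Minimal screw-motion sketch (illustrative; not the paper's dual-quaternion scheme):
# rotate a point by angle theta about an axis through point c with unit direction s,
# then translate by d along the same axis.
import numpy as np

def screw_transform(p, c, s, theta, d):
    s = s / np.linalg.norm(s)
    v = p - c
    # Rodrigues' rotation formula about direction s.
    v_rot = (v * np.cos(theta)
             + np.cross(s, v) * np.sin(theta)
             + s * np.dot(s, v) * (1.0 - np.cos(theta)))
    return c + v_rot + d * s

p = np.array([1.0, 0.0, 0.0])
print(screw_transform(p, c=np.zeros(3), s=np.array([0.0, 0.0, 1.0]),
                      theta=np.pi / 2, d=0.5))
# -> approximately [0, 1, 0.5]
```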

5.
The main challenge of a search engine is to find information that is relevant and appropriate. However, this can become difficult when queries are issued using ambiguous words. Rijsbergen first hypothesized a clustering approach for web pages wherein closely associated pages are treated as a semantic group with the same relevance to the query (Rijsbergen 1979). In this paper, we extend Rijsbergen's cluster hypothesis to multimedia content such as images. Given a user query, the polysemy in the returned image set is related to the many possible meanings of the query. We develop a method to cluster the polysemous images into their semantic categories. The resulting clusters can be seen as the visual senses of the query, which collectively embody the visual interpretations of the query. At the heart of our method is a non-parametric Bayesian approach that exploits the complementary text and visual information of images for semantic clustering. Latent structures of polysemous images are mined using the Hierarchical Dirichlet Process (HDP). HDP is a non-parametric Bayesian model that represents images using a mixture of components. The main advantage of our model is that the number of mixture components is not fixed a priori, but is determined during the posterior inference process. This allows our model to grow with the level of polysemy (and visual diversity) of images. The same set of components is used to model all images, with only the mixture weights varying amongst images. Evaluation results on a large collection of web images show the efficacy of our approach.
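A tiny illustration of why the number of mixture components need not be fixed a priori (our own sketch of Dirichlet-process stick-breaking, not the paper's full HDP inference): component weights are generated until they become negligible, so the effective number of clusters can grow with the data's diversity.

```python
# Sketch: truncated stick-breaking construction of Dirichlet-process mixture weights.
# The number of effectively used components adapts rather than being fixed a priori.
import numpy as np

def stick_breaking_weights(alpha, tol=1e-3, rng=None):
    rng = rng or np.random.default_rng()
    weights, remaining = [], 1.0
    while remaining > tol:
        b = rng.beta(1.0, alpha)      # break off a fraction of the remaining stick
        weights.append(remaining * b)
        remaining *= (1.0 - b)
    return np.array(weights)

w = stick_breaking_weights(alpha=2.0, rng=np.random.default_rng(0))
print(len(w), w.round(3))   # effective number of components and their weights
```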

6.
Visual saliency guided normal enhancement technique for 3D shape depiction
Visual saliency can effectively guide the viewer's visual attention to salient regions of a 3D shape. Incorporating the visual saliency measure of a polygonal mesh into the normal enhancement operation, a novel saliency-guided shading scheme for shape depiction is developed in this paper. Guided by the visual saliency measure of the 3D shape, our approach adjusts the illumination and shading to enhance the geometrically salient features of the underlying model by dynamically perturbing the surface normals. The experimental results demonstrate that our non-photorealistic shading scheme can enhance the depiction of the underlying shape and the visual perception of its salient features for expressive rendering. Compared with previous normal enhancement techniques, our approach can effectively convey surface details to improve shape depiction without impairing the desired appearance.
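One plausible reading of the core operation (a sketch under our own assumptions; the paper's exact enhancement operator may differ) is to exaggerate the deviation of each vertex normal from a locally smoothed normal, with the amount of exaggeration scaled by the vertex's saliency:

```python
# Sketch: saliency-weighted normal enhancement. Each vertex normal is pushed away
# from a smoothed (low-frequency) normal by an amount proportional to its saliency.
# Illustrative only; the paper's operator may differ in detail.
import numpy as np

def enhance_normals(normals, smoothed_normals, saliency, strength=1.5):
    """normals, smoothed_normals: (N, 3) unit vectors; saliency: (N,) values in [0, 1]."""
    detail = normals - smoothed_normals                      # high-frequency component
    enhanced = normals + strength * saliency[:, None] * detail
    enhanced /= np.linalg.norm(enhanced, axis=1, keepdims=True)
    return enhanced
```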

7.
Molecular dynamics simulations are a principal tool for studying molecular systems. Such simulations are used to investigate molecular structure, dynamics, and thermodynamical properties, and serve as a replacement for, or complement to, costly and dangerous experiments. With the increasing availability of computational power, the resulting data sets are becoming ever larger, and benchmarks indicate that interactive visualization on desktop computers poses a challenge when substantially more than a few million glyphs have to be rendered. Trading visual quality for rendering performance is a common approach when interactivity has to be guaranteed. In this paper we address both problems and present a method for high-quality visualization of massive molecular dynamics data sets. We employ several optimization strategies at different levels of granularity, such as data quantization, data caching in video memory, and a two-level occlusion culling strategy: coarse culling via hardware occlusion queries and vertex-level culling using maximum depth mipmaps. To ensure optimal image quality, we employ GPU raycasting and deferred shading with smooth normal vector generation. We demonstrate that our method allows us to interactively render data sets containing tens of millions of high-quality glyphs.
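For the vertex-level culling stage, a rough sketch of the idea (our own CPU-side illustration; the paper's GPU implementation differs in detail) is to build a maximum-depth mipmap of the depth buffer and reject a glyph whenever even its nearest depth lies behind the farthest depth recorded in the screen region it covers:

```python
# Sketch: maximum-depth mipmap for conservative occlusion culling (CPU illustration).
import numpy as np

def max_depth_mipmap(depth):
    """Build a pyramid where each level stores the per-2x2-block maximum depth."""
    levels = [depth]
    while min(levels[-1].shape) > 1:
        d = levels[-1]
        h, w = d.shape[0] // 2 * 2, d.shape[1] // 2 * 2
        d = d[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
        levels.append(d)
    return levels

def glyph_occluded(levels, x0, y0, x1, y1, glyph_near_depth):
    """Conservatively test a screen-space bounding box against a coarse mip level."""
    level = min(len(levels) - 1, int(np.ceil(np.log2(max(x1 - x0, y1 - y0, 1)))))
    s = 2 ** level
    region = levels[level][y0 // s:(y1 // s) + 1, x0 // s:(x1 // s) + 1]
    # Cull only if the glyph's nearest point is behind the farthest depth already
    # written anywhere in the covered region (never culls a visible glyph).
    return glyph_near_depth > region.max()
```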

8.
With ever-increasing display resolution for wide field-of-view displays, such as head-mounted displays or 8k projectors, shading has become the major computational cost in rasterization. To reduce computational effort, we propose an algorithm that only shades visible features of the image while cost-effectively interpolating the remaining features without affecting perceived quality. In contrast to previous approaches, we do not only simulate acuity falloff but also introduce a sampling scheme that incorporates multiple aspects of the human visual system: acuity, eye motion, contrast (stemming from geometry, material or lighting properties), and brightness adaptation. Our sampling scheme is incorporated into a deferred shading pipeline to shade the image's perceptually relevant fragments while a pull-push algorithm interpolates the radiance for the rest of the image. Our approach does not impose any restrictions on the performed shading. We conduct a number of psycho-visual experiments to validate scene- and task-independence of our approach. The number of fragments that need to be shaded is reduced by 50% to 80%. Our algorithm scales favorably with increasing resolution and field-of-view, rendering it well-suited for head-mounted displays and wide-field-of-view projection.
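As a toy illustration of one ingredient of such a sampling scheme, the acuity-driven part (our own sketch with made-up constants; the paper additionally combines eye motion, contrast, and brightness adaptation), the probability of shading a fragment can be made to fall off with eccentricity from the tracked gaze point:

```python
# Toy sketch: eccentricity-dependent shading mask. Fragments near the gaze point are
# always shaded; the shading probability falls off with visual eccentricity.
# Constants are illustrative, not taken from the paper.
import numpy as np

def shading_mask(height, width, gaze, fov_deg=100.0, floor=0.05, rng=None):
    rng = rng or np.random.default_rng()
    ys, xs = np.mgrid[0:height, 0:width]
    # Approximate eccentricity in degrees from pixel distance to the gaze point.
    px_per_deg = width / fov_deg
    ecc = np.hypot(xs - gaze[0], ys - gaze[1]) / px_per_deg
    # Simple acuity falloff: full density in the fovea, hyperbolic decay outside.
    density = np.clip(2.0 / (1.0 + ecc), floor, 1.0)
    return rng.random((height, width)) < density   # True = shade, False = interpolate

mask = shading_mask(720, 1280, gaze=(640, 360), rng=np.random.default_rng(0))
print(mask.mean())   # fraction of fragments actually shaded
```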

9.
Locally parallel dense patterns, sometimes called texture flows, define a perceptually coherent structure of particular significance to perceptual organization. We argue that, with applications ranging from image segmentation and edge classification to shading analysis and shape interpretation, texture flows deserve attention equal to edge segment grouping and curve completion. This paper develops the notion of texture flow from a geometrical point of view to argue that local measurements of such structures must incorporate two curvatures. We show how basic theoretical considerations lead to a unique model for the local behavior of the flow and to a notion of texture flow "good continuation." This, in turn, translates to a specification of consistency constraints between nearby flow measurements, which we use for the computation of globally (piecewise) coherent structure through the contextual framework of relaxation labeling. We demonstrate the results on synthetic and natural images.
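For concreteness, the two curvatures in question can be written as follows (notation ours): if $\theta(x,y)$ denotes the local dominant orientation of the flow, its tangential and normal curvatures are the directional derivatives of $\theta$ along and across the flow,

$$\kappa_T = \nabla\theta \cdot \mathbf{e}_T, \qquad \kappa_N = \nabla\theta \cdot \mathbf{e}_N,$$

with $\mathbf{e}_T = (\cos\theta, \sin\theta)$ and $\mathbf{e}_N = (-\sin\theta, \cos\theta)$; the notion of texture flow "good continuation" then constrains how nearby measurements of $(\theta, \kappa_T, \kappa_N)$ may co-vary.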

10.
One of the challenging problems in shape editing is adapting shapes with diverse structures to various editing needs. In this paper we introduce a shape editing approach that automatically adapts the structure of a shape being edited in response to user inputs. Given a category of shapes, our approach first classifies them into groups based on their constituent parts. The group-sensitive priors, including both inter-group and intra-group priors, are then learned through statistical structure analysis and multivariate regression. By using these priors, the inherent characteristics and typical variations of shape structures can be well captured. Based on such group-sensitive priors, we propose a framework for real-time shape editing, which adapts the structure of the shape to continuous user editing operations. Experimental results show that the proposed approach is capable of both structure-preserving and structure-varying shape editing.

11.
Hu Hao, Shen Gang, Liu Yuling, Fu Zhengxin, Yu Bin. Multimedia Tools and Applications, 2019, 78(9): 12055-12082.

Random grid (RG) is an alternative approach to realize a visual secret sharing (VSS) scheme. RG-based VSS has merits such as no pixel expansion and no tailor-made matrix requirement. Recently, many investigations of RG-based VSS have been made; however, they leave room for improvement. In this paper, we present improvements to RG-based VSS. Specifically, two improved schemes are proposed, namely RG-based VSS for general access structure (GAS) with improved contrast and extended RG-based VSS with improved access structure. The first scheme achieves better contrast than previous schemes. The second scheme reduces the chance of suspicion about secret image encryption by generating meaningful shares instead of the noise-like shares of the first scheme, and it extends the access structure from (k, k) to GAS while preserving the property that the contrast of the recovered image can be traded against that of the share images by setting a certain parameter from small to large. Finally, theoretical analyses and experimental results are provided to demonstrate the effectiveness and advantages of the proposed schemes.
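For background, a minimal sketch of the classical (2, 2) random-grid scheme that such work builds on (our illustration of the basic RG idea, not of the paper's GAS or extended constructions):

```python
# Sketch: classic (2, 2) random-grid visual secret sharing.
# 0 = transparent/white, 1 = opaque/black. Stacking shares = pixelwise OR.
import numpy as np

def rg_share(secret, rng=None):
    """secret: 2D array of 0/1. Returns two noise-like shares."""
    rng = rng or np.random.default_rng()
    share1 = rng.integers(0, 2, size=secret.shape)
    # White secret pixel: copy share1; black secret pixel: use its complement.
    share2 = np.where(secret == 0, share1, 1 - share1)
    return share1, share2

secret = np.zeros((64, 64), dtype=int)
secret[16:48, 16:48] = 1                      # a black square as the secret
s1, s2 = rg_share(secret, np.random.default_rng(0))
stacked = s1 | s2                             # stack the two shares
print(stacked[secret == 1].mean(),            # ~1.0: black areas fully black
      stacked[secret == 0].mean())            # ~0.5: white areas half black -> contrast
```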


12.
The “visual space” of an optical observer situated at a single, fixed viewpoint is necessarily very ambiguous. Although the structure of the “visual field” (the lateral dimensions, i.e., the “image”) is well defined, the “depth” dimension has to be inferred from the image on the basis of “monocular depth cues” such as occlusion, shading, etc. Such cues are in no way “given”, but are guesses on the basis of prior knowledge about the generic structure of the world and the laws of optics. Thus such a guess is like a hallucination that is used to tentatively interpret image structures as depth cues. The guesses are successful if they lead to a coherent interpretation. Such “controlled hallucination” (in psychological terminology) is similar to the “analysis by synthesis” of computer vision. Although highly ambiguous, visual spaces do have geometrical structure. The group of ambiguities left open by the cues (e.g., the well-known bas-relief ambiguity in the case of shape from shading) may be interpreted as the group of congruences (proper motions) of the space. The general structure of visual spaces for different visual fields is explored in the paper. Applications include improved viewing systems for optical man-machine interfaces.
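As a reminder of the kind of ambiguity group being referred to (a standard formulation, stated in our notation): under the generalized bas-relief ambiguity, a Lambertian surface given as a depth map $z(x,y)$ can be replaced by

$$\bar z(x,y) = \lambda\, z(x,y) + \mu x + \nu y, \qquad \lambda \neq 0,$$

which, together with matching transformations of the albedo and the light source, produces exactly the same shaded images; such transformation groups play the role of the "congruences" of the visual space described above.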

13.
Improving shape depiction under arbitrary rendering
Based on the observation that shading conveys shape information through intensity gradients, we present a new technique called Radiance Scaling that modifies the classical shading equations to offer versatile shape depiction functionalities. It works by scaling reflected light intensities depending on both surface curvature and material characteristics. As a result, diffuse shading or highlight variations become correlated with surface feature variations, enhancing concavities and convexities. The first advantage of such an approach is that it produces satisfying results with any kind of material for direct and global illumination: we demonstrate results obtained with Phong and Ashikhmin-Shirley BRDFs, cartoon shading, sub-Lambertian materials, and perfectly reflective or refractive objects. Another advantage is that there is no restriction on the choice of lighting environment: it works with a single light, area lights, and interreflections. Third, it may be adapted to enhance surface shape through the use of precomputed radiance data such as ambient occlusion, prefiltered environment maps or lit spheres. Finally, our approach works in real time on modern graphics hardware, making it suitable for any interactive 3D visualization.
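One way to picture the mechanism (a deliberately simplified sketch of our own; the published scaling function is more elaborate) is a per-pixel multiplier on reflected radiance that exaggerates shading in correlation with signed curvature:

```python
# Simplified per-pixel radiance scaling (illustrative only; the paper's scaling
# function of curvature and reflectance is more sophisticated).
import numpy as np

def radiance_scaling(radiance, curvature, alpha=0.5):
    """radiance: (H, W) reflected intensity in [0, 1];
    curvature: (H, W) signed screen-space curvature, >0 convex, <0 concave."""
    # Brighten already-bright pixels on convexities and darken them on concavities
    # (and the opposite for dark pixels), so shading gradients follow surface features.
    kappa = np.tanh(curvature)                     # squash curvature to [-1, 1]
    scale = 1.0 + alpha * kappa * (2.0 * radiance - 1.0)
    return np.clip(radiance * scale, 0.0, 1.0)
```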

14.
The detection of visual attention regions (VAR) has become increasingly important due to its applications in multimedia. Although many approaches exist for detecting visual attention regions, few of them consider the semantic gap between the visual attention regions and high-level semantics. In this paper, we propose a rule-based technique for the extraction of visual attention regions at the object level based on real-time clustering, so that VAR detection can be performed very efficiently. The proposed technique consists of four stages: 1) a fast segmentation technique called the real-time clustering algorithm (RTCA); 2) a refined specification of VAR known as hierarchical visual attention regions (HVAR); 3) a new algorithm, the rule-based detection algorithm (RADA), to obtain the set of HVARs in real time; and 4) a new adaptive image display module and the corresponding adaptation operations using HVAR. We also define a new background measure, which combines feature contrast and the geometric property of a region to identify the background region, and a confidence factor, which is used to extract the set of hierarchical visual attention regions. Compared with existing techniques, our approach has two advantages: 1) it detects visual attention regions at the object level, which bridges the gap between traditional visual attention regions and high-level semantics; and 2) it is efficient and easy to implement.
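A hypothetical sketch of what such a background measure might look like (the weights, names, and the specific geometric cue below are our assumptions, not the paper's definitions): combine low feature contrast with a geometric cue such as how much of the region touches the image border.

```python
# Hypothetical background measure: low contrast + strong image-border contact.
# Weights and the specific cues are illustrative assumptions, not the paper's formula.
import numpy as np

def background_measure(region_mask, contrast, w_contrast=0.6, w_border=0.4):
    """region_mask: (H, W) boolean mask of a segmented region;
    contrast: feature contrast of the region vs. its surround, normalized to [0, 1]."""
    border = np.zeros_like(region_mask)
    border[0, :] = border[-1, :] = border[:, 0] = border[:, -1] = True
    # Fraction of the image border covered by the region.
    border_contact = (region_mask & border).sum() / border.sum()
    return w_contrast * (1.0 - contrast) + w_border * border_contact
```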

15.
16.
This paper presents an interactive technique for the dense texture-based visualization of unsteady 3D flow, taking into account issues of computational efficiency and visual perception. High efficiency is achieved by a 3D graphics processing unit (GPU)-based texture advection mechanism that implements logical 3D grid structures in physical memory in the form of 2D textures. This approach results in fast read and write access to physical memory, independent of GPU architecture. Slice-based direct volume rendering is used for the final display. We investigate two alternative methods for the volumetric illumination of the result of texture advection: first, gradient-based illumination that employs real-time computation of gradients, and, second, line-based lighting based on illumination in codimension 2. In addition to the Phong model, perception-guided rendering methods are considered, such as cool/warm shading, halo rendering, or color-based depth cueing. The problems of clutter and occlusion are addressed by supporting a volumetric importance function that enhances features of the flow and reduces visual complexity in less interesting regions. GPU implementation aspects, performance measurements, and a discussion of results are included to demonstrate our visualization approach.
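For reference, the cool/warm shading mentioned above is typically computed along the lines of the Gooch model (a standard formulation, sketched here with illustrative constants rather than the paper's settings):

```python
# Sketch: cool/warm (Gooch-style) shading. The lighting term blends between a cool
# and a warm color depending on how much the normal faces the light; constants are
# illustrative defaults.
import numpy as np

def cool_warm_shade(normal, light_dir, base_color,
                    cool=np.array([0.0, 0.0, 0.55]), warm=np.array([0.3, 0.3, 0.0]),
                    alpha=0.25, beta=0.5):
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    t = (1.0 + np.dot(n, l)) / 2.0                 # 0 = facing away, 1 = facing light
    k_cool = cool + alpha * base_color
    k_warm = warm + beta * base_color
    return np.clip(t * k_warm + (1.0 - t) * k_cool, 0.0, 1.0)

print(cool_warm_shade(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, 1.0]),
                      base_color=np.array([0.8, 0.2, 0.2])))
```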

17.
Motivated by studies of Gestalt principles, this paper describes a novel approach to the adaptive selection of visual features for trademark retrieval. We consider five kinds of visual saliency: symmetry, continuity, proximity, parallelism and closure. The first saliency is based on Zernike moments, while the others are modeled by geometric elements extracted illusively as a whole from a trademark. Given a query trademark, we adaptively determine the features appropriate for retrieval by investigating its visual saliencies. We show that in most cases either geometric or symmetric features can give sufficiently good accuracy. To measure the similarity of geometric elements, we propose a maximum weighted bipartite graph (WBG) matching algorithm under transformation sets, which is found to be both effective and efficient for retrieval.
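For context, maximum-weight bipartite matching itself can be computed with a standard assignment solver (a generic sketch, separate from the paper's transformation-set machinery):

```python
# Sketch: maximum-weight bipartite matching between two sets of geometric elements
# using the Hungarian algorithm (generic; the paper additionally searches over
# transformation sets before scoring).
import numpy as np
from scipy.optimize import linear_sum_assignment

def max_weight_matching(similarity):
    """similarity: (n, m) matrix of pairwise element similarities."""
    rows, cols = linear_sum_assignment(-similarity)   # negate: the solver minimizes cost
    return list(zip(rows, cols)), similarity[rows, cols].sum()

sim = np.array([[0.9, 0.1, 0.3],
                [0.2, 0.8, 0.4],
                [0.1, 0.3, 0.7]])
pairs, score = max_weight_matching(sim)
print(pairs, score)   # [(0, 0), (1, 1), (2, 2)] 2.4
```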

18.
A sound mathematical foundation is critical for designing and modeling complicated, sophisticated systems such as cyberworlds. To provide one, two important properties, the homotopy lifting property (HLP) and the homotopy extension property (HEP), are applied for designing and modeling a system in a bottom-up and a top-down way, respectively. In this paper, an enterprise system and a real-time embedded system are considered as important socially emerging cases of cyberworlds; π-calculus processes for describing their behaviors formally, a Petri net for explaining process interactions, and XMOS XC programs are modeled and designed with our approach. The spaces in both properties are specified by the incrementally modular abstraction hierarchy, climbing down from the most abstract homotopy level to the most specific view level while keeping invariants such as homotopy equivalence and topological equivalence.
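For reference, the homotopy lifting property mentioned above has the standard textbook formulation (stated here in our notation): a map $p\colon E \to B$ has the HLP with respect to a space $X$ if, for every homotopy $H\colon X \times [0,1] \to B$ and every map $\tilde h_0\colon X \to E$ with $p \circ \tilde h_0 = H(\cdot, 0)$, there exists a homotopy $\tilde H\colon X \times [0,1] \to E$ such that

$$p \circ \tilde H = H \quad\text{and}\quad \tilde H(\cdot, 0) = \tilde h_0.$$

The HEP is the dual notion, concerning the extension of homotopies along a cofibration $A \hookrightarrow X$.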

19.
Shape from shading (SfS) and stereo are two fundamentally different strategies for image-based 3-D reconstruction. While approaches for SfS infer depth solely from pixel intensities, methods for stereo are based on a matching process that establishes correspondences across images. This difference in approaching the reconstruction problem yields complementary advantages that are worth combining. So far, however, most “joint” approaches are based on an initial stereo mesh that is subsequently refined using shading information. In this paper we follow a completely different approach. We propose a joint variational method that combines both cues within a single minimisation framework. To this end, we fuse a Lambertian SfS approach with a robust stereo model and supplement the resulting energy functional with a detail-preserving anisotropic second-order smoothness term. Moreover, we extend the resulting model in such a way that it jointly estimates depth, albedo and illumination. This in turn makes the approach applicable to objects with non-uniform albedo as well as to scenes with unknown illumination. Experiments on synthetic and real-world images demonstrate the benefits of our combined approach: they not only show that our method is capable of generating very detailed reconstructions, but also that joint approaches are feasible in practice.
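Schematically (our own notation and simplification, not the paper's exact functional), such a joint variational model minimizes an energy of the form

$$E(z, \rho, \mathbf{l}) = \int_\Omega \Big( \lambda_{\text{st}}\, \Psi\!\big(I_2(x + d(z)) - I_1(x)\big) \;+\; \lambda_{\text{sfs}}\, \Psi\!\big(\rho(x)\,\max\{0, \mathbf{n}(z)\cdot\mathbf{l}\} - I_1(x)\big) \;+\; \alpha\, S(z) \Big)\, dx,$$

where $z$ is the depth, $\rho$ the albedo, $\mathbf{l}$ the illumination direction, $d(z)$ the stereo displacement induced by the depth, $\mathbf{n}(z)$ the surface normal, $\Psi$ a robust penalty, and $S$ a second-order smoothness term; all three unknowns are estimated jointly.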

20.
In this paper, we deal with the problem of automatically synthesizing “good” neighborhoods for a specific class of problems, namely constrained cardinality-minimization problems. Exploiting the peculiarity of the objective function of such problems, we develop automatic ejection chain moves that define neighborhood structures to be explored with a black-box solver. In particular, starting from a formulation of a cardinality-minimization problem and a feasible solution, our procedure automatically detects the “entities” involved in the problem and learns the strength of the relationships among them. This information is then used to define the characteristics of our moves, which consist in ejecting one entity at a time from the solution. If such a move results in an infeasible solution, feasibility is recovered by performing an additional step based on the solution of an auxiliary problem. The computational results show that, when assessed on four well-known constrained cardinality-minimization problems, our approach outperforms both a black-box mixed integer programming solver and a state-of-the-art model-based neighborhood search procedure with respect to both solution quality and computing times.
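As a rough illustration of the move structure (a generic sketch under our own simplifying assumptions; the paper recovers feasibility by solving an auxiliary problem rather than with the greedy repair used here): eject one selected entity, repair feasibility if needed, and accept the move only if it lowers the cardinality.

```python
# Generic sketch of an eject-and-repair move for a cardinality-minimization problem:
# the objective is simply the number of selected entities. The greedy repair below is
# a simplifying assumption; the paper recovers feasibility via an auxiliary problem.
def eject_and_repair(solution, candidates, is_feasible):
    """solution: set of selected entities; candidates: all entities;
    is_feasible: callable(set) -> bool. Returns an improved solution if one is found."""
    best = set(solution)
    for entity in list(solution):
        trial = set(solution) - {entity}                 # eject one entity
        if not is_feasible(trial):
            # Greedy repair: re-add other entities until feasibility is recovered.
            for other in candidates:
                if other not in trial and other != entity:
                    trial.add(other)
                    if is_feasible(trial):
                        break
        if is_feasible(trial) and len(trial) < len(best):
            best = trial                                  # fewer entities = better
    return best
```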
