Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
A Physically Based Virtual Color Space   (Cited by: 1; self-citations: 0; others: 1)
Traditional colorimetric systems are built on psychophysical experiments with the human visual system, and fall into two classes: experiment-based color-order systems (e.g., the MUNSELL color system) and experiment-based empirical-formula systems (e.g., the CIELAB and CIELUV systems). However, a colorimetric system based on human color vision, together with its color measurement methods, is not well suited to machine vision systems, because the color space of human colorimetry is non-uniform: the Euclidean distance between two points on the chromaticity diagram does not represent the perceived color difference, the two being nonlinearly related. We therefore propose introducing the physical models of scene illumination and object surfaces used in modern computer vision into classical colorimetry, in order to establish a new, physically based virtual color space for calibrating the color response of different color vision systems.
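
As a concrete illustration of the classical colorimetry the abstract refers to, the following Python sketch converts two sRGB colors to CIELAB (D65 white point) and computes the CIE76 color difference ΔE*ab, the standard way to quantify a perceived color difference. It is textbook colorimetry, not the physically based virtual color space proposed by the authors; the sample colors are arbitrary.

import numpy as np

# Standard sRGB (D65) colorimetry: a textbook illustration of classical CIE
# colorimetry, not the physically based virtual color space of the paper.
M_RGB_TO_XYZ = np.array([[0.4124564, 0.3575761, 0.1804375],
                         [0.2126729, 0.7151522, 0.0721750],
                         [0.0193339, 0.1191920, 0.9503041]])
WHITE_D65 = np.array([0.95047, 1.00000, 1.08883])   # reference white Xn, Yn, Zn

def srgb_to_xyz(rgb):
    rgb = np.asarray(rgb, dtype=float)
    # undo the sRGB gamma (companding) to obtain linear light
    linear = np.where(rgb <= 0.04045, rgb / 12.92, ((rgb + 0.055) / 1.055) ** 2.4)
    return M_RGB_TO_XYZ @ linear

def xyz_to_lab(xyz):
    def f(t):
        return np.where(t > (6 / 29) ** 3, np.cbrt(t), t / (3 * (6 / 29) ** 2) + 4 / 29)
    fx, fy, fz = f(xyz / WHITE_D65)
    return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])

def delta_e76(rgb1, rgb2):
    # CIE76 color difference: Euclidean distance in CIELAB
    lab1 = xyz_to_lab(srgb_to_xyz(rgb1))
    lab2 = xyz_to_lab(srgb_to_xyz(rgb2))
    return float(np.linalg.norm(lab1 - lab2))

print(delta_e76([1.0, 0.0, 0.0], [1.0, 0.1, 0.1]))   # small perceptual difference
print(delta_e76([1.0, 0.0, 0.0], [0.0, 0.0, 1.0]))   # large perceptual difference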

2.
Exploratory analysis of the chemical space is an important task in the field of cheminformatics. For example, in drug discovery research, chemists investigate sets of thousands of chemical compounds in order to identify novel yet structurally similar synthetic compounds to replace natural products. Manually exploring the chemical space inhabited by all possible molecules and chemical compounds is impractical, and therefore presents a challenge. To fill this gap, we present ChemoGraph, a novel visual analytics technique for interactively exploring related chemicals. In ChemoGraph, we formalize a chemical space as a hypergraph and apply novel machine learning models to compute related chemical compounds. It uses a database to find related compounds from a known space and a machine learning model to generate new ones, which helps enlarge the known space. Moreover, ChemoGraph highlights interactive features that support users in viewing, comparing, and organizing computationally identified related chemicals. With a drug discovery usage scenario and initial expert feedback from a case study, we demonstrate the usefulness of ChemoGraph.

3.
The increasing demand for low power consumption and high computational performance is outpacing available technological improvements in embedded systems. Approximate computing is a novel design paradigm trying to bridge this gap by leveraging the inherent error resilience of certain applications and trading in quality to achieve reductions in resource usage. Numerous approximation methods have emerged in this research field. While these methods are commonly demonstrated in isolation, their combination can increase the achieved benefits in complex systems. However, the propagation of errors throughout the system necessitates a global optimization of parameters, leading to an exponentially growing design space. Additionally, the parameterization of approximated components must consider potential cross-dependencies between them. This work proposes a systematic approach to integrate and optimally configure parameterizable approximate components in FPGA-based applications, focusing on low-level but high-bandwidth image processing pipelines. The design space is explored by a multi-objective genetic algorithm which takes parameter dependencies between different components into account. During the exploration, appropriate models are used to estimate the quality-resource trade-off for probed solutions without the need for time-consuming synthesis. We demonstrate and evaluate the effectiveness of our approach on two image processing applications that employ multiple approximations. The experimental results show that the proposed methods are able to produce a wide range of Pareto-optimal solutions, offering various choices regarding the desired quality-resource trade-off.
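
The exploration described above returns a set of Pareto-optimal quality/resource trade-offs. The sketch below shows only the core non-domination filter on hypothetical (error, resource) pairs; it stands in for, and is much simpler than, the multi-objective genetic algorithm and estimation models used in the paper.

from typing import List, Tuple

def dominates(a: Tuple[float, float], b: Tuple[float, float]) -> bool:
    # a dominates b if it is no worse in both objectives and strictly better in one
    # (both objectives are minimized: approximation error and resource usage)
    return a[0] <= b[0] and a[1] <= b[1] and (a[0] < b[0] or a[1] < b[1])

def pareto_front(points: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
    # keep every configuration that no other configuration dominates
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# hypothetical (error, resource) estimates for candidate approximation configurations
candidates = [(0.01, 950), (0.02, 700), (0.05, 400), (0.04, 800), (0.10, 350), (0.03, 650)]
print(sorted(pareto_front(candidates)))
# -> [(0.01, 950), (0.02, 700), (0.03, 650), (0.05, 400), (0.1, 350)]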

4.
Hierarchical Identification of State-Space Models for Multivariable Systems   (Cited by: 11; self-citations: 1; others: 11)
Ding Feng, Xiao Deyun. 《控制与决策》 (Control and Decision), 2005, 20(8): 848-853.
This paper studies hierarchical identification of state-space models for multivariable systems, extending the joint state and parameter identification algorithm previously proposed by the authors for scalar systems. When the states are measurable, the parameter matrices of the state-space model are identified directly by the least-squares principle. When the states are not measurable, a hierarchical identification method for state-space models is proposed based on the hierarchical identification principle, which estimates the unknown states and parameters of the system from its input-output data. The method proceeds in two steps: first, the system states are assumed known (i.e., the unknown states in the parameter estimation algorithm are replaced by their estimates) and the parameter estimates are computed recursively from the state estimates and the input-output data; then, the state estimates are computed recursively from the input-output data and the obtained parameter estimates.
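
A minimal numerical sketch of the two-step idea described above for a known-structure linear state-space model x(t+1) = A x(t) + B u(t), y(t) = C x(t): parameters are estimated by least squares from (assumed or estimated) states, and states are then recomputed from the inputs and the current parameter estimates. The dimensions, the simulated data, and the crude use of measured outputs as state estimates are illustrative; this is not the authors' recursive algorithm.

import numpy as np

rng = np.random.default_rng(0)
n, m, T = 2, 1, 200                       # state dim, input dim, samples
A_true = np.array([[0.8, 0.1], [0.0, 0.9]])
B_true = np.array([[1.0], [0.5]])
C = np.eye(n)                             # C assumed known and states fully observed here

# simulate input/output data
u = rng.normal(size=(T, m))
x = np.zeros((T + 1, n))
for t in range(T):
    x[t + 1] = A_true @ x[t] + B_true @ u[t]
y = x[:T] @ C.T + 0.01 * rng.normal(size=(T, n))

# step 1: given state estimates (here simply the noisy outputs), estimate [A B] by least squares
x_hat = y.copy()
Phi = np.hstack([x_hat[:-1], u[:-1]])     # regressors [x(t), u(t)]
Theta, *_ = np.linalg.lstsq(Phi, x_hat[1:], rcond=None)
A_hat, B_hat = Theta.T[:, :n], Theta.T[:, n:]

# step 2: given the parameter estimates, re-estimate the state trajectory from the inputs
x_re = np.zeros((T, n))
for t in range(T - 1):
    x_re[t + 1] = A_hat @ x_re[t] + B_hat @ u[t]

print("A_hat:\n", np.round(A_hat, 3))
print("B_hat:\n", np.round(B_hat, 3))
print("state re-estimation RMSE:", round(float(np.sqrt(np.mean((x_re - x[:T]) ** 2))), 4))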

5.
Image analysis algorithms are often highly parameterized and much human input is needed to optimize parameter settings. This incurs a time cost of up to several days. We analyze and characterize the conventional parameter optimization process for image analysis and formulate user requirements. With this as input, we propose a change in paradigm by optimizing parameters based on parameter sampling and interactive visual exploration. To save time and reduce memory load, users are only involved in the first step--initialization of sampling--and the last step--visual analysis of output. This helps users to more thoroughly explore the parameter space and produce higher quality results. We describe a custom sampling plug-in we developed for CellProfiler--a popular biomedical image analysis framework. Our main focus is the development of an interactive visualization technique that enables users to analyze the relationships between sampled input parameters and corresponding output. We implemented this in a prototype called Paramorama. It provides users with a visual overview of parameters and their sampled values. User-defined areas of interest are presented in a structured way that includes image-based output and a novel layout algorithm. To find optimal parameter settings, users can tag high- and low-quality results to refine their search. We include two case studies to illustrate the utility of this approach.
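
A minimal sketch of the sampling step described above: a small grid over two hypothetical parameters of an image-analysis routine is enumerated, the routine is run once per combination, and a simple quality score is stored for later visual inspection. It is a generic stand-in for the CellProfiler plug-in; the parameter names and the toy "image" are made up.

import itertools

def analyze_image(image, threshold, min_object_size):
    # placeholder for one stage of an image-analysis pipeline; a real plug-in
    # would call a segmentation routine with these (hypothetical) parameters
    return [px for px in image if px > threshold][: max(0, 256 - min_object_size)]

def quality(result):
    # placeholder quality score, e.g. agreement with a reference segmentation
    return len(result)

image = list(range(256))                       # toy stand-in for an image
grid = {"threshold": [32, 64, 128, 192],       # hypothetical parameter ranges
        "min_object_size": [5, 20, 50]}

samples = []
for values in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    samples.append((params, quality(analyze_image(image, **params))))

# the (parameter combination, output quality) pairs are the raw material that an
# interactive visualization like the one described above lets the user explore
for params, q in sorted(samples, key=lambda s: -s[1])[:3]:
    print(params, q)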

6.
Constructing and analyzing large biological pathway models is a significant challenge. We propose a general approach that exploits the structure of a pathway to identify pathway components, constructs the component models, and finally assembles the component models into a global pathway model. Specifically, we apply this approach to pathway parameter estimation, a main step in pathway model construction. A large biological pathway often involves many unknown parameters and the resulting high-dimensional search space poses a major computational difficulty. By exploiting the structure of a pathway and the distribution of available experimental data over the pathway, we decompose a pathway into components and perform parameter estimation for each component. However, some parameters may belong to multiple components. Independent parameter estimates from different components may be in conflict for such parameters. To reconcile these conflicts, we represent each component as a factor graph, a standard probabilistic graphical model. We then combine the resulting factor graphs and use a probabilistic inference technique called belief propagation to obtain the maximally likely parameter values that are globally consistent. We validate our approach on a synthetic pathway model based on the Akt-MAPK signaling pathways. The results indicate that the approach can potentially scale up to large pathway models.
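
The reconciliation step above uses belief propagation on factor graphs. The sketch below shows a much simpler stand-in for the same idea: independent per-component estimates of a shared parameter, each with an uncertainty, are combined by precision weighting into one globally consistent value (exact for independent Gaussian estimates). Component names and numbers are hypothetical.

import numpy as np

# Each pathway component produced its own estimate of the shared rate constant k,
# with an (assumed Gaussian) standard deviation reflecting how well the local
# data constrained it. Belief propagation in the paper generalizes this; here we
# use precision-weighted averaging, which is exact for independent Gaussians.
estimates = {
    "component_A": (0.42, 0.05),   # (mean, std) - hypothetical values
    "component_B": (0.47, 0.10),
    "component_C": (0.39, 0.08),
}

means = np.array([m for m, _ in estimates.values()])
precisions = np.array([1.0 / s ** 2 for _, s in estimates.values()])

k_consensus = float(np.sum(precisions * means) / np.sum(precisions))
k_std = float(np.sqrt(1.0 / np.sum(precisions)))
print(f"reconciled k = {k_consensus:.3f} +/- {k_std:.3f}")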

7.
Diorama artists produce a spectacular 3D effect in a confined space by generating depth illusions that are faithful to the ordering of the objects in a large real or imaginary scene. Indeed, cognitive scientists have discovered that depth perception is mostly affected by depth order and precedence among objects. Motivated by these findings, we employ ordinal cues to construct a model from a single image that, similarly to Dioramas, intensifies the depth perception. We demonstrate that such models are sufficient for the creation of realistic 3D visual experiences. The initial step of our technique extracts several relative depth cues that are well known to exist in the human visual system. Next, we integrate the resulting cues to create a coherent surface. We introduce wide slits in the surface, thus generalizing the concept of cardboard cutout layers. Lastly, the surface geometry and texture are extended alongside the slits, to allow small changes in the viewpoint which enriches the depth illusion.

8.
In a previous paper (Blair et al. 2001), the authors showed that the mechanism underlying Logic Programming can be extended to handle the situation where the atoms are interpreted as subsets of a given space X. The view of a logic program as a one-step consequence operator along with the concepts of supported and stable model can be transferred to such situations. In this paper, we show that we can further extend this paradigm by creating a new one-step consequence operator by composing the old one-step consequence operator with a monotonic idempotent operator (miop) in the space of all subsets of X, 2^X. We call this extension set based logic programming. We show that such a set based formalism for logic programming naturally supports a variety of options. For example, if the underlying space has a topology, one can insist that the new one-step consequence operator always produces a closed set or always produces an open set. The flexibility inherent in the semantics of set based logic programs is due to both the range of natural choices available for specifying the semantics of negation, as well as the role of monotonic idempotent operators (miops) as parameters in the semantics. This leads to a natural type of polymorphism for logic programming, i.e. the same logic program can produce a variety of outcomes depending on the miop associated with the semantics. We develop a general framework for set based programming involving miops. Among the applications, we obtain integer-based representations of real continuous functions as stable models of a set based logic program.
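
A small, hedged sketch of the construction described above: interpretations are subsets of a finite space X, the one-step consequence operator fires rules whose body sets are contained in the current interpretation, and the result is post-composed with a monotonic idempotent operator (here, closure of a set of integers to the interval it spans). The rules and the miop are illustrative choices, not taken from the paper.

X = set(range(10))

# rules: (head_set, [body_sets]) -- the head is added once every body set is
# contained in the current interpretation I (a subset of X)
rules = [
    (frozenset({1, 2}), []),                      # a "fact"
    (frozenset({5}), [frozenset({1, 2})]),
    (frozenset({8, 9}), [frozenset({5})]),
]

def one_step(I):
    out = set()
    for head, bodies in rules:
        if all(body <= I for body in bodies):
            out |= head
    return out

def interval_closure(S):
    # a monotonic idempotent operator (miop) on 2^X: close S to the full integer
    # interval [min(S), max(S)]; monotone and idempotent by construction
    return set(range(min(S), max(S) + 1)) if S else set()

def composed_step(I):
    return interval_closure(one_step(I))

# iterate the composed operator from the empty interpretation until a fixed point,
# i.e. a supported model of the set based program under this particular miop
I = set()
while True:
    J = composed_step(I)
    if J == I:
        break
    I = J
print(sorted(I))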

9.
Word vector embeddings are an emerging tool for natural language processing. They have proven beneficial for a wide variety of language processing tasks. Their utility stems from the ability to encode word relationships within the vector space. Applications range from components in natural language processing systems to tools for linguistic analysis in the study of language and literature. In many of these applications, interpreting embeddings and understanding the encoded grammatical and semantic relations between words is useful, but challenging. Visualization can aid in such interpretation of embeddings. In this paper, we examine the role for visualization in working with word vector embeddings. We provide a literature survey to catalogue the range of tasks where the embeddings are employed across a broad range of applications. Based on this survey, we identify key tasks and their characteristics. Then, we present visual interactive designs that address many of these tasks. The designs integrate into an exploration and analysis environment for embeddings. Finally, we provide example use cases for them and discuss domain user feedback.
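
A small sketch of the kind of vector-space relation such visualizations are meant to expose: cosine similarity and the classic analogy offset, computed on tiny hand-made vectors (a real system would load pretrained embeddings such as word2vec or GloVe instead).

import numpy as np

# toy 4-dimensional "embeddings" chosen by hand so that the gender/royalty offsets
# line up; real systems would load pretrained vectors instead
emb = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.8, 0.1, 0.3]),
    "woman": np.array([0.1, 0.1, 0.8, 0.3]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    # solve "a is to b as c is to ?" via the vector offset b - a + c
    target = emb[b] - emb[a] + emb[c]
    candidates = [w for w in emb if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(emb[w], target))

print(analogy("man", "woman", "king"))   # expected: queen
print(cosine(emb["king"], emb["queen"]))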

10.
The identifiability of parameters in a model with known structure and no noise is the problem of accurate determination of the number of parameter space points which are solutions to the identification problem. Using the norm-coerciveness theorem, we demonstrate a result on global identifiability. We introduce the idea of a strong separator of the parameter space, which divides this space into various connected domains; in each of these domains there is one and only one solution to the problem of identification of parameters. To simplify the presentation, notations and examples of linear compartmental models are used here, but the main result (Theorem 3) is valid for all linear systems. Unfortunately, it is not always possible to use this result because the assumptions of this theorem are strong.

11.
High-level program optimizations, such as loop transformations, are critical for high performance on multi-core targets. However, complex sequences of loop transformations are often required to expose parallelism (both coarse-grain and fine-grain) and improve data locality. The polyhedral compilation framework has proved to be very effective at representing these complex sequences and restructuring compute-intensive applications, seamlessly handling perfectly and imperfectly nested loops. It models arbitrarily complex sequences of loop transformations in a unified mathematical framework, dramatically increasing the expressiveness (and expected effectiveness) of the loop optimization stage. Nevertheless, identifying the most effective loop transformations remains a major challenge: current state-of-the-art heuristics in polyhedral frameworks simply fail to expose good performance over a wide range of numerical applications. Their lack of effectiveness is mainly due to simplistic performance models that do not reflect the complexity of today's processors (CPU, cache behavior, etc.). We address the problem of selecting the best polyhedral optimizations with dedicated machine learning models, trained specifically on the target machine. We show that these models can quickly select high-performance optimizations with very limited iterative search. We decouple the problem of selecting good complex sequences of optimizations in two stages: (1) we narrow the set of candidate optimizations using static cost models to select the loop transformations that implement specific high-level optimizations (e.g., tiling, parallelism, etc.); (2) we predict the performance of each high-level complex optimization sequence with trained models that take as input a performance-counter characterization of the original program. Our end-to-end framework is validated using numerous benchmarks on two modern multi-core platforms. We investigate a variety of different machine learning algorithms and hardware counters, and we obtain performance improvements over production compilers ranging on average from 3.2× to 8.7×, by running not more than 6 program variants from a polyhedral optimization space.
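
A minimal sketch of stage (2) above: a regression model trained on a performance-counter characterization of programs to predict the speedup of a candidate optimization sequence. The counter names, the synthetic data, and the plain linear model are all placeholders for the trained models and the limited iterative search described in the paper.

import numpy as np

rng = np.random.default_rng(1)

# hypothetical performance-counter features for a set of training programs:
# [cache_miss_rate, branch_miss_rate, flops_per_byte, vectorization_ratio]
X_train = rng.uniform(size=(40, 4))
# hypothetical measured speedups of one fixed polyhedral optimization sequence
# on those programs (synthetic ground truth plus noise)
speedup = 1.0 + 6.0 * X_train[:, 2] - 2.0 * X_train[:, 0] + 0.2 * rng.normal(size=40)

# fit the smallest possible predictor: linear least squares with a bias term
# (the paper evaluates several machine learning models and counter sets)
A = np.hstack([X_train, np.ones((len(X_train), 1))])
coef, *_ = np.linalg.lstsq(A, speedup, rcond=None)

def predict_speedup(counters):
    return float(np.append(counters, 1.0) @ coef)

# predicted speedup for a new, unseen program; in the full framework such
# predictions rank candidate optimization sequences so that only a handful of
# program variants need to be compiled and run
new_program_counters = np.array([0.2, 0.1, 0.7, 0.5])
print(round(predict_speedup(new_program_counters), 2))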

12.
We present two definitions of space complexity for Schönhage's pointer machine (PM): a uniform measure, mass, and a logarithmic measure, capacity. We consider how each space measure affects the time and space relationships between pointer machines and the more classical models of computation. For example, we show that a Turing machine of space complexity s can be simulated by a pointer machine of mass complexity O(s/log s) in real time. This is an improvement of a result of van Emde Boas. We show that space compression is possible for pointer machines, and we show that the time and space hierarchies for pointer machines are tight. We also present a simulation of an alternating pointer machine of time complexity t by a deterministic pointer machine of mass complexity O(t/log t).

13.
An OpenGL-Based Structural Visualization System for Warheads   (Cited by: 4; self-citations: 1; others: 3)
This paper first establishes characteristic models of semi-armor-piercing warheads with three different nose shapes, which enable parametric design of the warhead. It then analyzes, in combination with OpenGL techniques, the design and implementation of a visualization system for semi-armor-piercing warheads, covering the overall framework of the system and its visual modeling techniques, with particular attention to the visual modeling of solids of revolution. Finally, numerical integration is used to compute the characteristic parameters of the semi-armor-piercing warhead, chiefly the mass, center of mass, and volume of the shell and the explosive charge. The system is implemented, and a visualization example of one warhead together with the corresponding verification model is presented.
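
A small numerical sketch of the characteristic-parameter computation mentioned above: for a solid of revolution with radius profile r(x), the disc method gives V = ∫ π r(x)² dx, and the mass and axial centre of mass follow directly; the integral is evaluated here with the composite trapezoidal rule. The profile (an elliptical nose on a cylindrical body) and the density are illustrative, not the warhead models from the paper.

import numpy as np

def profile_radius(x, nose_len=0.10, body_radius=0.08):
    # elliptical-nose-plus-cylinder radius profile r(x); x measured from the tip (metres)
    x = np.asarray(x, dtype=float)
    nose = np.sqrt(np.clip(nose_len ** 2 - (nose_len - x) ** 2, 0.0, None)) * (body_radius / nose_len)
    return np.where(x < nose_len, nose, body_radius)

def integrate_trapezoid(y, x):
    # composite trapezoidal rule, kept explicit for clarity
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

def solid_of_revolution_properties(r_func, length, density, n=2000):
    x = np.linspace(0.0, length, n)
    area = np.pi * r_func(x) ** 2                      # cross-sectional area A(x)
    volume = integrate_trapezoid(area, x)              # V = integral of A(x) dx
    mass = density * volume
    x_cm = integrate_trapezoid(x * area, x) / volume   # axial centre of mass
    return volume, mass, x_cm

rho_steel = 7850.0                                     # kg/m^3, illustrative shell density
V, m, x_cm = solid_of_revolution_properties(profile_radius, length=0.5, density=rho_steel)
print(f"volume = {V * 1e3:.2f} L, mass = {m:.1f} kg, centre of mass at x = {x_cm:.3f} m")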

14.
Recently, the Isomap procedure [10] was proposed as a new way to recover a low-dimensional parametrization of data lying on a low-dimensional submanifold in high-dimensional space. The method assumes that the submanifold, viewed as a Riemannian submanifold of the ambient high-dimensional space, is isometric to a convex subset of Euclidean space. This naturally raises the question: what datasets can reasonably be modeled by this condition? In this paper, we consider a special kind of image data: families of images generated by articulation of one or several objects in a scene—for example, images of a black disk on a white background with center placed at a range of locations. The collection of all images in such an articulation family, as the parameters of the articulation vary, makes up an articulation manifold, a submanifold of L². We study the properties of such articulation manifolds, in particular, their lack of differentiability when the images have edges. Under these conditions, we show that there exists a natural renormalization of geodesic distance which yields a well-defined metric. We exhibit a list of articulation models where the corresponding manifold equipped with this new metric is indeed isometric to a convex subset of Euclidean space. Examples include translations of a symmetric object, rotations of a closed set, articulations of a horizon, and expressions of a cartoon face. The theoretical predictions from our study are borne out by empirical experiments with published Isomap code. We also note that in the case where several components of the image articulate independently, isometry may fail; for example, with several disks in an image avoiding contact, the underlying Riemannian manifold is locally isometric to an open, connected, but not convex subset of Euclidean space. Such a situation matches the assumptions of our recently-proposed Hessian Eigenmaps procedure, but not the original Isomap procedure.
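
A small sketch of the kind of articulation family discussed above: images of a disk translated horizontally, embedded with scikit-learn's Isomap. With a single articulation parameter the recovered embedding coordinate should track the disk position closely; this only illustrates the setting and makes no attempt to reproduce the paper's renormalized-metric analysis.

import numpy as np
from sklearn.manifold import Isomap

def disk_image(cx, size=32, radius=5.0):
    # binary image of a disk centred at (cx, size/2); translating cx articulates the scene
    yy, xx = np.mgrid[0:size, 0:size]
    return ((xx - cx) ** 2 + (yy - size / 2) ** 2 <= radius ** 2).astype(float)

centers = np.linspace(8, 24, 60)                       # one articulation parameter
images = np.array([disk_image(c).ravel() for c in centers])

embedding = Isomap(n_neighbors=6, n_components=2).fit_transform(images)

# the first embedding coordinate should vary (nearly) monotonically with the true position
corr = np.corrcoef(centers, embedding[:, 0])[0, 1]
print(f"correlation between disk position and 1st Isomap coordinate: {abs(corr):.3f}")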

15.
Content fingerprinting has been widely used for protecting the copyright of on-line digital media. By aggregating the perceptual attributes of digital media into an invariant digest, content fingerprinting enables user-generated-contents (UGC) networks to identify the unauthorized distribution of copyrighted contents. In this paper, we propose an image fingerprinting algorithm based on an invariant generative model. The proposed work formulates the fingerprinting algorithm as a hierarchy of parametric models. For better generalization performance, we first train the models to learn generic visual patterns from local image structures, which is accomplished by fitting the statistical distribution of local patches. The learned models are then fine-tuned to address the robustness and discriminability requirements of content fingerprinting. Moreover, our training scheme also regularizes the norm of gradients to force the models to learn visual features that are insensitive to distortion. The learned models are cascaded with a pooling operation to form the building block of the fingerprinting algorithm. Considering the security requirement of copyright protection, we also develop a key-dependent scheme to randomize fingerprint computation. Experimental results validate that the proposed work can withstand a wide variety of distortions and achieve a higher content identification accuracy than competing algorithms.

16.
A common weathering effect is the appearance of cracks due to material fractures. Previous exemplar-based aging and weathering methods have either reused images or sought to replicate observed patterns exactly. We introduce a new approach to exemplar-based modeling that creates weathered patterns on synthetic objects by matching the statistics of fracture patterns in a photograph. We present a user study to determine which statistics are correlated to visual similarity and how they are perceived by the user. We then describe a revised physically-based fracture model capable of producing a wide range of crack patterns at interactive rates. We demonstrate how a Bayesian optimization method can determine the parameters of this model so it can produce a pattern with the same key statistics as an exemplar. Finally, we present results using our approach and various exemplars to produce a variety of fracture effects in synthetic renderings of complex environments. The speed of the fracture simulation allows interactive previews of the fractured results and its application on large scale environments.

17.
Zhang Xiaoyu. 《计算机科学》 (Computer Science), 2012, 39(7): 175-177.
To address the shortcomings of batch sampling in traditional SVM active learning, a dynamic feasible-region partitioning algorithm is proposed. Starting from the duality between the feature space and the parameter space, the essence of SVM active learning is analyzed in depth: labeling a sample in the feature space is viewed as partitioning the feasible region in the parameter space. By jointly exploiting the current classification model and the previously labeled samples, the partitioning of the feasible region is optimized dynamically to ensure that the selected samples are valuable for improving the model, finally achieving more efficient selective sampling. Experimental results show that the SVM active learning algorithm based on dynamic feasible-region partitioning significantly increases the information content of the selected samples, and can thus substantially improve classification performance under a limited labeling budget.
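
For contrast with the method above, the sketch below implements the plain uncertainty-sampling baseline for SVM active learning: at each round the unlabeled points closest to the current decision boundary are queried in a batch. It uses scikit-learn on synthetic data and does not implement the paper's dynamic feasible-region partitioning.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# synthetic two-class data: two Gaussian blobs
X = np.vstack([rng.normal(-1.5, 1.0, size=(200, 2)), rng.normal(1.5, 1.0, size=(200, 2))])
y = np.array([0] * 200 + [1] * 200)

# small initial labeled pool containing both classes
labeled = list(rng.choice(np.where(y == 0)[0], 5, replace=False)) + \
          list(rng.choice(np.where(y == 1)[0], 5, replace=False))
unlabeled = [i for i in range(len(X)) if i not in labeled]

for round_ in range(5):
    clf = SVC(kernel="linear").fit(X[labeled], y[labeled])
    # query the batch of unlabeled samples with the smallest margin |f(x)|,
    # i.e. those whose labels would most constrain the current model
    margins = np.abs(clf.decision_function(X[unlabeled]))
    batch = [unlabeled[i] for i in np.argsort(margins)[:5]]
    labeled.extend(batch)
    unlabeled = [i for i in unlabeled if i not in batch]
    print(f"round {round_}: {len(labeled)} labels, accuracy on full pool = {clf.score(X, y):.3f}")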

18.
Parametric Feature Detection   (Cited by: 7; self-citations: 2; others: 7)
Most visual features are parametric in nature, including edges, lines, corners, and junctions. We propose an algorithm to automatically construct detectors for arbitrary parametric features. To maximize robustness we use realistic multi-parameter feature models and incorporate optical and sensing effects. Each feature is represented as a densely sampled parametric manifold in a low dimensional subspace of a Hilbert space. During detection, the vector of intensity values in a window about each pixel in the image is projected into the subspace. If the projection lies sufficiently close to the feature manifold, the feature is detected and the location of the closest manifold point yields the feature parameters. The concepts of parameter reduction by normalization, dimension reduction, pattern rejection, and heuristic search are all employed to achieve the required efficiency. Detectors have been constructed for five features, namely, step edge (five parameters), roof edge (five parameters), line (six parameters), corner (five parameters), and circular disc (six parameters). The results of detailed experiments are presented which demonstrate the robustness of feature detection and the accuracy of parameter estimation.
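
A compact sketch of the detection scheme described above for a two-parameter step edge (orientation and contrast) in a small window: the densely sampled feature manifold is projected into a PCA subspace, a query window is projected likewise, and the nearest manifold point supplies both the detection decision (by distance) and the parameter estimates. The window size, parameter grids, and threshold are illustrative, and the real detectors use richer five- and six-parameter models.

import numpy as np

rng = np.random.default_rng(0)
W = 7                                               # window size (W x W)

def step_edge(theta, contrast):
    # ideal step edge through the window centre with orientation theta
    yy, xx = np.mgrid[0:W, 0:W] - (W - 1) / 2
    side = np.cos(theta) * xx + np.sin(theta) * yy
    return contrast * (side > 0).astype(float)

# densely sample the parametric feature manifold
thetas = np.linspace(0, np.pi, 36, endpoint=False)
contrasts = np.linspace(0.2, 1.0, 9)
params = [(t, c) for t in thetas for c in contrasts]
manifold = np.array([step_edge(t, c).ravel() for t, c in params])

# low-dimensional subspace via PCA (SVD) on the sampled manifold
mean = manifold.mean(axis=0)
U, S, Vt = np.linalg.svd(manifold - mean, full_matrices=False)
basis = Vt[:8]                                      # keep 8 principal directions
manifold_proj = (manifold - mean) @ basis.T

def detect(window, dist_thresh=0.5):
    q = (window.ravel() - mean) @ basis.T           # project query window into subspace
    d = np.linalg.norm(manifold_proj - q, axis=1)
    i = int(np.argmin(d))
    return (params[i] if d[i] < dist_thresh else None), float(d[i])

query = step_edge(0.6, 0.8) + 0.02 * rng.normal(size=(W, W))   # noisy step edge
print(detect(query))                    # expect parameters near (0.6, 0.8), small distance
print(detect(rng.normal(size=(W, W))))  # random texture: typically far from the manifold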

19.
Solving problems with continuous action spaces is a current research focus and a difficulty in reinforcement learning. When handling such problems, traditional reinforcement learning algorithms usually discretize the continuous action space using prior information and then solve for the optimal policy. In many practical applications, however, such prior information is not available and the algorithms degrade or even fail. To address this, a least-squares actor-critic algorithm (LSAC) is proposed: function approximators are used to represent both the value function and the policy, and the least-squares method is used to solve online and dynamically for the parameters of the approximate value function and the approximate policy, with the approximate value function serving as the critic that guides the solution of the policy parameters. The LSAC algorithm is applied to the classical cart-pole balancing and mountain car problems, both with continuous action spaces, and is compared with the Cacla (continuous actor-critic learning automaton) and eNAC (episodic natural actor-critic) algorithms. The results show that LSAC can effectively solve continuous-action-space problems and achieves better performance.

20.
This paper describes a graph-based approach to image processing, intended for use with images obtained from sensors having space variant sampling grids. The connectivity graph (CG) is presented as a fundamental framework for posing image operations in any kind of space variant sensor. Partially motivated by the observation that human vision is strongly space variant, a number of research groups have been experimenting with space variant sensors. Such systems cover wide solid angles yet maintain high acuity in their central regions. Implementation of space variant systems poses at least two outstanding problems. First, such a system must be active, in order to utilize its high acuity region; second, there are significant image processing problems introduced by the non-uniform pixel size, shape and connectivity. Familiar image processing operations such as connected components, convolution, template matching, and even image translation take on new and different forms when defined on space variant images. The present paper provides a general method for space variant image processing, based on a connectivity graph which represents the neighbor-relations in an arbitrarily structured sensor. We illustrate this approach with the following applications: (1) Connected components is reduced to its graph theoretic counterpart. We illustrate this on a logmap sensor, which possesses a difficult topology due to the branch cut associated with the complex logarithm function. (2) We show how to write local image operators in the connectivity graph that are independent of the sensor geometry. (3) We relate the connectivity graph to pyramids over irregular tessellations, and implement a local binarization operator in a 2-level pyramid. (4) Finally, we expand the connectivity graph into a structure we call a transformation graph, which represents the effects of geometric transformations in space variant image sensors. Using the transformation graph, we define an efficient algorithm for matching in the logmap images and solve the template matching problem for space variant images. Because of the very small number of pixels typical of logarithmic structured space variant arrays, the connectivity graph approach to image processing is suitable for real-time implementation, and provides a generic solution to a wide range of image processing applications with space variant sensors.
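
A minimal sketch of application (1) above: connected components reduced to its graph-theoretic counterpart on an explicit connectivity graph, so the same code applies whatever the pixel shape, size, or tessellation. The tiny graph and binarized pixel values are illustrative; a logmap sensor would simply supply a different neighbor table.

from collections import deque

# connectivity graph of an (arbitrarily structured) sensor: pixel id -> neighbor ids.
# The neighbor lists could come from a log-polar (logmap) layout just as well as
# from a regular grid; the algorithm below never assumes a rectangular tessellation.
neighbors = {
    0: [1, 3], 1: [0, 2], 2: [1, 5], 3: [0, 4],
    4: [3], 5: [2, 6], 6: [5],
}
value = {0: 1, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1, 6: 1}   # binarized pixel values

def connected_components(neighbors, value):
    # breadth-first search over foreground pixels using only the connectivity graph
    seen, components = set(), []
    for start in neighbors:
        if value[start] == 0 or start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            p = queue.popleft()
            comp.append(p)
            for q in neighbors[p]:
                if value[q] == 1 and q not in seen:
                    seen.add(q)
                    queue.append(q)
        components.append(sorted(comp))
    return components

print(connected_components(neighbors, value))   # -> [[0, 1, 3], [5, 6]]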
