Similar Articles
10 similar articles found.
1.
The operating system kernel is a foundational component of software systems, and its safety and reliability are essential to constructing highly trustworthy software. In practice, however, verification work still faces many difficulties: defining invariants for the global properties an OS kernel must satisfy, and formally describing and verifying programs that manipulate complex data structures. For the kernel's global properties, this paper defines global invariants at the code level with the function as the unit of verification, and formally verifies them in each function, thereby proving that every function preserves the kernel's global properties. For the complex data-structure programs commonly found in OS kernels, the paper extends shape-graph theory and proposes a nested shape-graph logic for formally describing such programs, together with a proof of the method's correctness. Using this approach, the kernel code for task creation and scheduling, and for message-queue creation and manipulation, is successfully verified.
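As a loose illustration of the per-function verification style described above (not the paper's actual formalization, which uses nested shape-graph logic), the following Python sketch treats a global kernel property as a predicate checked at every function boundary; all names (`ready_queue`, `task_create`, `MAX_PRIO`) are hypothetical.

```python
# Hypothetical sketch: a global kernel invariant expressed as a predicate
# and re-checked at the boundary of each function, mirroring the
# per-function verification style described in the abstract.
MAX_PRIO = 31
ready_queue = []  # list of (task_id, priority) tuples

def global_invariant():
    """Global property: every queued task has a priority in [0, MAX_PRIO]."""
    return all(0 <= prio <= MAX_PRIO for _, prio in ready_queue)

def task_create(task_id, prio):
    assert global_invariant()           # invariant holds on entry
    if not 0 <= prio <= MAX_PRIO:
        raise ValueError("invalid priority")
    ready_queue.append((task_id, prio))
    assert global_invariant()           # ...and is re-established on exit

def schedule():
    assert global_invariant()
    if not ready_queue:
        return None
    task = max(ready_queue, key=lambda t: t[1])  # pick highest priority
    ready_queue.remove(task)
    assert global_invariant()
    return task

task_create(1, 5)
task_create(2, 9)
print(schedule())  # (2, 9)
```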

2.
Kernel methods are a class of well established and successful algorithms for pattern analysis, thanks to their mathematical elegance and good performance. Numerous nonlinear extensions of pattern recognition techniques have been proposed so far based on the so-called kernel trick. The objective of this paper is twofold. First, we derive an additional kernel tool that is still missing, namely the kernel quadratic discriminant (KQD). We discuss different formulations of KQD based on the regularized kernel Mahalanobis distance, in both complete and class-related subspaces. Second, we propose suitable extensions of kernel linear and quadratic discriminants to indefinite kernels. We provide classifiers that are applicable to kernels defined by any symmetric similarity measure. This is important in practice because problem-suited proximity measures often violate the requirement of positive definiteness. As in the traditional case, KQD can be advantageous for data with unequal class spreads in the kernel-induced spaces, which cannot be well separated by a linear discriminant. We illustrate this on artificial and real data for both positive definite and indefinite kernels.
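A minimal sketch of one ingredient of KQD, the regularized kernel Mahalanobis distance, computed per class via the Woodbury identity; the RBF kernel and the regularizer `sigma` are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: squared Mahalanobis distance of a point to a class mean in the
# kernel-induced feature space, with covariance regularized as C + sigma*I.
import numpy as np

def rbf(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_mahalanobis(X_class, x, sigma=1e-2, kernel=rbf):
    n = len(X_class)
    K = kernel(X_class, X_class)              # n x n Gram matrix
    kx = kernel(X_class, x[None, :])[:, 0]    # kernel vector k(x_i, x)
    # Center everything in feature space.
    kx_c = kx - K.mean(1) - kx.mean() + K.mean()
    norm2 = (kernel(x[None, :], x[None, :])[0, 0]
             - 2 * kx.mean() + K.mean())      # ||phi(x) - mu||^2
    Kc = K - K.mean(0) - K.mean(1)[:, None] + K.mean()  # centered Gram
    # Woodbury: (sigma*I + C)^{-1} reduced to an n x n solve.
    inner = kx_c @ np.linalg.solve(n * sigma * np.eye(n) + Kc, kx_c)
    return (norm2 - inner) / sigma

# A KQD-style rule assigns x to the class with the smallest distance.
rng = np.random.default_rng(0)
A = rng.normal(0, 1, (40, 2)); B = rng.normal(3, 2, (40, 2))
x = np.array([2.5, 2.5])
print(min(("A", kernel_mahalanobis(A, x)),
          ("B", kernel_mahalanobis(B, x)), key=lambda t: t[1])[0])
```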

3.
The classical redistribution problem aims at optimally scheduling communications when reshuffling from an initial data distribution to a target data distribution. The target distribution is usually chosen to optimize some objective for the algorithmic kernel under study (good computational balance, or low communication volume or cost), and therefore to provide high efficiency for that kernel. However, the distribution minimizing the target objective is not unique. This leads to generalizing the redistribution problem as follows: find a re-mapping of data items onto processors such that the data redistribution cost is minimal and the operation remains as efficient. This paper studies the complexity of this generalized problem. We compute optimal solutions and evaluate, through simulations, their gain over classical redistribution. We also show the NP-hardness of the problem of finding the optimal data partition and processor permutation (defined by new subsets) that minimize the cost of redistribution followed by a simple computational kernel. Finally, experimental validation of the new redistribution algorithms is conducted on a multicore cluster, for both a 1D-stencil kernel and a more compute-intensive dense linear algebra routine.
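A brute-force sketch of the generalized problem, assuming the redistribution cost is simply the number of items that move: the target distribution is fixed only up to a renaming of processors, so we search over permutations for the cheapest relabeling. The paper studies the complexity of (and algorithms for) this problem; exhaustive search is only feasible for a handful of processors.

```python
# Sketch: pick the processor relabeling of the target distribution that
# minimizes the number of data items that actually have to move.
from itertools import permutations

def redistribution_cost(initial, target, n_procs):
    """initial, target: lists mapping item index -> processor id."""
    best_cost, best_perm = None, None
    for perm in permutations(range(n_procs)):      # relabel target procs
        cost = sum(1 for i, t in zip(initial, target) if i != perm[t])
        if best_cost is None or cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_cost, best_perm

initial = [0, 0, 1, 1, 2, 2]   # who holds each of 6 items now
target  = [1, 1, 2, 2, 0, 0]   # desired layout, up to renaming
print(redistribution_cost(initial, target, 3))  # (0, (2, 0, 1)): no moves needed
```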

4.
Kernel algorithms for large data sets are now an active research area, motivated by the many real-world problems producing very large numbers of data points. However, standard kernel methods scale poorly with the size of the data set. We propose a mathematically motivated approach to sparse function estimation that exploits the uniform continuity properties of functions in continuous reproducing kernel Hilbert spaces (RKHS) defined on compact domains. Using these properties, a similarity measure on data points is defined that allows the data to be pre-clustered. Unlike previous clustering-based methods for sparse function estimation, the proposed approach is supervised and relies on well-defined mathematical concepts. The cluster centres are used to form a sparse RKHS subspace within which the function is estimated. Greedy pre-clustering algorithms and a sparse model based on pre-clustering and committee machine techniques are presented. Results demonstrate the application of the proposed algorithms to function approximation problems.
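A minimal sketch of the idea under stated assumptions: pre-cluster greedily in the kernel-induced metric, then fit by kernel ridge regression restricted to the subspace spanned by the centres. The values of `eps` and `lam`, the RBF kernel, and the simple greedy rule are illustrative choices, not the paper's algorithms.

```python
# Sketch: greedy pre-clustering in the RKHS metric, then a sparse fit
# in the subspace spanned by the cluster centres.
import numpy as np

def rbf(A, B, gamma=10.0):
    return np.exp(-gamma * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def greedy_centres(X, eps, kernel=rbf):
    """Select centre indices so every point lies within eps of a centre
    in the feature-space metric d(x,c)^2 = k(x,x) - 2k(x,c) + k(c,c)."""
    centres = []
    for i in range(len(X)):
        if centres:
            kc = kernel(X[i][None, :], X[centres])[0]
            d2 = 2.0 - 2.0 * kc      # k(x,x) = k(c,c) = 1 for the RBF kernel
            if d2.min() <= eps ** 2:
                continue
        centres.append(i)
    return centres

def sparse_fit(X, y, centres, lam=1e-3, kernel=rbf):
    """Kernel ridge regression restricted to the span of the centres."""
    Kcc = kernel(X[centres], X[centres])
    Kxc = kernel(X, X[centres])
    alpha = np.linalg.solve(Kxc.T @ Kxc + lam * Kcc, Kxc.T @ y)
    return lambda Z: kernel(Z, X[centres]) @ alpha

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (200, 1))
y = np.sin(2 * np.pi * X[:, 0])
C = greedy_centres(X, eps=0.3)
f = sparse_fit(X, y, C)
print(len(C), float(np.abs(f(X) - y).mean()))  # few centres, small error
```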

5.
The tensor kernel has been used across the machine learning literature for a number of purposes and applications, owing to its ability to incorporate samples from multiple sources into a joint kernel-defined feature space. Despite these uses, no attempts have been made to investigate the resulting tensor weight with respect to the contribution of the individual tensor sources. Motivated by the increasing availability of neuroscience data, specifically for two-source analyses, we propose a novel approach for decomposing the resulting tensor weight into its two components without accessing the feature space. We demonstrate our method and give experimental results on paired fMRI image-stimuli data.
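A minimal sketch of a tensor (product) kernel for two paired sources: the joint Gram matrix is the elementwise product of the per-source Gram matrices, which corresponds to a tensor-product feature space. The weight decomposition itself, the paper's contribution, is not reproduced here; data and names are synthetic.

```python
# Sketch: build a tensor kernel Gram matrix from two paired sources and
# train a simple kernel ridge classifier on it.
import numpy as np

def linear_gram(X):
    return X @ X.T

rng = np.random.default_rng(2)
X_img  = rng.normal(size=(50, 20))   # e.g. image features (hypothetical)
X_stim = rng.normal(size=(50, 8))    # e.g. stimulus features (hypothetical)
y = np.sign(rng.normal(size=50))

# Elementwise product of per-source Grams = Gram of the tensor kernel.
K = linear_gram(X_img) * linear_gram(X_stim)

alpha = np.linalg.solve(K + 1e-1 * np.eye(len(K)), y)
scores = K @ alpha
print("train accuracy:", (np.sign(scores) == y).mean())
```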

6.
Elmwood is an object-oriented, multiprocessor operating system designed and implemented during a graduate seminar. It consists of a minimal kernel and a collection of user-implemented services. The kernel provides two major abstractions: objects, which consist of code and data, and processes, which represent asynchronous activity. Objects, like programs, are passive. To operate on an abstraction or to request a service, processes invoke an entry procedure defined by the corresponding object. Objects implement their own protection and synchronization policies using minimal kernel mechanisms. We describe the Elmwood kernel interface, an implementation on the BBN Butterfly parallel processor, and our experiences in developing a multiprocessor operating system under rigid time constraints. These experiences illustrate several general lessons regarding kernel design and trade-offs for implementation expedience.
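An illustrative analogy in Python, not Elmwood code: a passive object exposing entry procedures and enforcing its own synchronization policy with a primitive lock, with threads standing in for Elmwood processes. The class and method names are hypothetical.

```python
# Sketch: a passive object (data plus entry procedures) that chooses its
# own synchronization policy, in the spirit of the kernel/object split
# described above.
import threading

class MailboxObject:
    def __init__(self):
        self._lock = threading.Lock()   # minimal "kernel" mechanism
        self._messages = []

    def put(self, msg):                 # entry procedure
        with self._lock:
            self._messages.append(msg)

    def get(self):                      # entry procedure
        with self._lock:
            return self._messages.pop(0) if self._messages else None

box = MailboxObject()
threads = [threading.Thread(target=box.put, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(box.get())
```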

7.
Graph-based representations have proved powerful in computer vision. The challenge that arises with large amounts of graph data is the computational burden of edit distance computation. Graph kernels can be used to formulate efficient algorithms for dealing with high-dimensional data, and have proved an elegant way to overcome this computational bottleneck. In this paper, we investigate whether the Jensen-Shannon divergence can be used as a means of establishing a graph kernel. The Jensen-Shannon kernel is a nonextensive information-theoretic kernel, defined using the entropy and mutual information computed from probability distributions over the structures being compared. To establish a Jensen-Shannon graph kernel, we explore two different approaches. The first is based on the von Neumann entropy associated with a graph. The second uses the Shannon entropy associated with the probability state vector of a steady-state random walk on a graph. We compare the two resulting graph kernels on the problem of graph clustering, using kernel principal component analysis (kPCA) to embed graphs into a feature space. Experimental results show that the method gives good classification results on graphs extracted both from an object recognition database and from an application in bioinformatics.
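A minimal sketch of a Jensen-Shannon kernel built from the steady-state random-walk distribution (node probability proportional to degree). Aligning the two distributions by sorting and zero-padding, and the kernel form `log(2) - JSD`, are simplifying assumptions; the paper's construction is more involved.

```python
# Sketch: Jensen-Shannon divergence between the degree-based steady-state
# walk distributions of two graphs, turned into a similarity.
import numpy as np

def walk_distribution(adj):
    deg = adj.sum(1)
    return np.sort(deg / deg.sum())[::-1]   # sorted for crude alignment

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def js_kernel(adj1, adj2):
    p, q = walk_distribution(adj1), walk_distribution(adj2)
    n = max(len(p), len(q))
    p = np.pad(p, (0, n - len(p))); q = np.pad(q, (0, n - len(q)))
    jsd = entropy((p + q) / 2) - (entropy(p) + entropy(q)) / 2
    return np.log(2) - jsd    # largest when the two distributions agree

# Two small graphs: a triangle and a path on three nodes.
tri  = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
print(js_kernel(tri, tri), js_kernel(tri, path))
```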

8.
Efficient SVM Regression Training with SMO
The sequential minimal optimization algorithm (SMO) has been shown to be an effective method for training support vector machines (SVMs) on classification tasks defined on sparse data sets. SMO differs from most SVM algorithms in that it does not require a quadratic programming solver. In this work, we generalize SMO so that it can handle regression problems. However, one problem with SMO is that its rate of convergence slows down dramatically when data is non-sparse and when there are many support vectors in the solution, as is often the case in regression, because kernel function evaluations tend to dominate the runtime in this case. Moreover, caching kernel function outputs can easily degrade SMO's performance even more because SMO tends to access kernel function outputs in an unstructured manner. We address these problems with several modifications that enable caching to be effectively used with SMO. For regression problems, our modifications improve convergence time by over an order of magnitude.
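A minimal sketch of the kind of kernel cache such modifications rely on, under assumptions: whole rows of the kernel matrix are cached and evicted least-recently-used, so repeated access to the same support vectors avoids re-evaluating the kernel. The row granularity and LRU policy are illustrative, not Platt's exact scheme.

```python
# Sketch: an LRU cache of kernel matrix rows, counting how many kernel
# evaluations are saved when the same rows are accessed repeatedly.
import numpy as np
from collections import OrderedDict

class KernelRowCache:
    def __init__(self, X, kernel, max_rows=64):
        self.X, self.kernel, self.max_rows = X, kernel, max_rows
        self.rows = OrderedDict()           # index -> cached kernel row
        self.evaluations = 0

    def row(self, i):
        if i in self.rows:
            self.rows.move_to_end(i)        # mark as most recently used
            return self.rows[i]
        self.evaluations += len(self.X)
        r = self.kernel(self.X[i], self.X)  # compute k(x_i, x_j) for all j
        self.rows[i] = r
        if len(self.rows) > self.max_rows:
            self.rows.popitem(last=False)   # evict least recently used row
        return r

rbf = lambda x, X: np.exp(-((X - x) ** 2).sum(1))
X = np.random.default_rng(3).normal(size=(500, 5))
cache = KernelRowCache(X, rbf, max_rows=32)
for i in [1, 2, 1, 1, 3, 2]:                # SMO-like repeated row accesses
    cache.row(i)
print("kernel evaluations:", cache.evaluations)  # 1500, not 3000
```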

9.
Gaussian mean-shift is an EM algorithm
The mean-shift algorithm, based on ideas proposed by Fukunaga and Hostetler, is a hill-climbing algorithm on the density defined by a finite mixture or a kernel density estimate. Mean-shift can be used as a nonparametric clustering method and has attracted recent attention in computer vision applications such as image segmentation or tracking. We show that, when the kernel is Gaussian, mean-shift is an expectation-maximization (EM) algorithm and, when the kernel is non-Gaussian, mean-shift is a generalized EM algorithm. This implies that mean-shift converges from almost any starting point and that, in general, its convergence is of linear order. For Gaussian mean-shift, we show: 1) the rate of linear convergence approaches 0 (superlinear convergence) for very narrow or very wide kernels, but is often close to 1 (thus, extremely slow) for intermediate widths, and exactly 1 (sublinear convergence) for widths at which modes merge; 2) the iterates approach the mode along the local principal component of the data points, from the inside of the convex hull of the data points; and 3) the convergence domains are nonconvex, can be disconnected, and show fractal behavior. We suggest ways of accelerating mean-shift based on the EM interpretation.
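A minimal sketch of Gaussian mean-shift that makes the EM reading visible: each iterate is the Gaussian-weighted mean of the data (the weights play the role of E-step responsibilities, the weighted mean is the M-step). The bandwidth `h` and the synthetic data are assumptions.

```python
# Sketch: Gaussian mean-shift as a fixed-point iteration of weighted means.
import numpy as np

def gaussian_mean_shift(x, X, h=0.5, tol=1e-8, max_iter=1000):
    for _ in range(max_iter):
        w = np.exp(-((X - x) ** 2).sum(1) / (2 * h ** 2))  # E-step weights
        x_new = (w[:, None] * X).sum(0) / w.sum()          # M-step: weighted mean
        if np.linalg.norm(x_new - x) < tol:                # linear-rate convergence
            break
        x = x_new
    return x

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
print(gaussian_mean_shift(np.array([2.0, 2.0]), X))  # climbs to the mode near (3, 3)
```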

10.
徐鲲鹏, 陈黎飞, 孙浩军, 王备战. 软件学报 (Journal of Software), 2020, 31(11): 3492-3505.
Most existing subspace clustering methods for categorical data assume that features are mutually independent and ignore the linear or nonlinear correlations among attributes. This paper proposes a kernel subspace clustering method for categorical data. First, kernel functions originally designed for continuous data are used to project categorical data into a kernel space, and a feature-weighted similarity measure for categorical data is defined in that space. Second, based on this measure, an objective function for kernel subspace clustering of categorical data is derived, together with an efficient optimization procedure for solving it. Finally, a kernel subspace clustering algorithm for categorical data is defined. The algorithm not only accounts for relationships among attributes in a nonlinear space, but also assigns each attribute a feature weight measuring its relevance to each cluster during clustering, thereby achieving embedded feature selection for categorical attributes. A clustering validity index is also defined to evaluate the quality of clustering results on categorical data. Experiments on synthetic and real data sets show that, compared with existing subspace clustering algorithms, the kernel subspace clustering algorithm can discover nonlinear relationships among categorical attributes and effectively improves the quality of the clustering results.
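A loose sketch of the feature-weighting idea only: a k-modes-style loop over categorical data where each attribute's weight reflects how well it agrees with the current clusters. It uses a plain matching measure rather than the paper's kernel-space similarity, and the weight update is a heuristic stand-in; all parameter choices are illustrative.

```python
# Sketch: feature-weighted k-modes for categorical data with an embedded
# attribute-weight update.
import numpy as np

def column_mode(col):
    vals, counts = np.unique(col, return_counts=True)
    return vals[counts.argmax()]

def weighted_kmodes(X, k, n_iter=20, seed=0):
    n, d = X.shape
    rng = np.random.default_rng(seed)
    modes = X[rng.choice(n, k, replace=False)].copy()
    w = np.full(d, 1.0 / d)                            # attribute weights
    for _ in range(n_iter):
        mismatch = X[:, None, :] != modes[None, :, :]  # n x k x d
        labels = (mismatch * w).sum(-1).argmin(1)      # weighted assignment
        for c in range(k):
            if (labels == c).any():
                modes[c] = [column_mode(col) for col in X[labels == c].T]
        # attributes that agree with their cluster mode get more weight
        agree = 1.0 - (X != modes[labels]).mean(0)
        w = agree / agree.sum()
    return labels, w

X = np.array([["a", "x"], ["a", "y"], ["a", "x"],
              ["b", "x"], ["b", "y"], ["b", "y"]])
labels, w = weighted_kmodes(X, k=2)
print(labels, w)
```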
