20 similar documents found; search took 15 ms
2.
Multimedia Tools and Applications - Many recent works introduce convolutional neural networks (CNNs) to steganalysis and surpass conventional steganalysis algorithms....
3.
We coin the term geoMobile data to emphasize datasets that exhibit geo-spatial features reflective of human behaviors. We propose and develop an EPIC framework to mine latent patterns from geoMobile data and provide meaningful interpretations: we first ‘E’xtract latent features from high-dimensional geoMobile datasets via Laplacian Eigenmaps and perform clustering in this latent feature space; we then use a state-of-the-art visualization technique to ‘P’roject these latent features into 2D space; and finally we obtain meaningful ‘I’nterpretations by ‘C’ulling cluster-specific significant feature-sets. We illustrate that the local space contraction property of our approach is superior to that of other major dimensionality reduction techniques. Using diverse real-world geoMobile datasets, we show the efficacy of our framework via three case studies.
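A minimal sketch of the E/P/I/C pipeline, assuming scikit-learn; the input matrix `X`, latent dimension, and cluster count are placeholders, and t-SNE stands in for the unspecified visualization technique:

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding, TSNE
from sklearn.cluster import KMeans

X = np.random.rand(500, 100)  # placeholder for a high-dimensional geoMobile matrix

# 'E'xtract: Laplacian Eigenmaps embedding, then cluster in the latent space
latent = SpectralEmbedding(n_components=10).fit_transform(X)
labels = KMeans(n_clusters=5, n_init=10).fit_predict(latent)

# 'P'roject: map the latent features to 2D for visualization
xy = TSNE(n_components=2).fit_transform(latent)  # scatter-plot xy, colored by labels

# 'I'nterpret by 'C'ulling: per cluster, rank features whose cluster mean
# deviates most from the global mean
global_mean = X.mean(axis=0)
for k in range(5):
    dev = np.abs(X[labels == k].mean(axis=0) - global_mean)
    print(k, np.argsort(dev)[::-1][:5])  # top-5 significant features per cluster
```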
4.
Traditional text data mining techniques are not directly applicable to image data, which contain spatial information and are characterized by high-dimensional visual features. It is not a trivial task to discover meaningful visual patterns from images, because the content variations and spatial dependence in visual data greatly challenge most existing data mining methods. This paper presents a novel approach to coping with these difficulties for mining visual collocation patterns. Specifically, the novelty of this work lies in the following new contributions: 1) a principled solution to the discovery of visual collocation patterns based on frequent itemset mining and 2) a self-supervised subspace learning method that refines the visual codebook by feeding back the discovered patterns. The experimental results show that our method can discover semantically meaningful patterns efficiently and effectively.
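To make contribution 1) concrete, a toy frequent-itemset pass over "transactions" of co-occurring visual-word IDs (pure Python; how transactions are built from spatial neighborhoods is the paper's method and is not reproduced here):

```python
from itertools import combinations
from collections import Counter

# Each transaction: the visual-word IDs co-occurring in one image region
transactions = [{1, 2, 3}, {1, 2, 4}, {1, 2, 3, 5}, {2, 3, 6}]
min_support = 3

# Count all 2-itemsets (pairs of visual words) across transactions
pair_counts = Counter(p for t in transactions
                      for p in combinations(sorted(t), 2))

# Frequent pairs are candidate visual collocation patterns
frequent = {p: c for p, c in pair_counts.items() if c >= min_support}
print(frequent)  # {(1, 2): 3, (2, 3): 3}
```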
5.
Applied Intelligence - Data acquired from sensors or other equipment often contain missing values, which makes it difficult to perform analysis based on the...
6.
In this study, we propose and compare stochastic variants of the extra-gradient alternating direction method, named the stochastic extra-gradient alternating direction method with Lagrangian function (SEGL) and the stochastic extra-gradient alternating direction method with augmented Lagrangian function (SEGAL), to minimize large-scale graph-guided optimization problems composed of two convex objective functions. A number of important applications in machine learning follow the graph-guided optimization formulation, such as linear regression, logistic regression, Lasso, structured extensions of Lasso, and structured regularized logistic regression. We conduct experiments on fused logistic regression and graph-guided regularized regression. Experimental results on several genres of datasets demonstrate that the proposed algorithm outperforms other competing algorithms, and that SEGAL performs better than SEGL in practical use.
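For intuition, a deterministic extra-gradient sketch on a graph-guided least-squares problem $\min_x \tfrac12\|Ax-b\|^2 + \lambda\|Fx\|_1$, written as a saddle problem over a dual variable clipped to $[-\lambda,\lambda]$ (numpy; the chain graph, data, and step size are assumptions, and the stochastic SEGL/SEGAL updates are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 20)), rng.normal(size=50)  # least-squares data
F = np.diff(np.eye(20), axis=0)    # incidence matrix of a chain graph
lam, eta = 0.1, 0.01
x, y = np.zeros(20), np.zeros(F.shape[0])

def grads(x, y):
    gx = A.T @ (A @ x - b) + F.T @ y   # gradient in x of the saddle objective
    gy = F @ x                         # gradient in the dual variable y
    return gx, gy

for _ in range(500):
    gx, gy = grads(x, y)
    xh = x - eta * gx                          # extrapolation step
    yh = np.clip(y + eta * gy, -lam, lam)
    gx, gy = grads(xh, yh)
    x = x - eta * gx                           # correction step using gradients
    y = np.clip(y + eta * gy, -lam, lam)       # taken at the extrapolated point
```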
7.
Multimedia Tools and Applications - Most existing learning-based fusion methods are still not fully end-to-end: they predict the decision map and recover the fused image by the refined...
8.
Data Mining and Knowledge Discovery - A fundamental problem in causal inference is treatment effect estimation in observational studies, and its key challenge is to handle the confounding...
9.
We consider the problem of learning a linear factor model. We propose a regularized form of principal component analysis (PCA) and demonstrate through experiments with synthetic and real data the superiority of resulting estimates to those produced by pre-existing factor analysis approaches. We also establish theoretical results that explain how our algorithm corrects the biases induced by conventional approaches. An important feature of our algorithm is that its computational requirements are similar to those of PCA, which enjoys wide use in large part due to its efficiency.
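The abstract does not state the exact regularizer, so here is a generic stand-in: a PCA-style factor estimate regularized by singular-value soft-thresholding, which keeps the single-SVD cost profile the abstract attributes to the method:

```python
import numpy as np

def regularized_pca(X, tau):
    """Low-rank factor estimate via singular-value soft-thresholding."""
    Xc = X - X.mean(axis=0)                # center, as in plain PCA
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)    # shrink singular values toward zero
    return (U * s_shrunk) @ Vt             # regularized low-rank reconstruction

X = np.random.randn(200, 30) @ np.random.randn(30, 30)
X_hat = regularized_pca(X, tau=5.0)
```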
10.
In content-based image retrieval (CBIR), relevant images are identified based on their similarities to query images. Most CBIR algorithms are hindered by the semantic gap between the low-level image features used for computing image similarity and the high-level semantic concepts conveyed in images. One way to reduce the semantic gap is to utilize the log data of users' feedback collected by CBIR systems over time, an approach also called “collaborative image retrieval.” In this paper, we present a novel metric learning approach, named “regularized metric learning,” for collaborative image retrieval, which learns a distance metric by exploring the correlation between low-level image features and the log data of users' relevance judgments. Compared to previous research, a regularization mechanism is used in our algorithm to effectively prevent overfitting. Meanwhile, we formulate the proposed learning algorithm as a semidefinite programming problem, which can be solved very efficiently by existing software packages and is scalable to the size of log data. An extensive set of experiments has been conducted to show that the new algorithm can substantially improve the retrieval accuracy of a baseline CBIR system using the Euclidean distance metric, even with a modest amount of log data. The experiments also indicate that the new algorithm is more effective and more efficient than two alternative algorithms that exploit log data for image retrieval.
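A hedged sketch of such a semidefinite program with cvxpy: shrink distances between pairs judged relevant in the log data, keep irrelevant pairs separated, and regularize the metric against overfitting (the pair data, margin, and trade-off weight are placeholders, not the paper's exact formulation):

```python
import numpy as np
import cvxpy as cp

d = 5
rng = np.random.default_rng(0)
relevant = [(rng.normal(size=d), rng.normal(size=d)) for _ in range(20)]
irrelevant = [(rng.normal(size=d), rng.normal(size=d)) for _ in range(20)]

M = cp.Variable((d, d), PSD=True)  # Mahalanobis matrix, constrained PSD
pull = sum(cp.quad_form(x - y, M) for x, y in relevant)      # shrink relevant pairs
push = [cp.quad_form(x - y, M) >= 1 for x, y in irrelevant]  # separate the others
reg = cp.norm(M, "fro")            # regularization against overfitting

cp.Problem(cp.Minimize(pull + 10 * reg), push).solve(solver=cp.SCS)
print(M.value)
```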
11.
We consider the following problem (and variants thereof) that has important applications in the construction and evaluation of phylogenetic trees: Two rooted unordered binary trees with the same number of leaves have to be embedded in two layers in the plane such that the leaves are aligned in two adjacent layers. Additional matching edges between the leaves give a one-to-one correspondence between pairs of leaves of the different trees. Our goal is to find two planar embeddings of the two trees (drawn without crossings) that minimize the number of crossings of the matching edges. We derive both (classical) complexity results and (parameterized) algorithms for this problem (and some variants thereof).
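For fixed planar embeddings (i.e., fixed leaf orders on the two layers), the number of matching-edge crossings equals the number of inversions in the permutation linking the two orders; a small helper with hypothetical leaf labels (the hard part, choosing the embeddings, is what the paper addresses):

```python
def matching_crossings(left_order, right_order):
    """Count crossings of matching edges between two fixed leaf orders."""
    pos = {leaf: i for i, leaf in enumerate(right_order)}
    perm = [pos[leaf] for leaf in left_order]
    # Two matching edges cross iff their endpoints form an inversion
    return sum(1 for i in range(len(perm))
                 for j in range(i + 1, len(perm))
                 if perm[i] > perm[j])

print(matching_crossings(["a", "b", "c", "d"], ["b", "a", "d", "c"]))  # 2
```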
12.
In linear image restoration, the point spread function of the degrading system is assumed known even though this information is usually not available in real applications. As a result, both blur identification and image restoration must be performed from the observed noisy blurred image. This paper presents a computationally simple iterative blind image deconvolution method which is based on non-linear adaptive filtering. The new method is applicable to minimum as well as mixed phase blurs. The noisy blurred image is assumed to be the output of a two-dimensional linear shift-invariant system with an unknown point spread function contaminated by an additive noise. The method passes the noisy blurred image through a two-dimensional finite impulse response adaptive filter whose parameters are updated by minimizing the dispersion. When convergence occurs, the adaptive filter provides an approximate inverse of the point spread function. Moreover, its output is an estimate of the unobserved true image. Experimental results are provided.
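A schematic batch version of the dispersion-minimizing update (the paper's filter adapts sample by sample; here one gradient sweep over the whole image is taken per iteration, and the kernel size, step size, and Godard-style dispersion constant are assumptions):

```python
import numpy as np
from scipy.signal import convolve2d, correlate2d

def blind_deconv(g, ksize=5, mu=1e-6, sweeps=50):
    """Estimate an approximate inverse of the unknown PSF from g alone."""
    w = np.zeros((ksize, ksize))
    w[ksize // 2, ksize // 2] = 1.0          # start from the identity filter
    R = np.mean(g ** 4) / np.mean(g ** 2)    # dispersion constant
    h = ksize // 2
    for _ in range(sweeps):
        y = convolve2d(g, w, mode="same")    # current restored-image estimate
        e = y * (y ** 2 - R)                 # dispersion error image
        full = correlate2d(e, g, mode="full")  # gradient at every filter lag
        c0, c1 = full.shape[0] // 2, full.shape[1] // 2
        grad = full[c0 - h:c0 + h + 1, c1 - h:c1 + h + 1]
        w -= mu * grad / g.size              # gradient step on the filter taps
    return w, convolve2d(g, w, mode="same")
```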
13.
Similarity functions are a fundamental component of many learning algorithms. When dealing with string or tree-structured data, measures based on the edit distance are widely used, and there exist a few methods for learning them from data. However, these methods offer no theoretical guarantee as to the generalization ability and discriminative power of the learned similarities. In this paper, we propose an approach to edit similarity learning based on loss minimization, called GESL. It is driven by the notion of (ε, γ, τ)-goodness, a theory that bridges the gap between the properties of a similarity function and its performance in classification. Using the notion of uniform stability, we derive generalization guarantees that hold for a large class of loss functions. We also provide experimental results on two real-world datasets which show that edit similarities learned with GESL induce more accurate and sparser classifiers than other (standard or learned) edit similarities.
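As background for what GESL parameterizes, the classical weighted edit distance whose per-operation costs the learned similarities generalize (a standard dynamic program; the cost function is a placeholder):

```python
def weighted_edit_distance(s, t, cost):
    """Levenshtein DP with parameterized per-operation costs.

    cost(a, b): substitution cost (0 when a == b);
    cost(a, None) / cost(None, b): deletion / insertion costs.
    """
    m, n = len(s), len(t)
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = D[i - 1][0] + cost(s[i - 1], None)
    for j in range(1, n + 1):
        D[0][j] = D[0][j - 1] + cost(None, t[j - 1])
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = min(D[i - 1][j] + cost(s[i - 1], None),          # delete
                          D[i][j - 1] + cost(None, t[j - 1]),          # insert
                          D[i - 1][j - 1] + cost(s[i - 1], t[j - 1]))  # substitute
    return D[m][n]

uniform = lambda a, b: 0.0 if a == b else 1.0
print(weighted_edit_distance("kitten", "sitting", uniform))  # 3.0
```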
14.
Multisurface proximal support vector machine via generalized eigenvalues (GEPSVM), an effective classification tool for supervised learning, seeks two nonparallel planes determined by solving two generalized eigenvalue problems (GEPs). The GEPs may lead to unstable classification performance due to matrix singularity. Proximal support vector machine using local information (LIPSVM), a variant of GEPSVM, attempts to avoid this shortcoming by adopting a formulation similar to the Maximum Margin Criterion (MMC). The solution to an LIPSVM follows directly from solving two standard eigenvalue problems. An LIPSVM can be viewed as a reduced algorithm, because it uses selectively generated points to train the classifier; a major advantage of an LIPSVM is that it is resistant to outliers. In this paper, following the geometric intuition of an LIPSVM, a novel multi-plane learning approach called Localized Twin SVM via Convex Minimization (LCTSVM) is proposed. This approach determines two nonparallel planes by solving two newly formed SVM-type problems. In addition to keeping the superior characteristics of an LIPSVM, an LCTSVM offers additional advantages: (1) it has similar or better classification capability compared to LIPSVM, TWSVM and LSTSVM; (2) each plane is generated from a quadratic programming problem (QPP) instead of the special convex difference optimization arising from an LIPSVM; (3) the solution can be reduced to solving two systems of linear equations, resulting in considerably lower computational cost; and (4) it can find the global minimum. Experiments carried out on both toy and real-world problems demonstrate the effectiveness of an LCTSVM.
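Point (3) can be illustrated in the style of least-squares twin SVMs, where each nonparallel plane comes from one regularized linear solve (a generic LSTSVM-style sketch with a small ridge term, not the paper's exact LCTSVM problems):

```python
import numpy as np

def twin_planes(A, B, c=1.0, ridge=1e-8):
    """Fit two nonparallel planes, each from a single linear system."""
    eA, eB = np.ones((len(A), 1)), np.ones((len(B), 1))
    H, G = np.hstack([A, eA]), np.hstack([B, eB])   # augment with bias column
    I = ridge * np.eye(H.shape[1])
    # Plane 1: stay close to class A, keep unit distance from class B
    z1 = np.linalg.solve(H.T @ H + c * G.T @ G + I, -c * (G.T @ eB)).ravel()
    # Plane 2: symmetric roles of the two classes
    z2 = np.linalg.solve(G.T @ G + c * H.T @ H + I, c * (H.T @ eA)).ravel()
    return (z1[:-1], z1[-1]), (z2[:-1], z2[-1])     # (w, b) per plane

rng = np.random.default_rng(0)
A = rng.normal(loc=+1, size=(40, 2))    # hypothetical class-A points
B = rng.normal(loc=-1, size=(40, 2))    # hypothetical class-B points
(w1, b1), (w2, b2) = twin_planes(A, B)  # classify new points by nearest plane
```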
15.
In this paper, a regularized correntropy criterion (RCC) for the extreme learning machine (ELM) is proposed to deal with training sets containing noise or outliers. In RCC, the Gaussian kernel function is used in place of the Euclidean norm of the mean square error (MSE) criterion. Replacing MSE with RCC enhances the anti-noise ability of ELM. Moreover, the optimal weights connecting the hidden and output layers, together with the optimal bias terms, can be obtained promptly by the half-quadratic (HQ) optimization technique in an iterative manner. Experimental results on four synthetic data sets and fourteen benchmark data sets demonstrate that the proposed method is superior to both the traditional ELM and the regularized ELM trained with the MSE criterion.
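The half-quadratic iteration for the correntropy criterion reduces to iteratively reweighted ridge regression on the ELM output weights; a minimal sketch (the hidden-layer size, kernel width sigma, and regularization constant C are assumptions):

```python
import numpy as np

def rcc_elm(X, y, n_hidden=50, sigma=1.0, C=1.0, n_iter=10, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (ELM)
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer outputs
    beta = np.linalg.lstsq(H, y, rcond=None)[0]  # MSE warm start
    for _ in range(n_iter):
        e = y - H @ beta
        a = np.exp(-e ** 2 / (2 * sigma ** 2))   # HQ weights: outliers get
        Hw = H * a[:, None]                      # exponentially down-weighted
        beta = np.linalg.solve(H.T @ Hw + np.eye(n_hidden) / C, Hw.T @ y)
    return W, b, beta
```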
16.
This paper describes CMS (constrained minimization synthesis), a fast, robust texture synthesis algorithm that creates output textures while satisfying constraints. We show that constrained texture synthesis can be posed in a principled way as an energy minimization problem that requires balancing two measures of quality: constraint satisfaction and texture seamlessness. We then present an efficient algorithm for finding good solutions to this problem using an adaptation of graph-cut energy minimization. CMS is particularly well suited to detail synthesis, the process of adding high-resolution detail to low-resolution images. It also supports the full image analogies framework, while providing superior image quality and performance. CMS is easily extended to handle multiple constraints on a single output, thus enabling novel applications that combine both user-specified and image-based control.
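In symbols (my notation, not the paper's), the balance CMS strikes can be written as a single labeling energy minimized with graph-cut moves:

$$E(x) \;=\; \sum_{p} C_p(x_p) \;+\; \lambda \sum_{(p,q)} V_{pq}(x_p, x_q),$$

where $C_p$ scores constraint satisfaction at pixel $p$, $V_{pq}$ scores texture seamlessness across neighboring pixels, and $\lambda$ trades the two measures of quality off against each other.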
17.
A regularized method for the solution of minimization problems with imperfect output data in a Hilbert space is proposed and investigated. The method is based on a continuous second-order projection method with a variable metric. Sufficient convergence conditions are given and a stopping rule for the method is constructed. Translated from Kibernetika i Sistemnyi Analiz, No. 5, pp. 150–159, September–October 2004.
19.
Two patterns are matched by putting one on top of the other and iteratively moving their individual parts until most of their corresponding parts are aligned. An energy function and a neighborhood of influence are defined for each iteration. Initially, a large neighborhood is used so that the movements coarsely align global features. The neighborhood size is gradually reduced in successive iterations so that finer and finer details are aligned. Encouraging results have been obtained when matching complex Chinese characters. Computation has been observed to increase with the square of the number of moving parts, which is quite favorable compared with other algorithms. The method was applied to the recognition of handwritten Chinese characters: after the iterative matching, a set of similarity measures is used to assess the similarity in topological features between the input and template characters. An overall recognition rate of 96.1% is achieved.
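A generic skeleton of such coarse-to-fine matching: each movable part is attracted by counterparts inside a neighborhood whose radius shrinks every iteration (the point-set representation, step size, and Gaussian weighting are my assumptions, not the paper's energy function):

```python
import numpy as np

def iterative_match(P, Q, n_iter=20, r0=1.0, decay=0.8, step=0.5):
    """Align point set P onto Q with a shrinking neighborhood of influence."""
    P = np.asarray(P, dtype=float).copy()
    Q = np.asarray(Q, dtype=float)
    r = r0
    for _ in range(n_iter):
        for i in range(len(P)):
            d = np.linalg.norm(Q - P[i], axis=1)
            w = np.exp(-(d / r) ** 2) * (d < 3 * r)  # neighborhood of influence
            if w.sum() > 0:                          # move toward the weighted
                P[i] += step * (w[:, None] * (Q - P[i])).sum(0) / w.sum()
        r *= decay   # shrink: global alignment first, fine details later
    return P
```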
20.
Graph cuts have become an increasingly important tool for solving a number of energy minimization problems in computer vision and other fields. In this paper, the graph cut problem is reformulated as an unconstrained $\ell_1$ norm minimization which can be solved effectively using interior point methods. This reformulation exposes connections between the graph cuts and other related continuous optimization problems. Eventually the problem is reduced to solving a sequence of sparse linear systems involving the Laplacian of the underlying graph. The proposed procedure exploits the structure of these linear systems in a manner that is easily amenable to parallel implementations. Experimental results obtained by applying the procedure to graphs derived from image processing problems are provided.
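The Laplacian-system structure can be seen with an IRLS stand-in for the interior point solver: rewriting $\min_x \sum_{(i,j)} w_{ij}\,|x_i - x_j|$ with terminal constraints as a sequence of weighted-Laplacian solves (scipy.sparse; the toy graph and terminals are placeholders):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def graphcut_irls(n, edges, weights, s, t, n_iter=30, eps=1e-6):
    """min_x sum w_ij |x_i - x_j| with x[s]=1, x[t]=0, via Laplacian solves."""
    x = np.zeros(n)
    x[s] = 1.0
    free = np.array([i for i in range(n) if i not in (s, t)])
    for it in range(n_iter):
        if it == 0:
            d = weights.astype(float).copy()     # first sweep: plain quadratic solve
        else:
            gaps = np.abs(x[edges[:, 0]] - x[edges[:, 1]])
            d = weights / np.maximum(gaps, eps)  # IRLS reweighting toward l1
        rows = np.concatenate([edges[:, 0], edges[:, 1]])
        cols = np.concatenate([edges[:, 1], edges[:, 0]])
        Wm = sp.coo_matrix((np.concatenate([d, d]), (rows, cols)),
                           shape=(n, n)).tocsr()
        L = (sp.diags(np.asarray(Wm.sum(axis=1)).ravel()) - Wm).tocsr()  # Laplacian
        rhs = -L[free][:, [s]].toarray().ravel()   # boundary terms: x[s]=1, x[t]=0
        x[free] = spsolve(L[free][:, free].tocsc(), rhs)
    return x   # threshold at 0.5 to read off the two sides of the cut

edges = np.array([[0, 1], [1, 2], [0, 2], [2, 3]])
print(graphcut_irls(4, edges, np.ones(4), s=0, t=3))
```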