Found 20 similar documents; search time: 10 ms
1.
For motion compensated de-interlace, the accuracy and reliability of the motion vectors have a significant impact on the performance
of the motion compensated interpolation. In order to improve the robustness of the motion vectors, a novel motion estimation algorithm
with center-biased diamond search and its parallel VLSI architecture are proposed in this paper. Experiments show that it
works better than conventional motion estimation algorithms in terms of motion compensation error and robustness, and its
architecture overcomes the irregular data flow and achieves high efficiency. It also efficiently reuses data and reduces the
control overhead. It is therefore highly suitable for HDTV applications.
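The abstract does not give the exact search pattern or the VLSI data path; as a sketch of the general technique, a center-biased diamond search for block motion estimation can be written as follows (block size, the two diamond patterns and the SAD cost are standard choices, not taken from the paper):

```python
import numpy as np

# Large and small diamond patterns; (0, 0) comes first so ties keep the center.
LDP = [(0, 0), (0, -2), (1, -1), (2, 0), (1, 1), (0, 2), (-1, 1), (-2, 0), (-1, -1)]
SDP = [(0, 0), (0, -1), (1, 0), (0, 1), (-1, 0)]

def sad(ref, cur, bx, by, dx, dy, bs):
    """Sum of absolute differences for the block at (bx, by) displaced by (dx, dy)."""
    h, w = ref.shape
    x0, y0 = bx + dx, by + dy
    if x0 < 0 or y0 < 0 or x0 + bs > w or y0 + bs > h:
        return np.inf
    a = cur[by:by + bs, bx:bx + bs].astype(np.float64)
    b = ref[y0:y0 + bs, x0:x0 + bs].astype(np.float64)
    return np.abs(a - b).sum()

def diamond_search(ref, cur, bx, by, bs=8, max_iter=32):
    """Center-biased diamond search: start at the zero motion vector, walk the
    large diamond until the center wins, then refine with the small diamond."""
    cx = cy = 0
    for _ in range(max_iter):
        best = min(LDP, key=lambda d: sad(ref, cur, bx, by, cx + d[0], cy + d[1], bs))
        if best == (0, 0):
            break
        cx, cy = cx + best[0], cy + best[1]
    best = min(SDP, key=lambda d: sad(ref, cur, bx, by, cx + d[0], cy + d[1], bs))
    return cx + best[0], cy + best[1]
```

The center bias (starting at the zero vector and preferring it on ties) reflects the observation that most blocks in de-interlaced video move little between fields.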
2.
We consider the problem of preemptive scheduling on uniformly related machines. We present a semi-online algorithm which,
if the optimal makespan is given in advance, produces an optimal schedule. Using the standard doubling technique, this yields
a 4-competitive deterministic and an e≈2.71-competitive randomized online algorithm. In addition, it matches the performance of the previously known algorithms
for the offline case, with a considerably simpler proof. Finally, we study the performance of greedy heuristics for the same
problem.
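For context, the optimal preemptive makespan on uniformly related machines, the value this semi-online algorithm assumes is given in advance, has a well-known closed form: the maximum of the total-work bound and the "k largest jobs on the k fastest machines" bounds. A minimal sketch (the function name is ours):

```python
def optimal_preemptive_makespan(jobs, speeds):
    """Optimal preemptive makespan on uniformly related machines:
    the maximum of (total work)/(total speed) and, for each k < m,
    (sum of the k largest jobs)/(sum of the k fastest speeds)."""
    jobs = sorted(jobs, reverse=True)
    speeds = sorted(speeds, reverse=True)
    m = len(speeds)
    bounds = [sum(jobs) / sum(speeds)]
    for k in range(1, m):
        bounds.append(sum(jobs[:k]) / sum(speeds[:k]))
    return max(bounds)
```

For example, jobs of sizes 3 and 1 on machines of speeds 2 and 1 give max(4/3, 3/2) = 1.5.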
3.
Fernando José Mateus da Silva Juan Manuel Sánchez Pérez Juan Antonio Gómez Pulido Miguel A. Vega Rodríguez 《Applied Intelligence》2010,32(2):164-172
The alignment and comparison of DNA, RNA and Protein sequences is one of the most common and important tasks in Bioinformatics. However, due to the size and complexity of the search space involved, the search for the best possible alignment for a set of sequences is not trivial. Genetic Algorithms have a predisposition for optimizing general combinatorial problems and therefore are serious candidates for solving multiple sequence alignment tasks. Local search optimization can be used to refine the solutions explored by Genetic Algorithms. We have designed a Genetic Algorithm which incorporates local search for this purpose: AlineaGA. We have tested AlineaGA with representative sequence sets of the globin family. We also compare the achieved results with the results provided by T-COFFEE.
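AlineaGA's encoding, operators and fitness function are not specified in the abstract; the skeleton below only illustrates the hybrid it describes, a genetic algorithm whose offspring are refined by hill-climbing local search (a memetic algorithm), on a toy bit-string objective rather than on alignments:

```python
import random

def memetic_ga(fitness, n_bits, pop_size=20, generations=60, seed=1):
    """Genetic algorithm with an embedded local search on each offspring."""
    rng = random.Random(seed)

    def local_search(ind):
        # Single-bit-flip hill climbing refines each offspring (Lamarckian style).
        best, best_f = ind[:], fitness(ind)
        improved = True
        while improved:
            improved = False
            for i in range(n_bits):
                cand = best[:]
                cand[i] ^= 1
                f = fitness(cand)
                if f > best_f:
                    best, best_f, improved = cand, f, True
        return best

    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_bits)          # point mutation
            child[i] ^= 1
            children.append(local_search(child))
        pop = parents + children
    return max(pop, key=fitness)
```

On a real MSA task the individuals would be gapped alignments and the fitness a column score such as sum-of-pairs; the hybrid structure stays the same.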
4.
Keyword search enables inexperienced users to easily search XML databases with no specific knowledge of complex structured query languages and XML data schemas. Existing work has addressed the problem of selecting data nodes that match keywords and connecting them in a meaningful way, e.g., SLCA and ELCA. However, it is time-consuming and unnecessary to serve all the connected subtrees to the users because in general the users are only interested in part of the relevant results. In this paper, we propose a new keyword search approach which basically utilizes the statistics of underlying XML data to decide the promising result types and then quickly retrieves the corresponding results with the help of selected promising result types. To guarantee the quality of the selected promising result types, we measure the correlations between result types and a keyword query by analyzing the distribution of relevant keywords and their structures within the XML data to be searched. In addition, relevant result types can be efficiently computed without keyword query evaluation and any schema information. To directly return top-k keyword search results that conform to the suggested promising result types, we design two new algorithms to adapt to the structural sensitivity of the keyword nodes over the keyword search results. Lastly, we implement all proposed approaches and present the relevant experimental results to show the effectiveness of our approach.
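As background for the node-connection semantics mentioned above, SLCA can be illustrated naively with Dewey-style paths (tuples of child indices from the root). This brute-force version is for exposition only and is far less efficient than the algorithms discussed in the literature:

```python
from itertools import product

def lca(p, q):
    """Longest common prefix of two Dewey paths = lowest common ancestor."""
    out = []
    for a, b in zip(p, q):
        if a != b:
            break
        out.append(a)
    return tuple(out)

def lca_many(paths):
    node = paths[0]
    for p in paths[1:]:
        node = lca(node, p)
    return node

def slca(*keyword_node_lists):
    """Smallest LCAs: take the LCA of one match per keyword, then drop any
    LCA that has a proper descendant which is also an LCA (naive enumeration)."""
    cands = {lca_many(combo) for combo in product(*keyword_node_lists)}
    return {c for c in cands
            if not any(d != c and d[:len(c)] == c for d in cands)}
```

For instance, with keyword "a" matching nodes (0,0) and (1,), and "b" matching (0,1) and (2,), the only smallest LCA is the subtree rooted at (0,); the root () is discarded because (0,) is a descendant LCA.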
5.
Image annotation datasets are becoming larger and larger, with tens of millions of images and tens of thousands of possible annotations. We propose a strongly performing method that scales to such datasets by simultaneously learning to optimize precision at k of the ranked list of annotations for a given image and learning a low-dimensional joint embedding space for both images and annotations. Our method both outperforms several baseline methods and, in comparison to them, is faster and consumes less memory. We also demonstrate how our method learns an interpretable model, where annotations with alternate spellings or even languages are close in the embedding space. Hence, even when our model does not predict the exact annotation given by a human labeler, it often predicts similar annotations, a fact that we try to quantify by measuring the newly introduced “sibling” precision metric, where our method also obtains excellent results.
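The embedding model and training objective are not reproduced here; the quantity being optimized, precision at k of the ranked annotation list under a joint-embedding score, is easy to state in code (function names are illustrative; the "sibling" variant would additionally count annotations similar to the ground truth as correct):

```python
import numpy as np

def rank_by_embedding(image_vec, annotation_matrix):
    """Joint-embedding scoring: one dot product per annotation row."""
    return annotation_matrix @ image_vec

def precision_at_k(scores, relevant, k):
    """Fraction of the top-k ranked annotations that are relevant.
    scores: 1-D array of scores, one per annotation id.
    relevant: set of annotation ids judged correct for the image."""
    topk = np.argsort(-scores)[:k]
    return sum(1 for a in topk if a in relevant) / k
```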
6.
Pre-processing is one of the vital steps in developing a robust and efficient recognition system. Better pre-processing not only aids better data selection but also significantly reduces computational complexity. Further, an efficient frame selection technique can improve the overall performance of the system. Pre-quantization (PQ) is the technique of selecting fewer frames in the pre-processing stage to reduce the computational burden in the post-processing stages of speaker identification (SI). In this paper, we develop PQ techniques based on spectral entropy and spectral shape to pick suitable frames containing speaker-specific information, which varies from frame to frame depending on the spoken text and environmental conditions. The attempt is to exploit the statistical properties of the distributions of speech frames at the pre-processing stage of speaker recognition. Our aim is not only to reduce the frame rate but also to keep identification accuracy reasonably high. Further, we have analyzed the robustness of our proposed techniques on noisy utterances. To establish the efficacy of our proposed methods, we used two different databases: POLYCOST (telephone speech) and YOHO (microphone speech).
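A minimal sketch of entropy-based pre-quantization, assuming frames are already extracted; whether low- or high-entropy frames are kept, and the keep ratio, are design choices of the concrete scheme rather than values from the paper:

```python
import numpy as np

def spectral_entropy(frame):
    """Shannon entropy of the normalized power spectrum of one frame."""
    p = np.abs(np.fft.rfft(frame)) ** 2
    p = p / p.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return -(p * np.log2(p)).sum()

def prequantize(frames, keep_ratio=0.5):
    """Keep the fraction of frames with the lowest spectral entropy
    (a low entropy means a peaky spectrum, e.g. voiced speech)."""
    ent = np.array([spectral_entropy(f) for f in frames])
    k = max(1, int(len(frames) * keep_ratio))
    idx = np.argsort(ent)[:k]
    return sorted(idx.tolist())
```

A pure tone concentrates its power in one bin (entropy near zero) while white noise spreads it flatly (entropy near the maximum), which is what makes the measure a usable frame selector.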
7.
《Behaviour & Information Technology》2012,31(3):261-272
The perceived interactions, induction and assimilation between colours presented on a computer screen were investigated for seven participants who gave estimates of the perceived colours. A method based on memory estimation was used. In one experiment, a red–green scale was used, while in a second experiment a white–green scale was used. The distance between objects, the shape of objects and the colour of objects were varied. A distance effect of colour interaction was found in both experiments, but it was stronger for the red–green scale. For objects adjacent to each other the interaction effects were statistically significant. For objects not adjacent to each other some smaller effects occurred. No shape effects were found. Assimilation effects were shown for the red–green colour combinations. The participants seemed to use their own internal memory scale for their judgements. A theoretical model for distance effects of colour interaction is also presented.
8.
In this paper, we study adaptive finite element approximation schemes for a constrained optimal control problem. We derive
the equivalent a posteriori error estimators for both the state and the control approximation, which particularly suit an
adaptive multi-mesh finite element scheme. The error estimators are then implemented and tested with promising numerical results.
9.
Mehrtash T. Harandi Majid Nili Ahmadabadi Babak N. Araabi 《International Journal of Computer Vision》2009,81(2):191-204
This paper presents a novel learning approach for face recognition by introducing Optimal Local Bases. Optimal local bases are a set of bases derived by reinforcement learning to represent the face space locally. The reinforcement signal is designed to be correlated with the recognition accuracy. The optimal local bases are then derived by finding the most discriminant features for different parts of the face space, which represent either different individuals or different expressions, orientations,
the recognition problem by using a single basis for all individuals, our proposed method benefits from local information by
incorporating different bases for its decision. We also introduce a novel classification scheme that uses reinforcement signal
to build a similarity measure in a non-metric space.
Experiments on AR, PIE, ORL and YALE databases indicate that the proposed method facilitates robust face recognition under
pose, illumination and expression variations. The performance of our method is compared with that of Eigenface, Fisherface,
Subclass Discriminant Analysis, and Random Subspace LDA methods as well.
10.
The star graph is viewed as an attractive alternative to the hypercube. In this paper, we investigate the Hamiltonicity of
an n-dimensional star graph. We show that for any n-dimensional star graph (n≥4) with at most 3n−10 faulty edges in which each node is incident with at least two fault-free edges, there exists a fault-free Hamiltonian
cycle. Our result improves on the previously best known result for the case where the number of tolerable faulty edges is
bounded by 2n−7. We also demonstrate that our result is optimal with respect to the worst case scenario, where every other node of a cycle
of length 6 is incident with exactly n−3 faulty noncycle edges.
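To make the object concrete (this is not the paper's fault-tolerant construction), the star graph can be built directly: vertices are the permutations of n symbols, and each edge swaps the first symbol with the i-th, so S_n has n! nodes and is (n−1)-regular:

```python
from itertools import permutations

def star_graph(n):
    """S_n: vertices are permutations of range(n); u ~ v iff v is u with
    position 0 swapped with some position i >= 1."""
    adj = {p: [] for p in permutations(range(n))}
    for p in adj:
        for i in range(1, n):
            q = list(p)
            q[0], q[i] = q[i], q[0]
            adj[p].append(tuple(q))
    return adj

adj = star_graph(4)
# S_4 has 4! = 24 nodes, each of degree n - 1 = 3.
```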
11.
We consider the problem of maintaining polynomial and exponential decay aggregates of a data stream, where the weight of values
seen from the stream diminishes as time elapses. These types of aggregation were discussed by Cohen and Strauss (J. Algorithms
1(59), 2006), and can be used in many applications in which the relative value of streaming data decreases since the time the data was
seen. Recent work has developed space-efficient algorithms for time-decaying aggregations, in particular polynomial and exponential decaying aggregations. All of the work done so far has maintained multiplicative approximations for the aggregates.
In this paper we present the first O(log N) space algorithm for the polynomial decay under a multiplicative approximation, matching a lower bound. In addition, we explore
and develop algorithms and lower bounds for approximations allowing an additive error in addition to the multiplicative error.
We show that in some cases, allowing an additive error can decrease the amount of space required, while in other cases we
cannot do any better than a solution without additive error.
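For the exponential-decay case, an exact constant-space aggregate is folklore and helps fix ideas; the harder polynomial-decay case studied in the paper does not factorize this way, which is why sketching techniques and O(log N) space are needed there:

```python
class ExpDecaySum:
    """Maintains sum over items of v_i * alpha**(t - t_i) exactly in O(1) space:
    the weight factorizes, so multiplying the running sum by alpha at each
    time step re-ages every past item at once."""
    def __init__(self, alpha):
        self.alpha = alpha
        self.s = 0.0

    def tick(self):          # advance time by one step
        self.s *= self.alpha

    def add(self, v):        # item arriving at the current time
        self.s += v

    def value(self):
        return self.s
```

For example, with alpha = 0.5, adding 4 and then 2 one step apart gives, two and one steps later respectively, 4·0.25 + 2·0.5 = 2.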
12.
Pablo San Segundo Diego Rodríguez-Losada Fernando Matía Ramón Galán 《Applied Intelligence》2010,32(3):311-329
The problem of finding the optimal correspondence between two sets of geometric entities or features is known to be NP-hard in the worst case. This problem appears in many real scenarios such as fingerprint comparisons, image matching and global localization of mobile robots. The inherent complexity of the problem can be avoided by suboptimal solutions, but these could fail with high noise or corrupted data. The correspondence problem has an interesting equivalent formulation as finding a maximum clique in an association graph. We have developed a novel algorithm to solve the correspondence problem between two sets of features based on an efficient solution to the Maximum Clique Problem using bit parallelism. It outperforms an equivalent non-bit-parallel algorithm in a number of experiments with simulated and real data from two different correspondence problems. This article validates for the first time, to the best of our knowledge, that bit-parallel optimization techniques can greatly reduce computational cost, thus making the use of an exact solution feasible in real correspondence search problems despite their inherent NP computational complexity.
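The paper's solver is not reproduced here; the sketch below shows the core bit-parallel idea using Python's arbitrary-precision integers as bitsets, so intersecting a candidate set with a vertex's neighbourhood is a single `&` and the bound is a popcount:

```python
def max_clique(adj_bits):
    """Exact maximum clique by branch and bound.
    adj_bits[v] is an int whose set bits are the neighbours of v."""
    n = len(adj_bits)
    best = []

    def expand(clique, cand):
        nonlocal best
        if not cand:
            if len(clique) > len(best):
                best = clique[:]
            return
        while cand:
            # Bound: even taking every remaining candidate cannot beat the incumbent.
            if len(clique) + bin(cand).count("1") <= len(best):
                return
            v = cand.bit_length() - 1          # highest set bit
            cand &= ~(1 << v)
            expand(clique + [v], cand & adj_bits[v])

    expand([], (1 << n) - 1)
    return sorted(best)
```

On a triangle {0, 1, 2} with a pendant vertex 3 attached to 2, the maximum clique is the triangle.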
13.
Weifeng Xia Qian Ma Junwei Lu Guangming Zhuang 《International journal of systems science》2017,48(12):2644-2657
This paper deals with the problem of reliable filtering with extended dissipativity for uncertain systems with discrete and distributed delays, where the sensor-failure model of the system is assumed to exhibit Markovian behaviour. First, based on a novel Lyapunov–Krasovskii functional, a sufficient condition is obtained which ensures that the filtering error system is stochastically stable and extended dissipative. Second, mode-dependent conditions for the solvability of the reliable filtering problem with extended dissipativity are given in terms of linear matrix inequalities (LMIs). The desired filter parameters can be derived from feasible solutions to the presented LMIs. Finally, two numerical examples are given to illustrate the effectiveness of the filter design method.
14.
In this paper, an automatic image–text alignment algorithm is developed to achieve more effective indexing and retrieval of large-scale web images by aligning web images with their most relevant auxiliary text terms or phrases. First, a large number of cross-media web pages (which contain web images and their auxiliary texts) are crawled and segmented into a set of image–text pairs (informative web images and their associated text terms or phrases). Second, near-duplicate image clustering is used to group large-scale web images into a set of clusters of near-duplicate images according to their visual similarities. The near-duplicate web images in the same cluster share similar semantics and are simultaneously associated with the same or a similar set of auxiliary text terms or phrases which co-occur frequently in the relevant text blocks; performing near-duplicate image clustering can therefore significantly reduce the uncertainty about the relatedness between the semantics of web images and their auxiliary text terms or phrases. Finally, a random walk is performed over a phrase correlation network to achieve more precise image–text alignment by refining the relevance scores between the web images and their auxiliary text terms or phrases. Our experiments on algorithm evaluation have achieved very positive results on large-scale cross-media web pages.
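The crawling and clustering stages need real data; the final refinement step, a random walk over a phrase correlation network, can be sketched on a small correlation matrix (the restart probability and tolerance are illustrative choices, not the paper's):

```python
import numpy as np

def random_walk_scores(W, seed_scores, restart=0.15, tol=1e-10):
    """Random walk with restart: iterate r = restart * s + (1 - restart) * P^T r,
    where P is the row-normalized phrase correlation matrix W and s is the
    normalized vector of initial relevance scores."""
    P = W / W.sum(axis=1, keepdims=True)
    s = seed_scores / seed_scores.sum()
    r = s.copy()
    while True:
        r_new = restart * s + (1 - restart) * P.T @ r
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new
```

Mass is preserved at each step, so the refined scores remain a distribution; phrases strongly correlated with the seeded ones gain score through the walk.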
15.
This paper introduces a tabu search heuristic for a production scheduling problem with sequence-dependent and time-dependent
setup times on a single machine. The problem consists in scheduling a set of dependent jobs, where the transition between
two jobs comprises an unrestricted setup that can be performed at any time, and a restricted setup that must be performed
outside of a given time interval which repeats daily in the same position. The setup time between two jobs is thus a function
of the completion time of the first job. The tabu search heuristic relies on shift and swap moves, and a surrogate objective
function is used to speed up the neighborhood evaluation. Computational experiments show that the proposed heuristic consistently
finds better solutions in less computation time than a recent branch-and-cut algorithm. Furthermore, on instances where the
branch-and-cut algorithm cannot find the optimal solution, the heuristic always identifies a better solution.
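The daily forbidden window that makes the setups time-dependent is specific to the paper's instances; the skeleton below shows only the generic tabu-search loop with shift and swap moves, with a plain sequence-dependent setup objective standing in for the time-dependent one:

```python
import random

def tabu_search(setup, proc, iters=200, tenure=7, seed=0):
    """Minimize completion time of a single-machine job sequence with
    sequence-dependent setup times, using shift and swap moves."""
    rng = random.Random(seed)
    n = len(proc)

    def cost(seq):
        t, prev = 0, None
        for j in seq:
            t += (setup[prev][j] if prev is not None else 0) + proc[j]
            prev = j
        return t

    cur = list(range(n))
    rng.shuffle(cur)
    best, best_c = cur[:], cost(cur)
    tabu = {}

    for it in range(iters):
        moves = [("swap", i, k) for i in range(n) for k in range(i + 1, n)]
        moves += [("shift", i, k) for i in range(n) for k in range(n) if i != k]
        cand = None
        for mv in moves:
            kind, i, k = mv
            seq = cur[:]
            if kind == "swap":
                seq[i], seq[k] = seq[k], seq[i]
            else:
                seq.insert(k, seq.pop(i))
            c = cost(seq)
            # Tabu unless the move improves on the global best (aspiration).
            if tabu.get(mv, -1) >= it and c >= best_c:
                continue
            if cand is None or c < cand[0]:
                cand = (c, mv, seq)
        if cand is None:
            break
        c, mv, cur = cand
        tabu[mv] = it + tenure
        if c < best_c:
            best, best_c = cur[:], c
    return best, best_c
```

The paper additionally replaces `cost` by a cheaper surrogate during neighborhood evaluation; that refinement is omitted here.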
16.
Giovanni Bellettini Valentina Beorchia Maurizio Paolini 《Journal of Mathematical Imaging and Vision》2008,32(3):265-291
We introduce and study a two-dimensional variational model for the reconstruction of a smooth generic solid shape E, which may handle self-occlusions and can be considered as an improvement of the 2.1D sketch of Nitzberg and Mumford
(Proceedings of the Third International Conference on Computer Vision, Osaka, 1990). We characterize from the topological viewpoint the apparent contour of E, namely, we characterize those planar graphs that are apparent contours of some shape E. This is the classical problem of recovering a three-dimensional layered shape from its apparent contour, which is of interest
in theoretical computer vision. We make use of the so-called Huffman labeling (Machine Intelligence, vol. 6, Am. Elsevier,
New York, 1971), see also the papers of Williams (Ph.D. Dissertation, 1994 and Int. J. Comput. Vis. 23:93–108, 1997) and the paper of Karpenko and Hughes (Preprint, 2006) for related results. Moreover, we show that if E and F are two shapes having the same apparent contour, then E and F differ by a global homeomorphism which is strictly increasing on each fiber along the direction of the eye of the observer.
These two topological theorems allow us to find the domain of the functional ℱ describing the model. Compactness, semicontinuity and relaxation properties of ℱ are then studied, as well as connections of our model with the problem of completion of hidden contours.
17.
18.
We describe a method of representing human activities that allows a collection of motions to be queried without examples, using a simple and effective query language. Our approach is based on units of activity at segments of the body, which can be composed across space and across the body to produce complex queries. The presence of search units is inferred automatically by tracking the body, lifting the tracks to 3D and comparing to models trained using motion capture data. Our models of short-time-scale limb behaviour are built using a labelled motion capture set. We show results for a large range of queries applied to a collection of complex motions and activities. We compare with discriminative methods applied to tracker data; our method offers significantly improved performance. We show experimental evidence that our method is robust to view direction and is unaffected by some important changes of clothing.
19.
Vítor A. Coutinho Renato J. Cintra Fábio M. Bayer Sunera Kulasekera Arjuna Madanayake 《Journal of Real-Time Image Processing》2016,11(2):247-249
A multiplierless pruned approximate eight-point discrete cosine transform (DCT) requiring only ten additions is introduced. The proposed algorithm was assessed in image and video compression, showing competitive performance with state-of-the-art methods. Digital synthesis in 45 nm CMOS technology up to the place-and-route level indicates a clock speed of 288 MHz at a 1.1 V supply. The \(8\times 8\) block rate is 36 MHz. The DCT approximation was embedded into the HEVC reference software; the resulting video frames, at up to 327 Hz for 8-bit RGB HEVC, presented negligible image degradation.
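The ten-addition pruned transform itself is defined in the paper; for reference, the exact orthonormal eight-point DCT-II that it approximates is easy to write down, and any candidate approximation can be compared against it:

```python
import numpy as np

def dct_ii_matrix(N=8):
    """Exact orthonormal DCT-II matrix: C[k, n] = s_k * cos(pi*(2n+1)*k/(2N)),
    with s_0 = sqrt(1/N) and s_k = sqrt(2/N) for k > 0."""
    C = np.zeros((N, N))
    for k in range(N):
        s = np.sqrt(1.0 / N) if k == 0 else np.sqrt(2.0 / N)
        for n in range(N):
            C[k, n] = s * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    return C

C = dct_ii_matrix()
# Orthonormality: C @ C.T is the identity, so the inverse transform is C.T.
```

A constant input vector, for instance, maps entirely onto the DC coefficient, which is the energy-compaction property approximations try to preserve cheaply.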
20.
We provide optimal parameter estimates and a priori error bounds for symmetric discontinuous Galerkin (DG) discretisations of the second-order indefinite time-harmonic Maxwell equations. More specifically, we consider two variations of symmetric DG methods: the interior penalty DG (IP-DG) method and one that makes use of the local lifting operator in the flux formulation. As a novelty, our parameter estimates and error bounds (i) are valid in the pre-asymptotic regime; (ii) depend solely on the geometry and the polynomial order; and (iii) are free of unspecified constants. Such estimates are particularly important in three-dimensional (3D) simulations because in practice many 3D computations occur in the pre-asymptotic regime. Therefore, it is vital that the numerical experiments that accompany our theoretical results are also in 3D. They are carried out on tetrahedral meshes with high-order (p = 1, 2, 3, 4) hierarchic H(curl)-conforming polynomial basis functions.