Similar Documents
 20 similar documents found (search time: 497 ms)
1.
In this paper we perform a comparative study of the forward and backward Liouville mappings applied to the modeling of ring-shaped and non-gyrotropic velocity distribution functions of particles injected into a sheared electromagnetic field. The test-kinetic method is used to compute the velocity distribution function in various areas of a proton cloud moving in the vicinity of a region with a sharp transition of the magnetic field and a non-uniform electric field. In the forward approach the velocity distribution function is computed for a two-dimensional spatial bin, while in the backward approach the distribution function is averaged over a spatial bin of the same size using a two-dimensional trapezoidal integration scheme. It is shown that the two approaches lead to similar results for spatial bins in which the velocity distribution function varies smoothly. For bins covering regions of configuration space characterized by sharp spatial gradients of the velocity distribution function, however, the forward and backward approaches generally provide different results.
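The contrast between the two mappings can be illustrated in the simplest setting, force-free motion in one dimension: the backward approach evaluates the initial distribution along straight-line characteristics, while the forward approach bins a pushed-forward particle ensemble. A minimal sketch, not the authors' test-kinetic code; the drifting-Gaussian initial condition, bin placement and normalization are illustrative assumptions:

```python
import numpy as np

def f0(x, v):
    # Initial phase-space distribution: a drifting Gaussian beam (peak 1).
    return np.exp(-x**2) * np.exp(-(v - 1.0)**2)

def f_backward(x, v, t):
    # Backward Liouville mapping: trace the characteristic from (x, v) at
    # time t back to t = 0 and evaluate the initial distribution there.
    # For free streaming, X(0) = x - v*t and V(0) = v.
    return f0(x - v * t, v)

# Forward approach: push a particle ensemble forward in time, then
# estimate the distribution by binning (here around the point (2, 1)).
rng = np.random.default_rng(0)
x0 = rng.normal(0.0, np.sqrt(0.5), 200_000)   # samples distributed as f0
v0 = rng.normal(1.0, np.sqrt(0.5), 200_000)
t = 2.0
x_t = x0 + v0 * t
in_bin = (np.abs(x_t - 2.0) < 0.1) & (np.abs(v0 - 1.0) < 0.1)
forward_est = np.pi * in_bin.mean() / 0.04    # pi restores the peak-1 normalization
```

In this smooth case the binned forward estimate agrees with the backward evaluation; the disagreement discussed in the abstract appears only when the bin straddles a sharp gradient of the distribution.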

2.
In this paper an effort has been made to improve the time complexity of the existing geometric hashing based indexing approach for iris biometrics [1]. In the conventional approach, the annular iris image is used for the extraction of keypoints using the Scale Invariant Feature Transform [2]. Geometric hashing [3] is then used to index the database using the extracted keypoints. The existing approach achieves an accuracy of 98.5% with improved running time. To further improve the time complexity, the existing geometric hashing approach is parallelized in both the indexing and retrieval phases. In the proposed approach, the extracted keypoints are mapped to the processors of a hypercube through shared global memory. The geometric invariants are obtained for each basis pair allotted to individual processors in parallel. During the indexing phase, these invariants are stored in the hash table. For iris retrieval, the invariants are obtained and the corresponding entries in the hash table receive a vote. The time complexity of the proposed approach is O(Mn²) for M iris images each having n keypoints, compared with the O(Mn³) of the existing approach. This marks the suitability of the proposed approach for real-time applications.
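The underlying geometric hashing scheme (before any parallelization) can be sketched as follows: for every ordered basis pair of keypoints, the remaining points are expressed in that pair's similarity-invariant frame, quantized, and used as hash-table keys; retrieval votes for every image stored under a matching key. A serial illustrative sketch, not the paper's parallel hypercube implementation; the quantization step `q` and toy point sets are assumptions:

```python
import numpy as np
from collections import defaultdict

def invariants(points, b0, b1):
    # Coordinates of all points in the frame defined by the ordered basis
    # pair (b0, b1): invariant under translation, rotation and scaling.
    origin = (b0 + b1) / 2.0
    e1 = b1 - b0
    scale = np.linalg.norm(e1)
    e1 = e1 / scale
    e2 = np.array([-e1[1], e1[0]])
    rel = (points - origin) / scale
    return np.stack([rel @ e1, rel @ e2], axis=1)

def quantize(coord, q=0.25):
    return tuple(np.round(coord / q).astype(int))

def index_image(table, image_id, pts):
    n = len(pts)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            inv = invariants(pts, pts[i], pts[j])
            for k in range(n):
                if k not in (i, j):
                    table[quantize(inv[k])].append(image_id)

def retrieve(table, pts):
    votes = defaultdict(int)
    n = len(pts)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            inv = invariants(pts, pts[i], pts[j])
            for k in range(n):
                if k not in (i, j):
                    for image_id in table.get(quantize(inv[k]), []):
                        votes[image_id] += 1
    return max(votes, key=votes.get) if votes else None

table = defaultdict(list)
A = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
B = np.array([[0.0, 0.0], [3.0, 0.0], [1.0, 2.0], [0.0, 4.0]])
index_image(table, "A", A)
index_image(table, "B", B)
th = np.pi / 6
R = np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
query = 2.0 * A @ R.T + np.array([5.0, -3.0])   # rotated, scaled, shifted copy of A
```

The parallel version in the paper distributes the basis pairs of the two outer loops across processors, which is what reduces the per-image work from O(n³) to O(n²) per processor.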

3.
Mainstream Web image retrieval methods consider only visual features and do not make full use of the text accompanying Web images, ignoring the valuable semantics contained in the related text, which limits their representational power for images. To address this problem, a new unsupervised image hashing method, Semantic Transfer Deep Visual Hashing (STDVH), is proposed. The method first uses spectral clustering to mine the semantic information of the training text; it then builds a deep convolutional neural network to transfer the text semantics into the learning of image hash codes; finally, the image hash codes and hash functions are trained in a unified framework, enabling effective retrieval of large-scale Web image data in a low-dimensional Hamming space. Experiments on two public Web image datasets, Wiki and MIR Flickr, demonstrate the superiority of this method over other state-of-the-art hashing algorithms.

4.
With the advance of internet and multimedia technologies, large-scale multi-modal representation techniques such as cross-modal hashing are increasingly in demand for multimedia retrieval. In cross-modal hashing, three essential problems should be seriously considered. The first is that an effective cross-modal relationship should be learned from training data with scarce label information. The second is that appropriate weights should be assigned to different modalities to reflect their importance. The last is the scalability of the training process, which is usually ignored by previous methods. In this paper, we propose Multi-graph Cross-modal Hashing (MGCMH), which comprehensively considers these three points. MGCMH is an unsupervised method that integrates multi-graph learning and hash function learning into a joint framework to learn a unified hash space for all modalities. In MGCMH, different modalities are assigned proper weights for the generation of the multi-graph and the hash codes respectively. As a result, a more precise cross-modal relationship can be preserved in the hash space. The Nyström approximation is then leveraged to construct the graphs efficiently. Finally, an alternating learning algorithm is proposed to jointly optimize the modality weights, hash codes and hash functions. Experiments conducted on two real-world multi-modal datasets demonstrate the effectiveness of our method in comparison with several representative cross-modal hashing methods.
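The scalability ingredient here is the Nyström method: instead of evaluating the full n×n affinity matrix of a modality graph, only the columns for m sampled landmark points are computed and the rest is reconstructed. A minimal sketch assuming an RBF affinity (the kernel choice and landmark sampling are illustrative, not the paper's exact construction):

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Pairwise RBF affinities between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def nystrom_affinity(X, n_landmarks, gamma=0.5, seed=0):
    # Nystrom approximation of the full n x n affinity matrix from
    # n_landmarks sampled columns: K ~ K_nm @ pinv(K_mm) @ K_nm.T,
    # needing O(n*m) kernel evaluations instead of O(n^2).
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=n_landmarks, replace=False)
    K_nm = rbf_kernel(X, X[idx], gamma)
    K_mm = rbf_kernel(X[idx], X[idx], gamma)
    return K_nm @ np.linalg.pinv(K_mm) @ K_nm.T

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2))
K_exact = rbf_kernel(X, X)
K_full = nystrom_affinity(X, 40)    # all points as landmarks: exact recovery
K_approx = nystrom_affinity(X, 15)  # 15 landmarks: cheap approximation
```

With all points taken as landmarks the reconstruction is exact, and with fewer landmarks the approximation error grows gracefully, which is why the method scales graph construction to large training sets.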

5.
刘冶, 潘炎, 夏榕楷, 刘荻, 印鉴. 《计算机科学》 (Computer Science), 2016, 43(9): 39-46, 51
In the era of big data, applying image retrieval to large-scale data is an active research area. In recent years, image hashing algorithms have attracted wide attention in large-scale image retrieval systems because they improve retrieval efficiency while reducing storage. Existing supervised hashing algorithms have several problems: mainstream supervised methods rely on hand-crafted image feature representations obtained from a feature extractor, and the resulting loss of image features degrades hashing performance and handles semantic similarity within image datasets poorly. With the rise of deep learning on large-scale data, some studies have attempted to learn supervised hash functions with deep neural networks and improved their effectiveness, but such methods require complex, dataset-specific network designs, which increases the difficulty of designing hash functions; moreover, training deep neural networks demands large amounts of data and long training times. These issues hinder the application of deep-learning-based hashing algorithms to large-scale datasets. To address them, a fast image hashing algorithm based on deep convolutional neural networks is proposed. By designing a solution method for the optimization problem and using a pre-trained large-scale deep neural network, the algorithm improves hashing effectiveness while markedly shortening the training time of the complex network. Experimental results on several image datasets show that, compared with existing baseline algorithms, the proposed algorithm improves substantially in both hash-function training effectiveness and training time.

6.
A fracture-surface matching algorithm based on integral invariants
A fracture-surface matching algorithm based on integral invariants is proposed. Initial matching point pairs are obtained from the volume integral invariants of feature points at multiple scales; compatibility constraints are then used to compare point similarity, prune false matches, and assemble a list of matching pairs. For each pair in the list, the set of all 3D transformations that map its normal directions into agreement is computed, and a two-level geometric hashing scheme casts votes for the matching pairs and their corresponding transformations; when the vote count exceeds a given threshold, the two fracture surfaces are declared matched. Experimental results show that the algorithm achieves both partial and complete matching of fracture surfaces.

7.
The online bin packing problem is a well-known bin packing variant which requires immediate decisions to be made for the placement of a lengthy sequence of arriving items of various sizes, one at a time, into fixed-capacity bins without any overflow. The overall goal is maximising the average bin fullness. We investigate a 'policy matrix' representation for one-dimensional online bin packing, which assigns a score to each decision option independently; the option with the highest value is chosen. A policy matrix might also be considered a heuristic with many parameters, where each parameter value is a score. We hence effectively investigate a framework which can be used for creating heuristics via many parameters. The proposed framework combines a Genetic Algorithm optimiser, which searches the space of heuristics in policy matrix form, and an online bin packing simulator, which acts as the evaluation function. The empirical results indicate the success of the proposed approach, providing the best solutions for almost all item sequence generators used during the experiments. We also present a novel fitness landscape analysis on the search space of policies. This study hence gives evidence of the potential for automated discovery by intelligent systems of powerful heuristics for online problems, reducing the need for expensive use of human expertise.
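The policy-matrix representation can be made concrete in a few lines: the matrix scores placing an item of size s into a bin with remaining capacity r, and the simulator simply picks the feasible option with the highest score. A sketch under assumed conventions (integer sizes, capacity 10, opening a new bin modelled as the option r == CAP); the hand-crafted matrix below reproduces the classical best-fit heuristic, whereas the paper evolves such matrices with a Genetic Algorithm:

```python
CAP = 10  # bin capacity; item sizes are integers in 1..CAP

def pack(items, policy):
    # policy[s][r] scores placing an item of size s into a bin whose
    # remaining capacity is r; the feasible option with the highest score
    # wins. Opening a fresh bin is the option with r == CAP.
    bins = []  # remaining capacities of open bins
    for s in items:
        options = [(policy[s][r], i) for i, r in enumerate(bins) if r >= s]
        options.append((policy[s][CAP], len(bins)))  # open a new bin
        _, best = max(options)
        if best == len(bins):
            bins.append(CAP)
        bins[best] -= s
    return bins

def avg_fullness(bins):
    return sum(CAP - r for r in bins) / (CAP * len(bins))

# One hand-crafted matrix: scoring tighter fits higher yields best fit.
best_fit = {s: {r: CAP - (r - s) for r in range(s, CAP + 1)}
            for s in range(1, CAP + 1)}
```

A GA would treat each `policy[s][r]` entry as a gene and use `avg_fullness` over simulated item sequences as the fitness, which is exactly the framework the abstract describes.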

8.

The robustness of a visual servoing task depends mainly on the efficiency of the visual selections captured from a sensor at each robot position. A task function can be described as a regulation of the values sent via the control law to the camera velocities. In this paper we propose a new approach that does not depend on matching and tracking results: we replace the classical minimization cost with a new function based on probability distributions and the Bhattacharyya distance. To guarantee more robustness, the information related to the observed images is expressed using a combination of orientation selections. The new visual selections are computed by referring to the disposition of Histograms of Oriented Gradients (HOG) bins. To each bin we assign a random variable representing the gradient vectors in a particular direction. The new entries are not used to establish equations of visual motion; instead, they are inserted directly into the control loop. A new formulation of the interaction matrix is presented based on the optical flow constraint and an interpolation function, leading to more efficient control behaviour and greater positioning accuracy. Experiments demonstrate the robustness of the proposed approach with respect to varying workspace conditions.
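The distance being regulated is straightforward to compute from two normalized orientation histograms. A minimal illustration (the toy HOG vectors are assumptions; the paper's control-loop integration is not reproduced):

```python
import numpy as np

def bhattacharyya_distance(p, q):
    # Bhattacharyya distance between two normalized histograms
    # (e.g. HOG orientation histograms): -ln of the Bhattacharyya
    # coefficient, which is 0 for identical distributions.
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    bc = np.sqrt(p * q).sum()          # Bhattacharyya coefficient in [0, 1]
    return -np.log(max(bc, 1e-12))

# Shifting gradient energy into neighbouring orientation bins moves the
# distance smoothly away from zero, which is what the control law regulates.
hog_ref  = [0.40, 0.30, 0.20, 0.10]
hog_near = [0.35, 0.33, 0.21, 0.11]
hog_far  = [0.05, 0.15, 0.30, 0.50]
```

Because the measure varies smoothly with histogram perturbations, it can drive camera velocities without point-wise matching or tracking.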


9.
Blind estimation of the mixing matrix under weak time-frequency orthogonality
Exploiting the weak time-frequency orthogonality of speech signals, a mixing-matrix estimation method based on principal component analysis is proposed. In the time-frequency domain, any number of sources may be active at each time-frequency point. Principal component analysis is applied at each point to detect the points at which only a single source is present; at such points, the eigenvector associated with the largest eigenvalue is an estimate of one mixing vector. All estimated mixing vectors are therefore clustered with K-means, and the cluster centers are taken as the estimate of the mixing matrix. Simulations show that the proposed method improves the estimation accuracy of the mixing matrix and is particularly suitable for estimating the mixing matrix in the underdetermined case.
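The pipeline can be sketched end to end on synthetic data: frames in which one source dominates yield local covariance matrices whose dominant eigenvector points along one mixing column, and clustering those eigenvectors recovers the columns. A simplified sketch, not the paper's algorithm: the single-source detection step is replaced by simulating frames that are already single-source dominant, and a tiny cosine k-means (with farthest-point initialization) stands in for standard K-means:

```python
import numpy as np

rng = np.random.default_rng(1)
# Normalized mixing matrix: 2 sensors, 3 sources (underdetermined case).
A = np.array([[1.0, 0.6, -0.4],
              [0.2, 0.8,  0.9]])
A /= np.linalg.norm(A, axis=0)

def dominant_direction(C):
    # Eigenvector of the largest eigenvalue of a local covariance
    # matrix, with the sign ambiguity fixed.
    _, E = np.linalg.eigh(C)
    v = E[:, -1]
    return v if v[0] >= 0 else -v

# Simulate frames in which a single source dominates (the points the
# weak-orthogonality assumption says exist in the time-frequency plane).
frames = []
for _ in range(300):
    k = rng.integers(3)
    S = rng.normal(0.0, 0.01, (3, 64))   # weak background activity
    S[k] = rng.normal(0.0, 1.0, 64)      # one dominant source
    X = A @ S
    frames.append(X @ X.T / 64)

V = np.array([dominant_direction(C) for C in frames])

# Tiny k-means on the unit sphere with farthest-point initialization.
centers = [V[0]]
for _ in range(2):
    d = np.min([1.0 - V @ c for c in centers], axis=0)
    centers.append(V[int(np.argmax(d))])
centers = np.array(centers)
for _ in range(50):
    assign = np.argmax(V @ centers.T, axis=1)
    for j in range(3):
        members = V[assign == j]
        if len(members):
            c = members.mean(axis=0)
            centers[j] = c / np.linalg.norm(c)

A_hat = centers.T   # columns estimate the mixing vectors (up to sign/order)
```

Note the recovery is only up to the usual sign and permutation ambiguities of blind estimation.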

10.
11.
This paper proposes an efficient indexing scheme for a palmprint-based identification system. The proposed system uses geometric hashing of SURF key-points to index the palmprints into a hash table and performs score-level fusion of a voting strategy based on geometric hashing and the SURF score to identify the live palmprint. All ordered pairs of SURF key-points of the palmprint are scaled and mapped to a predefined coordinate system, and all other points are similarity-transformed; the new location after transformation serves as the index into the hash table. During identification, all ordered pairs of key-points of the live palmprint are scaled and mapped to the coordinate system while the remaining points are similarity-transformed. A vote is cast for all images in the corresponding bins. Images receiving more votes than a certain threshold are considered candidate images for the live palmprint. SURF features of the live palmprint and the candidate images are compared for matching. Matching scores based on SURF key-points and the vote of the corresponding candidate image are fused using a weighted sum rule, and the candidate image with the highest fused score is considered the best match. The system is tested on the IITK, CASIA and PolyU datasets. It has been observed that the penetration rate of the proposed system is less than 30% for 0% bin miss rate (BMR), with an identification accuracy of more than 97% on all three datasets. Further, the system is evaluated for robustness on downscaled and rotated images: the identification accuracy for the top match is more than 90% for images downscaled by up to 49% and more than 85% when images are rotated at any angle.

12.
As manufacturing geometries continue to shrink and circuit performance increases, fast fault detection and semiconductor yield improvement are of increasing concern. Circuits must be controlled to reduce parametric yield loss, and the resulting circuits tested to guarantee that they meet specifications. In this paper, a hybrid approach that integrates the Self-Organizing Map and the Support Vector Machine for wafer bin map classification is proposed. The log odds ratio test is employed as a spatial clustering preprocessor to distinguish between systematic and random wafer bin map distributions. After a smoothing step is performed on the wafer bin map, features such as the co-occurrence matrix and moment invariants are extracted. The wafer bin maps are then clustered with the Self-Organizing Map using these features, and the Support Vector Machine is applied to classify them and identify the manufacturing defects. The proposed method can transform a large number of wafer bin maps into a small group of specific failure patterns and thus shorten the time and scope of troubleshooting for yield improvement. Real data on over 3000 wafers were applied to the proposed approach. The experimental results show that our approach can obtain over 90% classification accuracy and outperforms a back-propagation neural network.

13.
This paper studies the computation of projective invariants in pairs of images from uncalibrated cameras and presents a detailed study of the projective and permutation invariants for configurations of points and/or lines. Two basic computational approaches are given, one algebraic and one geometric. In each case, invariants are computed in projective space or directly from image measurements. Finally, we develop combinations of those projective invariants which are insensitive to permutations of the geometric primitives of each of the basic configurations.
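The prototypical projective invariant of a point configuration is the cross-ratio of four collinear points: it is unchanged by any projective transformation of the line. A minimal numerical illustration (the specific points and transformation matrix are arbitrary choices):

```python
import numpy as np

def cross_ratio(a, b, c, d):
    # Cross-ratio of four collinear points given by scalar coordinates
    # along their line: the classic projective invariant.
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def mobius(x, M):
    # A 1D projective transformation x -> (m00*x + m01) / (m10*x + m11).
    return (M[0, 0] * x + M[0, 1]) / (M[1, 0] * x + M[1, 1])

pts = np.array([0.0, 1.0, 2.0, 4.0])
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])          # any matrix with nonzero determinant
warped = mobius(pts, M)             # image of the points under the map
```

Permutation invariance, the paper's second concern, arises because relabelling the four points changes the cross-ratio's value; symmetric combinations of the six possible values remove that dependence.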

14.
Multimedia-based hashing is considered an important technique for achieving authentication and copy detection in digital contents. However, 3D model hashing has not been as widely used as image or video hashing. In this study, we develop a robust 3D mesh-model hashing scheme based on the heat kernel signature (HKS), which can describe a multi-scale shape curve and is robust against isometric modifications. We further discuss the robustness, uniqueness, security, and spaciousness of the method for 3D model hashing. In the proposed hashing scheme, we calculate the local and global HKS coefficients of vertices through time scales and 2D cell coefficients by clustering HKS coefficients with variable bin sizes based on an estimated L2 risk function, and we generate the binary hash through binarization of the intermediate hash values obtained by combining the cell values and random values. In addition, we use two parameters, the bin center points and cell amplitudes, obtained through an iterative refinement process, to further improve the robustness, uniqueness, security, and spaciousness, and we combine them in a hash with a key. By evaluating the robustness, uniqueness, and spaciousness experimentally, and through a security analysis based on differential entropy, we verify that our hashing scheme outperforms conventional hashing schemes.

15.
Blind source separation (BSS) is a challenging problem in real-world environments where sources are time-delayed and convolved. The problem becomes more difficult in very reverberant conditions, with an increasing number of sources, and with geometric configurations of the sources such that finding directionality alone is not sufficient for source separation. In this paper, we propose a new algorithm that exploits higher-order frequency dependencies of source signals in order to separate them when they are mixed. In the frequency domain, this formulation assumes that dependencies exist between frequency bins instead of defining independence for each frequency bin; in this manner, we can avoid the well-known frequency permutation problem. To derive the learning algorithm, we define a cost function which is an extension of mutual information between multivariate random variables. By introducing a source prior that models the inherent frequency dependencies, we obtain a simple form of multivariate score function. In experiments, we generate simulated data with various kinds of sources in various environments, evaluate the performance, and compare it with that of other well-known algorithms. The results show the proposed algorithm outperforms the others in most cases. The algorithm is also able to accurately recover six sources with six microphones; in this case, we obtain about 16 dB of signal-to-interference ratio (SIR) improvement. Similar performance is observed in real conference-room recordings with three human speakers reading sentences and one loudspeaker playing music.

16.
We propose a new distance called the Hierarchical Semantic-Based Distance (HSBD), devoted to the comparison of nominal histograms equipped with a dissimilarity matrix providing the semantic correlations between the bins. The computation of this distance is based on a hierarchical strategy, progressively merging the considered instances (and their bins) according to their semantic proximity. At each level of this hierarchy, a standard bin-to-bin distance is computed between the corresponding pair of histograms. To obtain the proposed distance, these bin-to-bin distances are then fused, taking into account the semantic coherency of their associated level. By this construction, the proposed distance can handle histograms that are usually compared with cross-bin distances: it preserves the advantages of such cross-bin distances (namely robustness to histogram translation and histogram bin-size issues) while inheriting the low computational cost of bin-to-bin distances. Validation in the context of geographical data classification emphasizes the relevance and usefulness of the proposed distance.
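The merge-then-fuse strategy can be sketched as follows. This is a hedged simplification, not the authors' implementation: bins are merged pairwise by smallest mean cross-dissimilarity, the per-level distance is L1, and the geometrically decaying level weights stand in for the paper's semantic-coherency weighting:

```python
import numpy as np

def hsbd_sketch(h1, h2, D, alpha=0.5):
    # Hierarchical semantic-based distance, simplified: repeatedly merge
    # the two semantically closest bin groups (per dissimilarity matrix D),
    # take an L1 bin-to-bin distance at every level of the hierarchy, and
    # average the levels with weights alpha**level.
    h1 = np.asarray(h1, float)
    h2 = np.asarray(h2, float)
    groups = [[i] for i in range(len(h1))]
    weighted, total, w = 0.0, 0.0, 1.0
    while True:
        a = np.array([h1[g].sum() for g in groups])
        b = np.array([h2[g].sum() for g in groups])
        weighted += w * np.abs(a - b).sum()
        total += w
        if len(groups) == 1:
            break
        # merge the pair of groups with the smallest mean cross-dissimilarity
        best, pair = np.inf, None
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                d = D[np.ix_(groups[i], groups[j])].mean()
                if d < best:
                    best, pair = d, (i, j)
        i, j = pair
        groups[i] = groups[i] + groups[j]
        del groups[j]
        w *= alpha
    return weighted / total

# Three nominal bins: bins 0 and 1 are semantically close, bin 2 is far.
D = np.array([[0.0, 0.1, 1.0],
              [0.1, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
d_close = hsbd_sketch([1, 0, 0], [0, 1, 0], D)  # mass moved to a close bin
d_far   = hsbd_sketch([1, 0, 0], [0, 0, 1], D)  # mass moved to a far bin
```

The cross-bin behaviour the abstract claims shows up directly: moving mass into a semantically close bin cancels at an early merge level and is penalized less than moving it into a distant bin.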

17.
18.
In this paper, a robust hash technique for image content authentication using histograms is proposed. The histogram-based hash techniques reported in the literature are robust against content-preserving manipulations as well as incidental distortion. Their major drawback is that they are not sensitive to content-changing manipulations, nor to image modifications that leave the histogram unaltered. To overcome these drawbacks, we present a novel hash technique which divides the image into non-overlapping blocks and distributes the histogram bins of each image block into larger containers based on the partial sum of the pixel counts of the histogram bins. An intermediate hash is produced by computing the ratio of pixel counts between neighbouring containers, and the intermediate image hash is obtained by concatenating the intermediate hashes of the image blocks. Finally, the intermediate image hash is normalized and randomly permuted with a secret key to produce a robust and secure hash. The results show that the proposed method performs better than existing methods against content-preserving manipulations, and that it is more sensitive to content-changing manipulations as well as to image modifications that leave the histogram unaltered. The performance results on image authentication indicate that the proposed method has high discriminative capability and strong robustness.
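The container-and-ratio idea can be sketched for a single block. This is a simplified stand-in, not the paper's scheme: containers here are fixed-width groups of histogram bins (the paper builds them from equal partial sums), and the per-block concatenation and normalization steps are omitted:

```python
import numpy as np

def block_hash(block, container_bins=32, key=7):
    # Group the 256 grey-level histogram bins into containers, compare the
    # pixel counts of neighbouring containers, binarize the ratios, and
    # permute the bits with a key-seeded permutation (standing in for the
    # secret-key permutation of the paper).
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    counts = hist.reshape(-1, container_bins).sum(axis=1)
    ratios = (counts[:-1] + 1.0) / (counts[1:] + 1.0)  # +1 avoids division by zero
    bits = (ratios >= 1.0).astype(int)
    rng = np.random.default_rng(key)
    return bits[rng.permutation(bits.size)]

# Toy 16x256 "block" whose histogram is uniform over grey levels.
block = np.tile(np.arange(256, dtype=np.uint8), (16, 1))
h = block_hash(block)
```

Because the bits come from ratios of container counts rather than raw counts, small content-preserving changes that shuffle pixels within a container leave the hash unchanged, while redistributions across containers flip bits.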

19.
The variable-sized bin packing problem with multiple constraints, an extension of the classical bin packing problem, has very broad applications. In the loading stage of logistics companies that ship mainly by truck, transport cost is not determined solely by the space utilization of the truck body. This paper analyses the differences between this class of bin packing problems and the traditional container-loading problem and, on that basis, gives a new definition of the variable-sized bin packing problem. In addition to item volume, the parameter of the classical problem, parameters such as item type and bin type are introduced, and a mathematical model is established. The classical FFD (First Fit Decreasing) algorithm is generalized into a new algorithm, MFFD, and the related algorithmic complexity is analysed. Finally, simulation experiments on FF, FFD and MFFD show that MFFD performs well when the relevant parameters follow a uniform distribution.
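The baseline being generalized is classical FFD. A minimal reference implementation (single bin type, no item-type or bin-type constraints; the MFFD extensions described above are not reproduced here):

```python
def first_fit_decreasing(items, capacity):
    # Classical FFD: sort items by size in decreasing order, then place
    # each item into the first open bin it fits in, opening a new bin
    # only when no open bin has room.
    bins = []          # each bin: [remaining_capacity, item, item, ...]
    for s in sorted(items, reverse=True):
        for b in bins:
            if b[0] >= s:
                b[0] -= s
                b.append(s)
                break
        else:
            bins.append([capacity - s, s])
    return [b[1:] for b in bins]

packing = first_fit_decreasing([5, 4, 3, 2, 2], capacity=8)
```

MFFD, as defined in the paper, additionally has to choose among bin types and respect item-type compatibility, which is where it departs from this single-capacity loop.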

20.
Combining the advantages of dual chaotic systems and conventional hash functions, a new method for constructing a keyed one-way hash function is proposed. The method couples a tent map and a logistic map into a dual chaotic system that generates chaotic sequences; these serve as dynamic parameters, replacing the fixed parameters of conventional hash algorithms, in the round-function computation that produces the digest. Results show that the method has a large key space, good one-wayness, and strong sensitivity to initial values and to the key.
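The tent/logistic coupling can be illustrated with a toy absorb-and-squeeze construction. This is a hedged sketch only: it is not the paper's round-function design, the map parameters and key are arbitrary, and it has not been cryptographically vetted:

```python
def tent(x, mu=1.9999):
    # Tent map on [0, 1).
    return mu * x if x < 0.5 else mu * (1.0 - x)

def logistic(x, r=3.9999):
    # Logistic map on [0, 1).
    return r * x * (1.0 - x)

def chaotic_hash(message: bytes, key: float = 0.3357, digest_bits: int = 128):
    # Absorb: each message byte perturbs the chaotic state, which is then
    # pushed through both maps. Squeeze: keep iterating and threshold the
    # state to emit digest bits.
    x = key
    for byte in message:
        x = logistic(tent((x + (byte + 1) / 257.0) % 1.0))
    bits = []
    for _ in range(digest_bits):
        x = logistic(tent(x))
        bits.append('1' if x >= 0.5 else '0')
    return ''.join(bits)
```

The sensitivity properties claimed in the abstract show up even in this toy version: changing one message byte or the key value sends the trajectory onto a diverging orbit and yields an unrelated digest.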
