Similar documents: 20 results.
1.
The distance transform (DT) is an image computation tool which can be used to extract information about the shape and the position of the foreground pixels relative to each other. It converts a binary image into a grey-level image, where each pixel has a value corresponding to the distance to the nearest foreground pixel. The time complexity of computing the distance transform depends strongly on the distance metric used; in particular, the more exact the distance transform, the longer the execution time. Nowadays, quite often thousands of images must be processed in a limited time, and it is hardly feasible for a sequential computer to compute the distance transform in real time. To provide efficient distance transform computation, it is highly desirable to develop a parallel algorithm for this operation. In this paper, based on the diagonal propagation approach, we first provide an O(N^2) time sequential algorithm to compute the chessboard distance transform (CDT) of an N × N image, which is a DT using the chessboard distance metric. Based on the proposed sequential algorithm, the CDT of a 2D binary image array of size N × N can be computed in O(log N) time on the EREW PRAM model using O(N^2/log N) processors, in O(log log N) time on the CRCW PRAM model using O(N^2/log log N) processors, and in O(log N) time on the hypercube computer using O(N^2/log N) processors. Following the mapping proposed by Lee and Horng, an algorithm for the medial axis transform is also efficiently derived, achieving the same time and processor bounds on all three models. The proposed parallel algorithms are composed of a set of prefix operations. In each prefix operation phase, only the increase (add-one) operation and the minimum operation are employed, so the algorithms are especially efficient in practical applications.
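For reference, a minimal sequential sketch of the chessboard DT via the classical two-pass chamfer scan — not the paper's diagonal-propagation or prefix-operation formulation; the function name and NumPy dependency are illustrative:

```python
import numpy as np

def chessboard_dt(binary):
    """Two-pass sequential chessboard (L-infinity) distance transform.
    binary: 2D array, nonzero = foreground. Returns per-pixel chessboard
    distance to the nearest foreground pixel."""
    H, W = binary.shape
    INF = H + W  # larger than any possible chessboard distance
    d = np.where(binary != 0, 0, INF).astype(int)
    # Forward pass: propagate from top-left neighbors.
    for i in range(H):
        for j in range(W):
            if i > 0:
                d[i, j] = min(d[i, j], d[i-1, j] + 1)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i-1, j-1] + 1)
                if j < W - 1:
                    d[i, j] = min(d[i, j], d[i-1, j+1] + 1)
            if j > 0:
                d[i, j] = min(d[i, j], d[i, j-1] + 1)
    # Backward pass: propagate from bottom-right neighbors.
    for i in range(H - 1, -1, -1):
        for j in range(W - 1, -1, -1):
            if i < H - 1:
                d[i, j] = min(d[i, j], d[i+1, j] + 1)
                if j > 0:
                    d[i, j] = min(d[i, j], d[i+1, j-1] + 1)
                if j < W - 1:
                    d[i, j] = min(d[i, j], d[i+1, j+1] + 1)
            if j < W - 1:
                d[i, j] = min(d[i, j], d[i, j+1] + 1)
    return d
```

With unit weights on all eight neighbors, the two raster scans compute the chessboard metric exactly, matching the O(N^2) sequential bound.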

2.
Image segmentation is an important and fundamental task for image and vision understanding. This paper describes a linear programming (LP) approach for segmenting a color image into multiple regions. Compared with the recently proposed semi-definite programming (SDP)-based approach, our approach has a simpler mathematical formulation and a far lower computational complexity. In particular, to segment an image of M × N pixels into k classes, our method requires only O((MNk)^m) complexity — a sharp contrast to the O((MNk)^{2n}) complexity of the SDP method, where m and n are the polynomial orders of the corresponding LP solver and SDP solver, respectively (in general m ≤ n). Such a significant reduction in computation readily enables our algorithm to process color images of reasonable sizes. For example, while the existing SDP relaxation algorithm can only segment a toy-size image of, e.g., 10 × 10 to 30 × 30 pixels in hours, our algorithm can process larger color images of, say, 100 × 100 to 500 × 500 pixels in much shorter time.

3.
Content-based image retrieval is an active area of research. Many approaches have been proposed to retrieve images based on matching of some features derived from the image content. Color is an important feature of image content. The problem with many traditional matching-based retrieval methods is that the search time for retrieving similar images for a given query image increases linearly with the size of the image database. We present an efficient color indexing scheme for similarity-based retrieval whose search time increases only logarithmically with the database size. In our approach, the color features are extracted automatically using a color clustering algorithm. The cluster centroids are then used as representatives of the images in the 3-dimensional color space and are indexed using a spatial indexing method based on the R-tree. The worst-case search time complexity of this approach is O(n_q · log(N · n_avg)), where N is the number of images in the database, and n_q and n_avg are the number of colors in the query image and the average number of colors per image in the database, respectively. We present experimental results for the proposed approach on two databases consisting of 337 trademark images and 200 flag images.
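As an illustration of the indexing idea — not the paper's exact pipeline — the sketch below clusters pixel colors and scores a query against stored centroids by brute force; in the actual scheme that lookup is served by an R-tree over all centroids, which is what yields the logarithmic search time. All names are hypothetical, and scikit-learn's KMeans stands in for the paper's clustering algorithm:

```python
import numpy as np
from sklearn.cluster import KMeans

def color_features(image_rgb, n_colors=4):
    """Cluster an image's pixels in 3-D RGB space; the cluster
    centroids serve as a compact color signature of the image."""
    pixels = image_rgb.reshape(-1, 3).astype(float)
    km = KMeans(n_clusters=n_colors, n_init=10).fit(pixels)
    return km.cluster_centers_  # shape (n_colors, 3)

def match_score(query_centroids, db_centroids):
    """For each query color, distance to the closest database color.
    Lower is more similar. An R-tree over db_centroids would replace
    this O(n) scan with a logarithmic spatial search."""
    d = np.linalg.norm(query_centroids[:, None, :] -
                       db_centroids[None, :, :], axis=2)
    return d.min(axis=1).mean()
```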

4.
欧阳显斌, 邵利平, 乐志芳. 《软件学报》, 2017, 28(12): 3306-3346.
Traditional meaningful image sharing schemes suffer from weak authentication, little or no ability to repair tampered regions after an attack, and low visual quality of the stego cover images. To address these problems, this paper proposes a finite-field image sharing scheme that combines unequal backup with dual-authentication self-repair. The scheme consists of a sharing stage and a recovery stage. In the sharing stage, a 1-level discrete wavelet transform is first applied to the secret image; the LL subband is scrambled with a key, and the bits of each scrambled LL coefficient are grouped by bit significance and backed up unequally to construct a backup image of the same size as the secret image. Each pixel of the secret image and the backup image, together with its corresponding 7K-13 bits of authentication information, is then (K,N)-shared over the finite field GF(2^7); the resulting 7 share bits, plus 1 authentication bit generated with the key, are embedded into 2×2 blocks of the N cover images using an optimized LSB method. Finally, the key itself is (K,N)-shared; the MD5 digest of each subkey is published with a trusted third party, and the subkeys and stego covers are distributed to the participants. In the recovery stage, the authenticity of the subkeys provided by the participants is verified first, and the key is reconstructed from the subkeys that pass verification. Next, the share information and the 1 authentication bit embedded in the 2×2 blocks of the distributed covers undergo a first round of authentication using the key; from the shares that pass, the GF(2^7) sharing polynomial is rebuilt to extract each pixel of the secret and backup images along with its 7K-13 authentication bits, which undergo a second round of verification to construct preliminary secret, backup, and authentication maps. The LL subband of the secret image is then reconstructed from the backup and authentication maps, and inverse scrambling followed by an inverse discrete wavelet transform yields a repair reference image. Finally, every secret pixel that fails authentication is repaired either by polynomial interpolation or by substitution from the repair reference image, depending on the authentication status of its neighboring pixels. Theory and experiments show that, compared with existing methods, the proposed scheme offers stronger authentication, fully exploits dual authentication and the correlation between neighboring pixels of natural images to improve post-attack repair, and produces stego covers of higher visual quality.
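A sketch of the (K,N) threshold-sharing primitive over GF(2^7) that the scheme builds on; the backup, authentication, and LSB-embedding pipeline is omitted, and the irreducible polynomial x^7 + x + 1 is an illustrative assumption (the abstract does not specify one):

```python
import secrets

IRRED = 0b10000011  # x^7 + x + 1, an irreducible polynomial for GF(2^7)

def gf128_mul(a, b):
    """Carry-less multiplication in GF(2^7), reducing by IRRED whenever
    the degree reaches 7, so results stay below 128 (7 bits)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x80:
            a ^= IRRED
        b >>= 1
    return r

def share_value(s, k, n):
    """(k, n)-threshold sharing of a 7-bit value s: evaluate a random
    degree-(k-1) polynomial with constant term s at x = 1..n (n <= 127).
    Each share fits in 7 bits, matching the scheme's embedding budget."""
    coeffs = [s] + [secrets.randbelow(128) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y, xp = 0, 1
        for c in coeffs:
            y ^= gf128_mul(c, xp)   # addition in GF(2^7) is XOR
            xp = gf128_mul(xp, x)
        shares.append((x, y))
    return shares
```

Any k of the n shares determine the polynomial (e.g., by Lagrange interpolation over GF(2^7)) and hence recover s; fewer than k reveal nothing.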

5.
The image template matching problem is one of the fundamental problems in image processing, pattern recognition, and computer vision, with many practical applications; it is a useful operation for filtering, edge detection, image registration, and object detection [13]. In this paper, for an N × N digital image and an M × M template, we first design two parallel image template matching algorithms on a 3-D processor array with a reconfigurable bus system, running in O[(M^2/p^2) log log M] and O[(M^2/p^2) + (M/p) log log p] time using p^2 N^2 processors, with each processor containing O(1) and O(M/p) restricted memory, respectively, for 1 ≤ p ≤ M ≤ N. By increasing the number of processors, these two algorithms can be run in O(M^2/p^2) time using p^2 M^{1/c} N^2 and p^{2+1/c} N^2 processors, respectively, where c is a constant and c ≥ 1. Furthermore, O(1) time can be obtained from both algorithms by using M^{2+1/c} N^2 processors. These results improve the best known bounds and achieve optimal speed-up in their time and processor complexities.

6.
Objective: Variational optical flow computation based on Markov random fields (MRF) is relatively robust, but its computational efficiency is very low. Belief propagation (BP) is an efficient global optimization algorithm for MRFs. This paper proposes an MRF variational optical flow model and implements it with a parallel BP method, greatly improving computational efficiency. Method: The data term of the proposed MRF model adopts the basic optical flow constraint equation derived by Horn et al. from the brightness constancy assumption, adjusted with a non-quadratic penalty function to smooth boundary effects. To achieve efficient parallel processing on the CUDA platform, an optimized BP-based parallel MRF optical flow method is proposed. When minimizing the MRF flow energy with belief propagation, it uses a 4-layer 3D network structure for parallel computation, each layer corresponding to message passing in one direction of the 4-neighborhood MRF model; within each layer, multiple threads are assigned to each pixel and the outgoing messages are computed by parallel reduction, greatly lowering the per-thread load and substantially raising efficiency. Results: On the rotating-sphere image sequence, computation is accelerated by a factor of 314. On the rotating-sphere, Yosemite, and RubberWhale sequences, compared against the Horn algorithm, the Weickert algorithm, Hossen's parallel Lucas algorithm, and Grauer-Gray's parallel MRF algorithm, the proposed method achieves the lowest average endpoint error (AEE): 0.13, 0.55, and 0.34, respectively. Conclusion: A new MRF optical flow model is proposed and implemented with optimized parallel computation on the CUDA platform. Experiments show that the parallel method greatly improves efficiency while preserving accuracy. Its memory demand is very large, however, which limits the number of sampling points on high-resolution images and makes large displacements difficult to compute.
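For context, a minimal non-parallel reference for the data term: the classical Horn–Schunck scheme built on the brightness-constancy constraint, with quadratic penalties (the model above replaces the quadratic penalty with a non-quadratic one and minimizes by parallel BP on CUDA). Kernels follow the standard Horn–Schunck formulation; names and parameters are illustrative:

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(I1, I2, alpha=10.0, iters=200):
    """Minimal Horn-Schunck optical flow between frames I1, I2."""
    I1 = I1.astype(float); I2 = I2.astype(float)
    kx = np.array([[-1, 1], [-1, 1]]) * 0.25
    ky = np.array([[-1, -1], [1, 1]]) * 0.25
    # Brightness-constancy derivatives Ix, Iy, It averaged over frames.
    Ix = convolve(I1, kx) + convolve(I2, kx)
    Iy = convolve(I1, ky) + convolve(I2, ky)
    It = convolve(I2 - I1, np.ones((2, 2)) * 0.25)
    avg = np.array([[1, 2, 1], [2, 0, 2], [1, 2, 1]]) / 12.0
    u = np.zeros_like(I1); v = np.zeros_like(I1)
    for _ in range(iters):
        u_bar = convolve(u, avg); v_bar = convolve(v, avg)
        # Jacobi update derived from the Euler-Lagrange equations.
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha**2 + Ix**2 + Iy**2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v
```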

7.
A new, fast template-matching method using the singular value decomposition (SVD) is presented. The approach is a two-stage algorithm that increases the speed of the matching process. In the first stage, the reference image is orthogonally separated by the SVD, and low-cost pseudo-correlation values are calculated. This reduces the number of computations to 2NL instead of N^2 L^2, where L × L is the size of the reference image and N × N is the original image size. In the second stage, a small group of values near the maximum pseudo-correlation is selected, and the true correlation for the small number of pixels in this group is then computed precisely. Experimental and analytic results are presented to show how the computational complexity is greatly improved.
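A sketch of the first stage, under the assumption that a rank-1 SVD truncation of the template is used: correlation with s0·u0·v0^T separates into two 1-D passes (one along rows, one along columns), which is where the 2NL-per-pixel cost comes from; the exact second-stage re-check of top candidates is only indicated:

```python
import numpy as np
from scipy.ndimage import correlate1d

def svd_pseudo_correlation(image, template):
    """Stage 1: replace the L x L template by its rank-1 SVD
    approximation so the 2-D correlation becomes two 1-D correlations."""
    U, S, Vt = np.linalg.svd(template.astype(float))
    u0, v0 = U[:, 0], Vt[0, :]
    rows = correlate1d(image.astype(float), v0, axis=1)  # pass along rows
    pseudo = S[0] * correlate1d(rows, u0, axis=0)        # pass along columns
    # Stage 2 (not shown): take the few locations with the largest
    # pseudo-correlation and evaluate the full correlation there exactly.
    return pseudo
```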

8.
Embedding of confidential data in the least significant bit of an image is still an attractive method of steganography. Utilizing the full capacity of cover images by embedding one bit of data per pixel, using methods such as LSB flipping or LSB matching, usually decreases the security, making the algorithm vulnerable to steganalytic attacks. In this paper, we propose a novel efficient high payload ±1 steganographic method based on a special two variable binary function. This function uses the information of the least two significant bit planes of the cover image for the embedding and extraction purposes. Embedding efficiency, defined as the number of embeddable bits per one change in the cover medium, is a good criterion for concurrent evaluation of the capacity and security. Rather than randomly selecting +1 or −1, we achieve higher embedding efficiencies by choosing the correct modification component. In the generalized form of the proposed method, n bits of data are embedded in n pixels of the cover medium, by causing one unit change in only one third of these pixels. Analytical and experimental results demonstrated that the proposed method provides higher embedding efficiency than the other LSB embedding schemes. The proposed method is also applicable to other digital cover media.
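For contrast, a sketch of plain LSB matching — one of the baseline schemes the proposed ±1 method outperforms — where the +1/−1 choice is random rather than informed by the two lowest bit planes; names are illustrative:

```python
import numpy as np

def lsb_matching_embed(pixels, bits, rng=None):
    """Plain LSB matching: if a pixel's LSB already equals the message
    bit, leave it; otherwise randomly add or subtract 1. Expected
    embedding efficiency is 2 bits per change; the paper's method does
    better by choosing the modification component deliberately."""
    rng = rng or np.random.default_rng()
    out = np.asarray(pixels, dtype=int).copy()
    for i, b in enumerate(bits):
        if out[i] & 1 != b:
            step = rng.choice([-1, 1])
            # stay inside the valid grey-value range
            if out[i] == 0:
                step = 1
            elif out[i] == 255:
                step = -1
            out[i] += step
    return out.astype(np.uint8)
```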

9.
In the present study, an adaptation of the Markov random field (MRF) segmentation model by means of the stationary wavelet transform (SWT), applied to complementary DNA (cDNA) microarray images, is proposed (SMRF). A 3-level decomposition of the initial microarray image was performed, followed by a soft-thresholding filtering technique; the inverse transform then produced a denoised image. In addition, using the amplitudes of the filtered wavelet horizontal and vertical images at each level, three different magnitude images were formed. These images were combined with the denoised one to create the proposed SMRF segmentation model. For numerical evaluation of the segmentation accuracy, the segmentation matching factor (SMF), the coefficient of determination (r^2), and the concordance correlation (p_c) were calculated on simulated images. The SMRF performance was also contrasted with the fuzzy C-means (FCM), Gaussian mixture models (GMM), fuzzy GMM (FGMM), and conventional MRF techniques. Indirect accuracy was tested on experimental images by means of the mean absolute error (MAE) and the coefficient of variation (CV), where SPOT and SCANALYZE software results were also compared. On the simulated images, SMRF attained the best SMF, r^2, and p_c scores (92.66%, 0.923, and 0.88, respectively); on the experimental images it scored an MAE of 497 and a CV of 0.88. The results support the performance superiority of the SMRF algorithm in segmenting cDNA images.
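A minimal sketch of the wavelet front end as described — 3-level SWT, soft thresholding of the detail subbands, inverse transform, plus per-level H/V magnitudes — using PyWavelets; the threshold value and wavelet choice are illustrative assumptions:

```python
import numpy as np
import pywt

def swt_denoise(img, wavelet='db2', level=3, thr=10.0):
    """3-level stationary wavelet decomposition with soft thresholding
    of the detail subbands, then inverse SWT (the 'denoised' image).
    Image sides must be divisible by 2**level."""
    coeffs = pywt.swt2(img.astype(float), wavelet, level=level)
    filtered = [(cA, tuple(pywt.threshold(d, thr, mode='soft')
                           for d in (cH, cV, cD)))
                for cA, (cH, cV, cD) in coeffs]
    return pywt.iswt2(filtered, wavelet), filtered

def subband_magnitudes(filtered):
    """One magnitude image per level from the filtered H and V subbands,
    as used to build the SMRF feature set."""
    return [np.hypot(cH, cV) for _, (cH, cV, _) in filtered]
```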

10.
Given an alphabet Σ = {1, 2, …, |Σ|}, a text string T ∈ Σ^n, and a pattern string P ∈ Σ^m, for each i = 1, 2, …, n − m + 1 define L_p(i) as the p-norm distance when the pattern is aligned below the text starting at position i. The problem of pattern matching with L_p distance is to compute L_p(i) for every i = 1, 2, …, n − m + 1. We discuss the problem for p = 1, 2, ∞. First, in the case of L_1 matching (pattern matching with an L_1 distance), we show a reduction of the string matching with mismatches problem to the L_1 matching problem, and we present an algorithm that approximates the L_1 matching up to a factor of 1 + ε in O((1/ε^2) n log m log |Σ|) time. Then, the L_2 matching problem (pattern matching with an L_2 distance) is solved with a simple O(n log m) time algorithm. Finally, we provide an algorithm that approximates the L_∞ matching up to a factor of 1 + ε in O((1/ε) n log m log |Σ|) time. We also generalize the problem of string matching with mismatches to weighted mismatches and present an O(n log^4 m) algorithm that approximates the results up to a factor of O(log m) in the case that the weight function is a metric.
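The L_2 case is simple enough to sketch exactly: expanding ||t_window − p||^2 reduces the problem to one sliding-window sum and one cross-correlation, both computable with FFTs. A single full-length FFT is used here for brevity; the O(n log m) bound comes from processing the text in O(m)-size blocks:

```python
import numpy as np
from scipy.signal import fftconvolve

def l2_matching(text, pattern):
    """Exact L2 pattern matching via the expansion
    ||t[i..i+m-1] - p||^2 = (sliding sum of t^2) - 2*(t correlated with p)
                            + sum(p^2).
    Returns L2(i) for every alignment i."""
    t = np.asarray(text, dtype=float)
    p = np.asarray(pattern, dtype=float)
    m = len(p)
    window_t2 = fftconvolve(t * t, np.ones(m), mode='valid')
    cross = fftconvolve(t, p[::-1], mode='valid')  # correlation of t with p
    d2 = window_t2 - 2.0 * cross + np.dot(p, p)
    return np.sqrt(np.maximum(d2, 0.0))  # clamp tiny negative FFT noise
```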

11.
In this paper we present processor × time optimal parallel algorithms for term matching and anti-unification of terms represented as trees. Term matching is the special case of unification in which one of the terms is restricted to contain no variables; it has wide applicability to logic programming, term rewriting systems, and symbolic pattern matching. Anti-unification is the dual problem of unification, in which one computes the most specific generalization of two terms; it has applications in inductive inference and theorem proving. Our algorithms run in O(log^2 N) time using N/log^2 N processors on a shared-memory model of computation that prohibits simultaneous reads or writes (EREW PRAM). These are the first polylogarithmic-time EREW algorithms with a processor × time product of the same order as that of their sequential counterparts, thereby permitting optimal speed-ups using any number of processors up to N/log^2 N. We also use the techniques developed in the paper to provide an N/log N-processor, O(log N)-time algorithm for a shared-memory model that allows both simultaneous reads and simultaneous writes (CRCW PRAM).
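For reference, the sequential counterpart that the parallel algorithms speed up: matching a pattern containing variables against a ground term. The nested-tuple representation and names are illustrative:

```python
def term_match(pattern, term, subst=None):
    """Sequential term matching. Variables are strings starting with '?';
    compound terms are tuples ('f', arg1, arg2, ...); anything else is a
    constant. Returns a substitution dict, or None if matching fails."""
    if subst is None:
        subst = {}
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in subst:  # variable already bound: check consistency
            return subst if subst[pattern] == term else None
        subst[pattern] = term
        return subst
    if isinstance(pattern, tuple) and isinstance(term, tuple):
        if len(pattern) != len(term) or pattern[0] != term[0]:
            return None
        for p_arg, t_arg in zip(pattern[1:], term[1:]):
            if term_match(p_arg, t_arg, subst) is None:
                return None
        return subst
    return subst if pattern == term else None

# f(?x, g(a, ?x)) against f(b, g(a, b))  ->  {'?x': 'b'}
print(term_match(('f', '?x', ('g', 'a', '?x')),
                 ('f', 'b', ('g', 'a', 'b'))))
```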

12.
陶涛, 张云. 《中国图象图形学报》, 2015, 20(12): 1639-1651.
Objective: The internationally popular SIFT algorithm and its variants detect and describe feature points using a difference-of-Gaussians (DoG) function, which loses high-frequency image information; as a result, matching performance drops sharply as image deformation increases. To remedy this defect, this study proposes a new scale-invariant feature detection and description algorithm in log-polar coordinates that incurs no loss of image information. Method: The proposed algorithm first converts the circular patch centered at each sample point in Cartesian coordinates into a rectangular patch in log-polar coordinates, and performs feature detection and descriptor extraction on this rectangular patch. A fixed-width window slides along the log(r) axis of the sample point's log-polar radial-gradient image to decide whether the point is a feature point and to compute its characteristic scale; the descriptor is extracted at the characteristic scale with the locally maximal window response. The descriptor, a 192-dimensional vector built from the magnitudes and angles of the grey-level gradients of the log-polar rectangular patch, is invariant to scale, rotation, and illumination changes. Results: Using the INRIA datasets and the matching-performance metrics proposed by Mikolajczyk, the proposed algorithm is compared with SIFT and SURF. It shows advantages over both in the number of correspondences, repeatability, number of correct matches, and matching rate. Conclusion: An image matching algorithm based on log-polar coordinates is proposed, converting the circular patch around each sample point into a log-polar rectangular patch. During feature detection this avoids the high-frequency information loss caused by SIFT's DoG function, and during descriptor extraction the log-polar representation effectively reduces image variation, thereby improving matching performance.
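A sketch of the coordinate conversion at the heart of the method: resampling a circular neighborhood onto a log-polar rectangle, where scaling about the center becomes a shift along the log(r) axis and rotation becomes a cyclic shift along the angle axis. Sampling parameters are illustrative; the sliding detection window and 192-D descriptor are omitted:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar_patch(img, center, r_min=1.0, r_max=32.0,
                    n_rho=32, n_theta=64):
    """Resample the circular neighborhood of `center` (row, col) onto a
    rectangular log-polar grid: rows = log-radius, columns = angle."""
    rho = np.exp(np.linspace(np.log(r_min), np.log(r_max), n_rho))
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(rho, theta, indexing='ij')
    rows = center[0] + R * np.sin(T)
    cols = center[1] + R * np.cos(T)
    # Bilinear interpolation at the mapped coordinates.
    return map_coordinates(img.astype(float), [rows, cols],
                           order=1, mode='nearest')
```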

13.
An Improved Subnet Allocation Algorithm for Mesh Multiprocessors
张艳, 孙世新, 彭文钦. 《软件学报》, 2001, 12(8): 1250-1257.
The subnet allocation problem is to identify and allocate a free subnetwork of processor nodes that satisfies a requested size. First, a new free-subnet search algorithm for mesh structures is proposed, with O(N_a^2 · log_2 N_a) time complexity, which improves on the existing O(N_a^3)-time search algorithms. This algorithm is then used to improve the RF (reservation factor) algorithm, a best-fit subnet allocation algorithm based on reservation factors, obtaining … (the abstract is truncated in the source).

14.
In this paper we study a first-order primal-dual algorithm for non-smooth convex optimization problems with known saddle-point structure. We prove convergence to a saddle-point with rate O(1/N) in finite dimensions for the complete class of problems. We further show accelerations of the proposed algorithm that yield improved rates on problems with some degree of smoothness. In particular, we can achieve O(1/N^2) convergence on problems where the primal or the dual objective is uniformly convex, and we can show linear convergence, i.e. O(ω^N) for some ω ∈ (0,1), on smooth problems. The wide applicability of the proposed algorithm is demonstrated on several imaging problems such as image denoising, image deconvolution, image inpainting, motion estimation, and multi-label image segmentation.
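A compact sketch of the basic O(1/N) iteration applied to one of the listed problems, ROF/TV denoising: min_u ||∇u||_1 + (λ/2)||u − f||^2, with the dual variable projected onto the pointwise unit ball each step. Step sizes satisfy τσ·L^2 ≤ 1 with L^2 = 8 for this gradient discretization; λ and the iteration count are illustrative:

```python
import numpy as np

def grad(u):
    """Forward-difference gradient with Neumann boundary conditions."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]
    dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]
    dy[-1, :] = -py[-2, :]
    return dx + dy

def tv_denoise_pd(f, lam=8.0, n_iter=200):
    """First-order primal-dual iteration for ROF denoising."""
    tau = sigma = 1.0 / np.sqrt(8.0)   # tau*sigma*L^2 = 1
    u = f.copy(); u_bar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(u_bar)
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.hypot(px, py))   # project onto |p| <= 1
        px /= norm; py /= norm
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        u_bar = 2.0 * u - u_old                    # extrapolation step
    return u
```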

15.
Interval-valued fuzzy mathematical morphology is an extension of classical fuzzy mathematical morphology, which is in turn one of the extensions of binary morphology to greyscale morphology. The uncertainty that may exist concerning the grey value of a pixel, due to technical limitations or bad recording circumstances, is taken into account by mapping each pixel in the image domain onto an interval to which the pixel's grey value is expected to belong, instead of onto one specific value. Such an image representation corresponds to the representation of an interval-valued fuzzy set, and thus techniques from interval-valued fuzzy set theory can be applied to extend greyscale mathematical morphology. In this paper, we study the decomposition of the interval-valued fuzzy morphological operators. We investigate in which cases the [α1,α2]-cuts of these operators can be written or approximated in terms of the corresponding binary operators. Such conversion into binary operators reduces computation time and is also theoretically interesting, since it provides a link between interval-valued fuzzy and binary morphology.
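A small sketch of the decomposition idea in the easiest case — a flat structuring element, where dilation is increasing, so the lower- and upper-bound images can be processed independently; scipy's grey_dilation stands in for the morphological operator, and the function name is illustrative:

```python
import numpy as np
from scipy.ndimage import grey_dilation

def interval_dilation(lower, upper, size=(3, 3)):
    """Interval-valued dilation with a flat structuring element: since
    flat dilation (a windowed supremum) is increasing, applying it to
    the lower- and upper-bound images separately yields exactly the
    bounds of the dilated interval-valued image."""
    assert np.all(lower <= upper), "each pixel interval must be ordered"
    return grey_dilation(lower, size=size), grey_dilation(upper, size=size)
```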

16.
In this paper we study the external memory planar point enclosure problem: given N axis-parallel rectangles in the plane, construct a data structure on disk (an index) such that all K rectangles containing a query point can be reported I/O-efficiently. This problem has important applications in, e.g., spatial and temporal databases, and is dual to the important and well-studied orthogonal range searching problem. Surprisingly, despite the fact that the problem can be solved optimally in internal memory with linear space and O(log N + K) query time, we show that one cannot construct a linear-sized external memory point enclosure data structure that answers a query in O(log_B N + K/B) I/Os, where B is the disk block size. To obtain this bound, Ω(N/B^{1−ε}) disk blocks are needed for some constant ε > 0. With linear space, the best obtainable query bound is O(log_2 N + K/B) if a linear output term O(K/B) is desired. To show this we prove a general lower bound on the tradeoff between the size of the data structure and its query cost. We also develop a family of structures with matching space and query bounds.

17.
Given a compressed image in the restricted quadtree and shading format, this paper presents a fast algorithm for computing the 2D discrete cosine transform (DCT) of the compressed grey image directly, without decompressing it. The proposed DCT algorithm takes O(K^2 log K + N^2) time, where the decompressed image is of size N × N and K denotes the number of nodes in the restricted quadtree. Since commonly K < N, the proposed algorithm is faster than the indirect method of first decompressing the compressed image and then applying a conventional DCT algorithm to the decompressed image, which takes O(N^2 log N) time.

18.
Li Jie, Pan Yi, Shen Hong. The Journal of Supercomputing, 2003, 24(3): 251-258.
Topological sort of an acyclic graph has many applications, such as job scheduling and network analysis. Due to its importance, it has been tackled on many models. Dekel et al. [3] proposed an algorithm that solves the problem in O(log^2 N) time on hypercube or shuffle-exchange networks with O(N^3) processors. Chaudhuri [2] gave an O(log N) algorithm using O(N^3) processors on a CRCW PRAM model. On the LARPBS (linear array with a reconfigurable pipelined bus system) model, Li et al. [5] showed that the problem for a weighted directed graph with N vertices can be solved in O(log N) time using N^3 processors. In this paper, a more efficient topological sort algorithm is proposed on the same LARPBS model: we show that the problem can be solved in O(log N) time using N^3/log N processors. The algorithm has better time and processor complexities than the best algorithm on the hypercube, and the same time complexity but better processor complexity than the best algorithm on the CRCW PRAM model.
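For scale, the sequential O(V + E) baseline (Kahn's algorithm) that these parallel algorithms accelerate; a minimal sketch:

```python
from collections import deque

def topological_sort(n, edges):
    """Kahn's algorithm: repeatedly output a vertex of in-degree zero
    and remove its outgoing edges. Runs in O(V + E)."""
    adj = [[] for _ in range(n)]
    indeg = [0] * n
    for u, v in edges:          # directed edge u -> v
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(i for i in range(n) if indeg[i] == 0)
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for v in adj[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    if len(order) != n:
        raise ValueError("graph has a cycle")
    return order
```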

19.
A Fast Efficient Parallel Hough Transform Algorithm on LARPBS
Chen Ling, Chen Hongjian, Pan Yi, Chen Yixin. The Journal of Supercomputing, 2004, 29(2): 185-195.
A parallel algorithm for the Hough transform on a linear array with a reconfigurable pipelined bus system (LARPBS) is presented. Suppose the number of θ-values to be considered is m; for an image with n × n pixels, the algorithm completes the Hough transform in O(1) time using mn^2 processors, achieving optimal speed and efficiency. We also illustrate how to partition the data and run the algorithm on a LARPBS with fewer than mn^2 processors, and hence show that the algorithm is highly scalable.
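A sequential sketch of the computation being parallelized: a Hough accumulation over m θ-values, one vote per edge pixel per θ, in a (ρ, θ) accumulator. Names and the discretization are illustrative:

```python
import numpy as np

def hough_lines(edge_img, m=180):
    """Sequential Hough transform for lines: for each edge pixel (x, y)
    and each theta, vote for rho = x*cos(theta) + y*sin(theta)."""
    ys, xs = np.nonzero(edge_img)
    thetas = np.linspace(0.0, np.pi, m, endpoint=False)
    diag = int(np.ceil(np.hypot(*edge_img.shape)))
    acc = np.zeros((2 * diag + 1, m), dtype=np.int64)
    cos_t, sin_t = np.cos(thetas), np.sin(thetas)
    for x, y in zip(xs, ys):
        rho = np.round(x * cos_t + y * sin_t).astype(int) + diag
        acc[rho, np.arange(m)] += 1   # one vote per theta value
    return acc, thetas
```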

20.
We consider the problem of indexing a set of objects moving in d-dimensional space along linear trajectories. A simple external-memory indexing scheme is proposed to efficiently answer general range queries. Examples of the queries that can be answered by the proposed method: report all moving objects that will (i) pass between two given points within a specified time interval, or (ii) come within a given distance of some or all of a given set of other moving objects. Our scheme is based on mapping the objects to a dual space, where queries about moving objects are transformed into polyhedral queries concerning their speeds and initial locations. We then present a simple method for answering such polyhedral queries, based on partitioning the space into disjoint regions and using a B+-tree to index the points in each region. By appropriately selecting the boundaries of each region, we guarantee an average search time that matches a known lower bound for the problem. Specifically, for fixed d, if the coordinates of a given set of N points are statistically independent, the proposed technique answers polyhedral queries, on average, in O((N/B)^{1−1/d} · (log_B N)^{1/d} + K/B) I/Os using O(N/B) space, where B is the block size and K is the number of reported points. Our approach is novel in that, while it provides a theoretical upper bound on the average query time, it avoids complicated data structures, making it an effective candidate for practical applications. The proposed index is also dynamic, in the sense that it allows object insertion and deletion at an amortized update cost of O(log_B N) I/Os. Experimental results are presented to show the superiority of the proposed index over other methods based on R-trees.
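A 1-D sketch of the dual mapping, with a brute-force stand-in for the polyhedral query (the paper instead partitions dual space and indexes each region with a B+-tree); names are illustrative:

```python
import numpy as np

def to_dual(x0, v):
    """Map a 1-D moving object x(t) = x0 + v*t to the dual point (v, x0)."""
    return np.array([v, x0])

def range_query(duals, a, b, t1, t2):
    """Report objects whose position lies in [a, b] at some t in [t1, t2].
    In dual space the constraint a <= x0 + v*t <= b is linear in (v, x0),
    so the query region is a polygon; here it is tested directly.
    duals: array of shape (n, 2) with columns (v, x0)."""
    v, x0 = duals[:, 0], duals[:, 1]
    # Linear motion: the positions swept over [t1, t2] form the interval
    # between the two endpoint positions.
    lo = np.minimum(x0 + v * t1, x0 + v * t2)
    hi = np.maximum(x0 + v * t1, x0 + v * t2)
    return np.nonzero((hi >= a) & (lo <= b))[0]

# Example: two objects; only the first passes through [5, 6] during [0, 1].
duals = np.array([to_dual(x0=4.0, v=2.0), to_dual(x0=0.0, v=1.0)])
print(range_query(duals, a=5.0, b=6.0, t1=0.0, t2=1.0))  # -> [0]
```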

