Similar Documents
20 similar documents found (search time: 15 ms)
1.
The distance transform (DT) is an image computation tool which can be used to extract information about the shape and the position of the foreground pixels relative to each other. It converts a binary image into a grey-level image, where each pixel has a value corresponding to the distance to the nearest foreground pixel. The time complexity of computing the distance transform depends entirely on the distance metric used; in particular, the more exact the distance transform, the worse the execution time. Nowadays, quite often thousands of images must be processed in a limited time, and it is practically impossible for a sequential computer to compute the distance transform in real time. To provide efficient distance transform computation, it is highly desirable to develop a parallel algorithm for this operation. In this paper, based on the diagonal propagation approach, we first provide an O(N²) time sequential algorithm to compute the chessboard distance transform (CDT) of an N×N image, which is a DT using the chessboard distance metric. Based on the proposed sequential algorithm, the CDT of a 2D binary image array of size N×N can be computed in O(log N) time on the EREW PRAM model using O(N²/log N) processors, in O(log log N) time on the CRCW PRAM model using O(N²/log log N) processors, and in O(log N) time on the hypercube computer using O(N²/log N) processors. Following the mapping proposed by Lee and Horng, an algorithm for the medial axis transform is also efficiently derived. The medial axis transform of a 2D binary image array of size N×N can be computed in O(log N) time on the EREW PRAM model using O(N²/log N) processors, in O(log log N) time on the CRCW PRAM model using O(N²/log log N) processors, and in O(log N) time on the hypercube computer using O(N²/log N) processors. The proposed parallel algorithms are composed of a set of prefix operations. In each prefix operation phase, only the increase (add-one) operation and the minimum operation are employed, so the algorithms are especially efficient in practical applications.
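For orientation, the chessboard (Chebyshev) distance between two pixels is max(|Δrow|, |Δcol|). The sketch below is a minimal sequential O(N²) reference implementation of the CDT using the classical two-pass chamfer scan; it only illustrates the transform itself and is not the diagonal-propagation or parallel prefix algorithm described in the abstract.

```python
# Minimal sequential chessboard distance transform (CDT): two raster scans with
# unit-weight 8-neighbour masks give, for every pixel, the exact Chebyshev
# distance to the nearest foreground (1) pixel in O(N^2) time. Illustrative
# reference only; not the paper's diagonal-propagation/parallel algorithm.
INF = float("inf")

def chessboard_dt(image):
    """image: 2-D list of 0/1 values; returns the distance map as a 2-D list."""
    rows, cols = len(image), len(image[0])
    d = [[0 if image[i][j] else INF for j in range(cols)] for i in range(rows)]
    # Forward pass: propagate from neighbours already visited in raster order.
    for i in range(rows):
        for j in range(cols):
            for di, dj in ((-1, -1), (-1, 0), (-1, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    d[i][j] = min(d[i][j], d[ni][nj] + 1)
    # Backward pass: propagate from the remaining four neighbours.
    for i in reversed(range(rows)):
        for j in reversed(range(cols)):
            for di, dj in ((1, -1), (1, 0), (1, 1), (0, 1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < rows and 0 <= nj < cols:
                    d[i][j] = min(d[i][j], d[ni][nj] + 1)
    return d

print(chessboard_dt([[0, 0, 0],
                     [0, 1, 0],
                     [0, 0, 0]]))   # [[1, 1, 1], [1, 0, 1], [1, 1, 1]]
```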

2.
Two studies explored the relationship between blogging and psychological empowerment among women. First, a survey (N = 340) revealed that personal journaling empowers users by inducing a strong sense of community, whereas filter blogging does so by enhancing their sense of agency. Various user motivations were also shown to predict psychological empowerment. Next, a 2 (type of blog) × 2 (comments) × 2 (site visits) factorial experiment (N = 214) found that two site metrics, the number of site visits and the number of comments, affect psychological empowerment through distinct mechanisms: the former through the sense of agency and the latter through the sense of community. These metrics are differentially motivating for bloggers depending on the type of blog maintained: filter or personal.

3.
F-metrics are metrics based on projective sets. In this paper, a construction of optimal codes for a special F-metric associated with a generalized Vandermonde matrix is given. Encoding and fast decoding algorithms are described. A public-key cryptosystem is considered as an example of a possible application of the constructed codes.

4.
Performance Analysis of the IEEE 802.16 Contention Resolution Scheme
The IEEE 802.16 standard currently recommends a contention resolution scheme based on the truncated binary exponential backoff algorithm. This paper analyzes the differences between this scheme and the IEEE 802.11 contention mechanism, and gives methods for computing performance metrics such as the transmission opportunity utilization u, the bandwidth request delay d, and the bandwidth request loss rate pd. Through performance simulation, the effects of parameters such as the initial backoff window W, the number of subscriber stations n, and the number of transmission opportunities per unit-time frame Nto on these metrics are discussed, and general strategies by which the base station can adjust the performance parameters are derived. These strategies provide guidance for the base station's allocation of uplink bandwidth resources.
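For background on the contention mechanism analyzed above, the sketch below shows truncated binary exponential backoff in its generic form: after every failed bandwidth request the contention window doubles, truncated at a maximum, and the station defers a uniformly random number of transmission opportunities. The window sizes and retry limit are illustrative assumptions, not the parameter values studied in the paper.

```python
# Generic truncated binary exponential backoff (illustrative parameters only):
# the contention window starts at W, doubles after each failed request, and is
# truncated at W_MAX; each retry defers a uniform random number of
# transmission opportunities drawn from the current window.
import random

W, W_MAX, MAX_RETRIES = 8, 64, 16   # hypothetical values, not from the paper

def backoff_schedule(seed=None):
    """Return the sequence of backoff counts a station draws while retrying."""
    rng = random.Random(seed)
    window, counts = W, []
    for _ in range(MAX_RETRIES):
        counts.append(rng.randrange(window))   # defer this many opportunities
        window = min(2 * window, W_MAX)        # double, truncated at W_MAX
    return counts

print(backoff_schedule(seed=1))
```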

5.
The (n + 1)-dimensional Einstein-Gauss-Bonnet (EGB) model is considered. For diagonal cosmological metrics, the equations of motion are written as a set of Lagrange equations with an effective Lagrangian containing two "minisuperspace" metrics on ℝ^n: a 2-metric of pseudo-Euclidean signature and a Finslerian 4-metric proportional to the n-dimensional Berwald-Moor 4-metric. For the case of the "pure" Gauss-Bonnet model, two exact solutions are presented, with power-law and exponential dependences of the scale factors (w.r.t. the synchronous time variable). (The power-law solution was considered earlier by N. Deruelle, A. Toporensky, P. Tretyakov, and S. Pavluchenko.) In the case of EGB cosmology, it is shown that for any nontrivial solution with an exponential dependence of the scale factors, a_i(τ) = A_i exp(v_i τ), there are no more than three different numbers among v_1, …, v_n.

6.
The purpose of this study was to identify fault-prone functions that are likely to contain faults in a given software system. Five metrics were used: Di, an internal design metric which incorporates factors related to a function's internal structure; De, an external design metric which focuses on a function's external relationships to the rest of the software system; D(G), a composite design metric which is a linear combination of Di and De; and the union and intersection of Di, De, and D(G). Since the system being considered was already developed, a very important aspect of our study was to extract the design information directly from the source code rather than from the corresponding design documentation, which may not exist or, if it does exist, may be incomplete, difficult to understand, or not updated. To make the analysis more accurate and efficient, a metric analysis tool (χMetrics) was implemented. We conducted experiments using χMetrics on part of a distributed software system, written in C, with a client–server architecture, and identified a small percentage of its functions as good candidates for fault-proneness. Files containing these functions were then validated for fault-proneness against the real defect data collected between a recent major release and its subsequent release. The results indicate that our metrics are good indicators of fault-prone functions. Two extra experiments were also conducted to show that function size cannot replace any of our metrics, and that where function size was factored out our metrics performed better than the normalized metrics. The important benefit of our metrics is that they help project managers determine where additional testing effort should be spent and possibly which fault-prone functions should be assigned to more experienced programmers if modifications are required. Copyright © 2000 John Wiley & Sons, Ltd.
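The abstract does not give the coefficients of the composite metric D(G), so the sketch below only illustrates the general workflow with hypothetical weights and thresholds: combine an internal and an external design metric per function, rank the functions by the composite score, and flag a small top fraction as fault-prone candidates.

```python
# Hypothetical illustration of ranking functions by a composite design metric
# D(G) = a*Di + b*De. The weights, threshold, and sample data are invented;
# the paper's actual coefficients are not published in this abstract.
def flag_fault_prone(functions, a=0.5, b=0.5, top_fraction=0.1):
    """functions: list of (name, Di, De); returns the top fraction ranked by D(G)."""
    scored = [(name, a * di + b * de) for name, di, de in functions]
    scored.sort(key=lambda item: item[1], reverse=True)
    k = max(1, int(top_fraction * len(scored)))
    return scored[:k]

funcs = [("parse_msg", 12.0, 7.0), ("send_ack", 3.0, 2.0), ("route_pkt", 9.0, 15.0)]
print(flag_fault_prone(funcs))   # [('route_pkt', 12.0)]
```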

7.
A Boolean vector minimization problem for a threshold function is considered. A formula is obtained for the limit level of perturbations of the partial criteria parameters in the l_1 metric that preserve the strict effectiveness of the solution.

8.
Despite the desire to utilize proactive safety metrics, research results indicate imbalances can arise between economic performance metrics and safety metrics. Imbalances can arise, first, because there are fewer proactive metrics available relative to the data an organization can compile to build reactive metrics. Second, there are a number of factors that lead organizations to discount proactive metrics when they conflict with shorter‐term and more definitive reactive metrics. This paper introduces the Q4‐Balance Framework to analyse economy‐safety trade‐offs. Plotting the sets of metrics used by an organization in the four‐quadrant visualization can be used to identify misalignments, overlap and false diversity. It results in a visualization of the set of metrics an organization uses and where these conflict or reinforce each other. The framework also provides a way to assess an organization's safety energy as a kind of analysis of an organization's capability to be proactive about safety.

9.
Objective: Objective evaluation, an important research area in image fusion, is a powerful tool for assessing the performance of fusion algorithms. Dozens of evaluation metrics of different types already exist, yet application areas, including visible and infrared image fusion, still lack a unified basis for selecting them. To facilitate comparison of different fusion algorithms, a general analysis method for objective evaluation metrics is proposed and applied to visible and infrared image fusion. Methods: The objective evaluation metrics in visible-infrared benchmark datasets are divided into two classes: metrics based only on the fused image, and metrics based on both the source images and the fused image. Kendall's correlation coefficient is used to analyze the correlation between fusion metrics, and clustering yields metric groups; Borda count ranking is used to obtain an overall ranking of the algorithms, and the correlation between single-metric rankings and the overall ranking is analyzed to obtain a set of highly consistent metrics; the coefficient of variation is used to analyze how much each metric's mean fluctuates across algorithms, selecting metrics that adequately reflect differences between algorithms; combining the correlation, consistency, and coefficient-of-variation analyses yields a representative set of recommended metrics. Results: On two groups of source images, 13 color visible-infrared pairs and 8 grayscale visible-infrared pairs, the objective evaluation data of different fusion algorithms are analyzed statistically, yielding a recommended metric set for visible and infrared image fusion (standard deviation and edge preservation) as an important reference for evaluating fusion algorithm performance. Compared with existing methods, the experiments cover 20 fusion algorithms and 13 objective evaluation metrics and do not rely on subjective evaluation results. Conclusion...
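As a rough illustration of the Methods step, the sketch below computes pairwise Kendall correlations between metrics and a Borda-count overall ranking of algorithms on made-up scores; the clustering and coefficient-of-variation steps are omitted, and none of the data comes from the paper's benchmark.

```python
# Illustrative sketch of two steps from the Methods section: Kendall correlation
# between evaluation metrics, and a Borda-count overall ranking of the fusion
# algorithms. The score matrix is made-up data, not the paper's benchmark.
import numpy as np
from scipy.stats import kendalltau, rankdata

# rows = fusion algorithms, columns = objective metrics (higher = better)
scores = np.array([[0.82, 0.55, 7.1],
                   [0.91, 0.60, 6.4],
                   [0.78, 0.72, 7.9],
                   [0.88, 0.58, 8.3]])

# Pairwise Kendall correlation between metrics (basis for grouping similar metrics).
n_metrics = scores.shape[1]
tau = np.array([[kendalltau(scores[:, i], scores[:, j])[0]
                 for j in range(n_metrics)] for i in range(n_metrics)])
print(np.round(tau, 2))

# Borda count: each metric ranks the algorithms; summed ranks give the overall order.
borda = rankdata(scores, axis=0).sum(axis=1)   # higher = better overall
print(np.argsort(-borda))                       # algorithm indices, best first
```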

10.
11.
Software quality metrics have potential for helping to assure the quality of software on large projects such as the Space Shuttle flight software. It is feasible to validate metrics for controlling and predicting software quality during design by validating them against a quality factor. Quality factors, like reliability, are of more interest to customers than metrics, like complexity. However, quality factors cannot be collected until late in a project. Therefore the need arises to validate metrics, which developers can collect early in a project, against a quality factor. We investigate the feasibility of validating metrics for controlling and predicting quality on the Space Shuttle. The key to the approach is the use of validated metrics for early identification and resolution of quality problems.

12.
We present geometric methods for uniformly discretizing the continuous N-qubit Hilbert space H_N. When considered as the vertices of a geometrical figure, the resulting states form the equivalent of a Platonic solid. The discretization technique inherently describes a class of π/2 rotations that connect neighboring states in the set, i.e., that leave the geometrical figures invariant. These rotations are shown to generate the Clifford group, a general group of discrete transformations on N qubits. Discretizing H_N allows us to define its digital quantum information content, and we show that this information content grows as N². While we believe the discrete sets are interesting because they allow extra-classical behavior, such as quantum entanglement and quantum parallelism, to be explored while circumventing the continuity of Hilbert space, we also show how they may be a useful tool for problems in traditional quantum computation. We describe in detail the discrete sets for one and two qubits. PACS: 03.67.Lx; 03.67.pp; 03.67.-a; 03.67.Mn.

13.
Y. M. El-Fattah 《Calcolo》1974,11(2):269-287
The paper considers linear distributed systems whose state belongs to a separable Banach space, namely X. The state x of the system is assumed to be adequately approximated by its projection x_N on a certain finite-dimensional subspace X_N (N given a priori), which is a member of a set of denumerable dense subspaces in X. A definition is given for the system's approximate observability. An algorithm is provided for computing an estimate x̂_N for x_N. An example of a diffusion system with a homogeneous (Dirichlet) boundary condition and a one-point (noise-free) measurement transducer is considered. The problem of numerical instability is studied. The effect of a number of important factors on the estimation problem is investigated. Extensive computational experience is presented.

14.
This paper focuses on BSR (Broadcasting with Selective Reduction) implementations of algorithms solving basic convex polygon problems. More precisely, constant-time solutions using a linear number of processors, max(N, M) (where N and M are the numbers of edges of the two polygons considered), are described for computing the maximum distance between two convex polygons, finding the critical support lines of two convex polygons, and computing the diameter, the width of a convex polygon, and the vector sum of two convex polygons. These solutions are based on the merging-slopes technique using one-criterion BSR operations.
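As a point of reference for one of the problems listed above, the diameter of a convex polygon is simply the maximum pairwise distance between its vertices; the naive O(N²) sequential sketch below is only a baseline, since the paper's contribution is the constant-time BSR formulation.

```python
# Naive sequential baseline for the convex-polygon diameter: the maximum
# distance between any two vertices, computed in O(N^2). The paper's BSR
# solution instead runs in constant time on max(N, M) processors.
from math import dist

def polygon_diameter(vertices):
    """vertices: list of (x, y) tuples of a convex polygon; returns its diameter."""
    return max(dist(p, q) for i, p in enumerate(vertices) for q in vertices[i + 1:])

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
print(polygon_diameter(square))   # 1.4142... (the diagonal)
```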

15.
We review spherically symmetric solutions with a horizon in two models: (i) with scalar fields and fields of forms, and (ii) with a multi-component anisotropic fluid. The metrics of the solutions are defined on a manifold that contains a product of n − 1 Ricci-flat "internal" spaces. The solutions are governed by functions H_s obeying nonlinear differential equations with certain boundary conditions. Simulation of black-brane solutions is considered, and the Hawking temperature is calculated. For the fluid solution, the post-Newtonian parameters β and Γ corresponding to the 4-dimensional section of the metric are found.

16.
On the basis of an analogy with the nonminimal SU(2)-symmetric Wu-Yang monopole with a regular metric, a solution describing a nonminimal U(1)-symmetric Dirac monopole is obtained. To take into account the curvature coupling of the gravitational and electromagnetic fields, we reconstruct the effective metrics of two types, the so-called associated and optical metrics. The optical metrics explicitly show that the curvature-induced birefringence effect takes place in the vicinity of a nonminimal Dirac monopole; these optical metrics are studied analytically and numerically. Talk given at the Russian School-Seminar on Modern Problems of Gravitation and Cosmology, Kazan-Yal'chik, 10–15 September 2007.

17.
An N-periodic discrete-time system in the z-domain, given by an N-periodic collection of rational matrices, is considered. Doubly coprime decompositions of these matrices are studied. For such decompositions, the block-ordered concept is used. Further, the eight matrices in the generalized Bezout identity can be chosen so that all of them are block-ordered. A relation among these matrices at consecutive times is established.

18.
Fast computation of sample entropy and approximate entropy in biomedicine
Both sample entropy and approximate entropy are measurements of complexity. The two methods have received a great deal of attention in the last few years, and have been successfully verified and applied to biomedical applications and many others. However, the algorithms proposed in the literature require O(N²) execution time, which is not fast enough for online applications and for applications with long data sets. To accelerate computation, the authors of the present paper have developed a new algorithm that reduces the computational time to O(N^(3/2)) using O(N) storage. As biomedical data are often measured with integer-type data, the computation time can be further reduced to O(N) using O(N) storage. The execution times of the experimental results with ECG, EEG, RR, and DNA signals show a significant improvement of more than 100 times when compared with the conventional O(N²) method for N = 80,000 (N = length of the signal). Furthermore, an adaptive version of the new algorithm has been developed to speed up the computation for short data length. Experimental results show an improvement of more than 10 times when compared with the conventional method for N > 4000.
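For context, the conventional O(N²) computation of sample entropy that the paper accelerates is sketched below; the embedding dimension m and tolerance r are the usual illustrative defaults, not values taken from the paper.

```python
# Conventional O(N^2) sample entropy, the baseline that the paper accelerates.
# SampEn(m, r) = -ln(A / B), where B counts template pairs of length m within
# Chebyshev distance r, and A counts the same pairs extended to length m + 1.
import math

def sample_entropy(x, m=2, r=0.2):
    """Naive SampEn of sequence x with embedding dimension m and tolerance r."""
    n = len(x)

    def count_matches(length):
        templates = [x[i:i + length] for i in range(n - m)]
        count = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    count += 1
        return count

    A, B = count_matches(m + 1), count_matches(m)
    return -math.log(A / B) if A > 0 and B > 0 else float("inf")

print(sample_entropy([1.0, 1.2, 0.8, 1.1, 0.9, 1.3, 1.0, 0.8, 1.2, 0.9]))
```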

19.
《International Journal of Computer Mathematics》2012,89(3-4):293-305
In this paper, we present cyclic reduction and FARC algorithms for solving spline collocation systems. The costs of these algorithms are O(N² log N) and O(N² log log N), respectively, for an N × N grid.

20.
Two significant two-dimensional decomposition rules for the Discrete Fourier Transform of a set of N data (N = 2^p) are considered. It is shown that the two-dimensional processing performed according to such rules involves exactly the same operations on the same data as the one-dimensional processing. This means that, if the same rule is iteratively applied with arbitrary dimensions, the same fast algorithm is always obtained. This work has been sponsored by the Convention between Selenia S. p. A. and Consiglio Nazionale delle Ricerche, Italy.
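The abstract does not spell out its two decomposition rules, so the sketch below only illustrates the general idea with the classical row-column (Cooley-Tukey) factorization: a length-N DFT with N = N1·N2 is computed as column DFTs, twiddle-factor multiplication, and row DFTs over a 2-D arrangement of the data, reproducing the 1-D result exactly.

```python
# Illustrative row-column decomposition of a 1-D DFT of N = N1*N2 points
# (a generic Cooley-Tukey factorization, not necessarily either of the two
# rules studied in the paper): shorter DFTs over a 2-D arrangement of the
# data plus twiddle factors reproduce the one-dimensional transform exactly.
import numpy as np

def dft_row_column(x, N1, N2):
    """Length-N DFT of x (N = N1*N2) computed via a 2-D decomposition."""
    assert x.size == N1 * N2
    A = x.reshape(N1, N2)                            # A[n1, n2] = x[N2*n1 + n2]
    B = np.fft.fft(A, axis=0)                        # length-N1 DFTs down the columns
    k1 = np.arange(N1)[:, None]
    n2 = np.arange(N2)[None, :]
    C = B * np.exp(-2j * np.pi * k1 * n2 / x.size)   # twiddle factors
    D = np.fft.fft(C, axis=1)                        # length-N2 DFTs along the rows
    return D.flatten(order="F")                      # X[k1 + N1*k2] = D[k1, k2]

rng = np.random.default_rng(0)
x = rng.standard_normal(16) + 1j * rng.standard_normal(16)    # N = 2^4
print(np.allclose(dft_row_column(x, 4, 4), np.fft.fft(x)))    # True
```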
