20 similar documents found; search took 31 ms.
1.
Two arrays of numbers sorted in nondecreasing order are given: an array A of size n and an array B of size m, where n < m. It is required to determine, for every element of A, the smallest element of B (if one exists) that is larger than or equal to it. It is shown how to solve this problem on the EREW PRAM (exclusive-read exclusive-write parallel random-access machine) in O(log m log n / log log m) time using n processors. The solution is then extended to the case in which fewer than n processors are available. This yields an EREW PRAM algorithm for the problem whose cost is O(n log m), which is O(m) for n ⩽ m / log m. It is shown how the solution obtained leads to an improved parallel merging algorithm.
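The sequential analogue of the per-element query is a binary search for the successor; a minimal sketch (the function name is mine, not from the paper):

```python
import bisect

def successors(A, B):
    """For each a in A (sorted), return the smallest b in B (sorted)
    with b >= a, or None if no such element exists.
    Sequential analogue of the EREW PRAM problem above."""
    result = []
    for a in A:
        i = bisect.bisect_left(B, a)  # first index with B[i] >= a
        result.append(B[i] if i < len(B) else None)
    return result

print(successors([2, 5, 9], [1, 3, 4, 7, 8]))  # [3, 7, None]
```

This runs in O(n log m) sequentially; the paper's contribution is parallelizing the queries without concurrent reads.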
2.
A distributed knot detection algorithm for general graphs is presented. The knot detection algorithm uses at most O(n log n + m) messages and O(m + n log n) bits of memory to detect all knot nodes in the network (where n is the number of nodes and m is the number of links). This compares with the O(n²) messages needed by the best previously published algorithm. The knot detection algorithm makes use of efficient cycle detection and clustering techniques. Various applications of the knot detection algorithm are presented. In particular, its importance to deadlock detection in store-and-forward communication networks and in transaction systems is demonstrated.
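As a point of reference for what is being detected, a centralized brute-force check (not the message-efficient distributed algorithm of the paper) can be sketched in Python: a node lies in a knot exactly when it has outgoing edges and every node reachable from it can reach it back.

```python
def reachable(adj, s):
    """Set of nodes reachable from s by iterative DFS."""
    seen, stack = {s}, [s]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

def knot_nodes(adj):
    """Nodes lying in a knot: v is in a knot iff v has outgoing edges
    and every node reachable from v can reach v back.
    adj: {node: [successors]}, with every node present as a key.
    Brute force O(n*m) reachability; the distributed algorithm above
    achieves this with O(n log n + m) messages instead."""
    reach = {v: reachable(adj, v) for v in adj}
    return {v for v in adj
            if adj[v] and all(v in reach[w] for w in reach[v])}
```

For example, in the graph 0 → 1 → 2 → 3 → 1, nodes 1, 2, 3 form a knot (node 0 does not, since 1 cannot reach back to 0).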
3.
Wegner L.M., Teuhola J.I. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1989, 15(7): 917-925
Heapsort is an internal sorting method which sorts an array of n records in place in O(n log n) time. Heapsort is generally considered unsuitable for external random-access sorting. By replacing key comparisons with merge operations on pages, it is shown how to obtain an in-place external sort which requires O(m log m) page references, where m is the number of pages which the file occupies. The new sort method (called Hillsort) has several useful properties for advanced database management systems. Not only does Hillsort operate in place, i.e., no additional external storage space is required assuming that the page table can be kept in core memory, but accesses to adjacent pages in the heap require one seek only if the pages are physically contiguous. The authors define the Hillsort model of computation for external random-access sorting, develop the complete algorithm, and then prove it correct. The model is next refined and a buffer management concept is introduced so as to reduce the number of merge operations and page references, and make the method competitive with a basic balanced two-way external merge. Performance characteristics are noted, such as the worst-case upper bound, which can be carried over from Heapsort, and the average-case behavior, deduced from experimental findings. It is shown that the refined version of the algorithm is on a par with the external merge sort.
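For contrast with the external variant, the in-core Heapsort that Hillsort generalizes can be sketched as follows (the standard textbook algorithm, not the paper's page-based method):

```python
def heapsort(a):
    """In-place heapsort: O(n log n) comparisons and swaps.
    Hillsort replaces the key comparisons below with merge
    operations on pages to obtain an external in-place sort."""
    def sift_down(start, end):
        root = start
        while 2 * root + 1 <= end:
            child = 2 * root + 1
            if child + 1 <= end and a[child] < a[child + 1]:
                child += 1  # pick the larger child
            if a[root] < a[child]:
                a[root], a[child] = a[child], a[root]
                root = child
            else:
                return
    n = len(a)
    for start in range(n // 2 - 1, -1, -1):  # build max-heap
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):          # repeatedly extract max
        a[0], a[end] = a[end], a[0]
        sift_down(0, end - 1)
    return a

print(heapsort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```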
4.
The theorem states that every block square matrix satisfies its own m-D (m-dimensional, m ⩾ 1) matrix characteristic polynomial. The exact statement and a simple proof of this theorem are given. The theorem refers to a matrix A subdivided into m blocks, and hence having dimension at least m. The conclusion is that every square matrix A of dimension M satisfies several m-D characteristic matrix polynomials with degrees N_1, . . ., N_m such that N_1 + . . . + N_m ⩽ M.
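The m = 1 case is the classical Cayley-Hamilton theorem. A quick numeric illustration for a 2 × 2 matrix, whose characteristic polynomial is λ² - tr(A)λ + det(A):

```python
def mat_mul(A, B):
    """2 x 2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
tr = A[0][0] + A[1][1]                        # trace = 5
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]   # determinant = -2
A2 = mat_mul(A, A)
I = [[1, 0], [0, 1]]
# Cayley-Hamilton: A^2 - tr(A)*A + det(A)*I = 0
Z = [[A2[i][j] - tr * A[i][j] + det * I[i][j] for j in range(2)]
     for i in range(2)]
print(Z)  # [[0, 0], [0, 0]]
```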
5.
An efficiently computable metric for comparing polygonal shapes
Arkin E.M., Chew L.P., Huttenlocher D.P., Kedem K., Mitchell J.S.B. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1991, 13(3): 209-216
A method for comparing polygons that is a metric, invariant under translation, rotation, and change of scale, reasonably easy to compute, and intuitive is presented. The method is based on the L2 distance between the turning functions of the two polygons. It works for both convex and nonconvex polygons and runs in time O(mn log mn), where m is the number of vertices in one polygon and n is the number of vertices in the other. Some examples showing that the method produces answers that are intuitively reasonable are presented.
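A sketch of the turning-function construction (cumulative turning angle as a function of normalized arc length) for a polygon given counterclockwise; the full metric additionally minimizes the L2 distance over rotations and choices of reference point, which this sketch omits:

```python
import math

def turning_function(poly):
    """Turning function of a simple polygon given as a CCW list of
    (x, y) vertices: a list of (arc_length_fraction, cumulative_angle)
    steps, one per edge. Sketch only; the paper's metric compares two
    such step functions in the L2 sense."""
    n = len(poly)
    edges = [(poly[(i + 1) % n][0] - poly[i][0],
              poly[(i + 1) % n][1] - poly[i][1]) for i in range(n)]
    total = sum(math.hypot(dx, dy) for dx, dy in edges)
    steps, s = [], 0.0
    theta = math.atan2(edges[0][1], edges[0][0])  # heading of first edge
    for i, (dx, dy) in enumerate(edges):
        steps.append((s / total, theta))
        s += math.hypot(dx, dy)
        nx, ny = edges[(i + 1) % n]
        # signed exterior angle between this edge and the next
        theta += math.atan2(dx * ny - dy * nx, dx * nx + dy * ny)
    return steps
```

For the unit square traversed counterclockwise, the function starts at 0 and rises by π/2 at each quarter of the perimeter.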
6.
Venkatesh S.S., Psaltis D. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992, 14(1): 87-91
The authors investigate the computing capabilities of formal McCulloch-Pitts neurons when errors are permitted in decisions. They assume that m decisions are to be made on a randomly specified set of m points in n-space and that an error tolerance of ϵm decision errors is allowed, with 0 ⩽ ϵ < 1/2. The authors are interested in how large an m can be selected such that the neuron makes reliable decisions within the prescribed error tolerance. Formal results for two protocols for error tolerance, a random error protocol and an exhaustive error protocol, are obtained. The results demonstrate that a formal neuron has a computational capacity that is linear in n and that this rate of capacity growth persists even when errors are tolerated in the decisions.
7.
Parallel implementations of the extended square-root covariance filter (ESRCF) for tracking applications are developed. The decoupling technique and special properties used in the tracking Kalman filter (KF) are employed to reduce computational requirements and to increase parallelism. Applying the decoupling technique to the ESRCF results in the time and measurement updates of m decoupled (n/m)-dimensional matrices instead of one coupled n-dimensional matrix, where m denotes the tracking dimension and n denotes the number of state elements. The updates of the m decoupled matrices are found to require approximately a factor of m fewer processing elements and clock cycles than the update of one coupled matrix. The transformation of the Kalman gain that accounts for the decoupling is found to be straightforward to implement. The sparse nature of the measurement matrix and the sparse, banded nature of the transition matrix are exploited to simplify matrix multiplications.
8.
Kumar V., Place J., Yang G.-C. IEEE Transactions on Knowledge and Data Engineering, 1991, 3(3): 380-384
A nonsymmetric deadlock-free mutual exclusion algorithm for computer networks is presented. The algorithm requires O(m) messages to synchronize m nodes in a lightly loaded system, and its performance approaches a constant k, dependent on m, as the workload increases. In a medium to heavily loaded system, it outperforms other proposed algorithms, and its performance is independent of network topology.
9.
A novel discrete relaxation architecture
The discrete relaxation algorithm (DRA) is a computational technique that enforces arc consistency (AC) in a constraint satisfaction problem (CSP). The original sequential AC-1 algorithm has O(n³m³) time complexity, and even the optimal sequential AC-4 algorithm is O(n²m²) for an n-object, m-label DRA problem. Sample problem runs show that these algorithms are all too slow for useful, real-time CSP applications. A parallel DRA5 algorithm that reaches a lower bound of O(nm) (where the number of processors is polynomial in the problem size) is given. A fine-grained, massively parallel hardware computer architecture has been designed for the DRA5 algorithm. For practical problems, many orders of magnitude of efficiency improvement can be achieved on such a hardware architecture.
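The naive sequential scheme carrying the O(n³m³) bound can be sketched as AC-1 in Python; the variable and constraint names here are illustrative, not from the paper:

```python
def ac1(domains, constraints):
    """AC-1: repeatedly revise every arc until no domain changes.
    domains: {var: set of labels}; constraints: {(x, y): predicate}.
    Naive scheme with the O(n^3 m^3) bound quoted above; AC-4 and
    the parallel DRA5 of the paper do substantially better."""
    def revise(x, y, ok):
        # drop labels of x with no support in the domain of y
        removed = {a for a in domains[x]
                   if not any(ok(a, b) for b in domains[y])}
        domains[x] -= removed
        return bool(removed)

    changed = True
    while changed:
        changed = False
        for (x, y), ok in constraints.items():
            if revise(x, y, ok):
                changed = True
    return domains
```

For instance, with domains {1, 2, 3} for both X and Y and the constraint X < Y, revision prunes X to {1, 2} and Y to {2, 3}.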
10.
Sankar P.V., Ferrari L.A. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1988, 10(2): 271-276
It is proved that the Toeplitz binary value matrix inversion associated with mth-order B-spline interpolation can be implemented using only 2(m + 1) additions. Pipelined architectures are developed for real-time B-spline interpolation based on simple running-average filters. It is shown that an ideal interpolating function, which is approximated by a truncated sinc function with M half cycles, can be implemented using B-splines with M + 2 multiplies. With insignificant loss of performance, the coefficients at the knots of the truncated sinc function can be approximated using coefficients which are powers of two. The resulting implementation requires only M + 4m + 6 additions. It is believed that the truncated sinc function approximated by zero-order B-spline functions actually achieves the best visual performance.
11.
A linear-time algorithm is developed to perform all odd (even) length circular shifts of data in an SIMD (single-instruction-stream, multiple-data-stream) hypercube. As an application, the algorithm is used to obtain an O(M² + log N) time and O(1) memory per processor algorithm to compute the two-dimensional convolution of an N × N image and an M × M template on an N²-processor SIMD hypercube. This improves the previous best complexity of O(M² log M + log N).
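A sequential sketch of the two primitives involved, a circular shift and the circular convolution it supports; the paper's contribution is performing the shifts inside a hypercube in linear time, which this serial version does not capture:

```python
def circular_shift(row, k):
    """Circularly shift a list right by k positions
    (the data-movement primitive discussed above)."""
    k %= len(row)
    return row[-k:] + row[:-k]

def convolve2d(image, template):
    """Two-dimensional circular convolution of an N x N image with an
    M x M template, written as sums over circularly shifted copies.
    O(N^2 M^2) work serially; O(M^2 + log N) time on the hypercube."""
    N, M = len(image), len(template)
    out = [[0] * N for _ in range(N)]
    for u in range(M):
        for v in range(M):
            for i in range(N):
                for j in range(N):
                    out[i][j] += (template[u][v]
                                  * image[(i - u) % N][(j - v) % N])
    return out
```

With the trivial 1 × 1 template [[1]], the convolution returns the image unchanged, which makes a convenient sanity check.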
12.
The problem of determining whether a polytope P of n × n matrices is D-stable, i.e., whether each point in P has all its eigenvalues in a given nonempty, open, convex, conjugate-symmetric subset D of the complex plane, is discussed. An approach which checks the D-stability of certain faces of P is used. In particular, for each D and n the smallest integer m such that D-stability of every m-dimensional face guarantees D-stability of P is determined. It is shown that, without further information describing the particular structure of a polytope, either (2n - 4)-dimensional or (2n - 2)-dimensional faces need to be checked for D-stability, depending on the structure of D. Thus more work needs to be done before a computationally tractable algorithm for checking D-stability can be devised.
13.
The authors compare the performance of two-valued and m-valued ICs for MOS and bipolar technologies according to speed, chip area, and power dissipation. They present the basic concepts and the basic types of m-valued circuits that make a serious comparison possible. They cover Post algebra and m-valued circuitry, m-valued current-mode circuits, and m-valued memory cells.
14.
The problem of achieving consensus in a distributed system is discussed. Systems are treated in which either or both of two types of faults may occur: dormant (essentially omission and timing faults) and arbitrary (exhibiting arbitrary behavior, commonly referred to as Byzantine). Previous results showed that any number of dormant faults may be tolerated when there are no arbitrary faults and that at most ⌊(n - 1)/3⌋ arbitrary faults may be tolerated when there are no dormant faults (n is the number of processors). A continuum is established between these results: an algorithm exists iff n > f_max + 2m_max and c > f_max + m_max (where c is the system connectivity), when faults are constrained so that there are at most f_max faults in total, at most m_max of which are arbitrary. An algorithm is given and compared to known algorithms. A method is given to establish virtual links so that the communications graph appears completely connected.
15.
The author considers an indirect adaptive unity feedback controller consisting of an mth-order SISO (single-input, single-output) compensator controlling an nth-order strictly proper SISO plant. It is shown that exponential convergence of the plant parameter estimation error, as well as asymptotic time invariance and global exponential stability of the controlled closed-loop system, can be guaranteed by requiring that the reference input have at least 2n + m points of spectral support.
16.
The number of distinct entries among the m^(2n) entries of the nth Kronecker power of an m × m matrix is derived. An algorithm to find the value of each entry of the Kronecker power is presented.
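Assuming generic (algebraically independent) matrix entries, each entry of the nth Kronecker power is a product of n of the m² scalars, so distinct entries correspond to multisets and number C(m² + n - 1, n). A brute-force check of that count (an illustration under the genericity assumption, not the paper's derivation):

```python
import itertools
import math

def distinct_entries(m, n):
    """Count distinct entries of the n-th Kronecker power of an m x m
    matrix with generic entries: each entry is a formal product of n
    of the m^2 scalars, so distinct entries = multisets of size n."""
    symbols = range(m * m)  # one symbol per entry of the base matrix
    products = {tuple(sorted(p))  # a sorted tuple encodes a multiset
                for p in itertools.product(symbols, repeat=n)}
    return len(products)

m, n = 2, 3
print(distinct_entries(m, n), math.comb(m * m + n - 1, n))  # 20 20
```

For a specific matrix with repeated or coincidentally equal entries, the true count can be smaller; this sketch counts formal products only.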
17.
A unified analytical model for computing the task-based dependability (TBD) of hypercube architectures is presented. A hypercube is deemed operational as long as a task can be executed on the system. The technique can compute both reliability and availability for two types of task requirements: an I-connected model and a subcube model. The I-connected TBD assumes that a connected group of at least I working nodes is required for task execution. The subcube TBD needs at least an m-cube in an n-cube, m ⩽ n, for task execution. The dependability is computed by multiplying the probability that x nodes (x ⩾ I or x ⩾ 2^m) are working in an n-cube at time t by the conditional probability that the hypercube can satisfy either of the two task requirements from x working nodes. Recursive models are proposed for the two types of task requirements to find the connection probability. The subcube requirement is extended to find multiple subcubes for analyzing multitask dependability. The analytical results are validated through extensive simulation.
18.
Algorithms are proposed for eigenvalue assignment (EVA) by constant as well as dynamic output feedback. The main algorithm is developed for single-input, multioutput systems, and the results are then extended to multiinput, multioutput systems. In computing the feedback, use is made of the fact that the closed-loop eigenvalues can almost always be assigned arbitrarily close to the desired locations in the complex plane, provided the system satisfies the condition m + p > n, where m, p, and n are, respectively, the number of inputs, outputs, and states of the system. The EVA problem is treated as a converse of the algebraic eigenvalue problem. The proposed algorithms are based on the implicitly shifted QR algorithm for solving the algebraic eigenvalue problem. The performance of the algorithms is illustrated by several numerical examples.
19.
The eigenstructure assignment problem with output feedback is studied for systems satisfying the condition p + m > n. The main tool used is the concept of (C, A, B)-invariance and two coupled Sylvester equations, the solution of which leads to the computation of an output stabilizing feedback. A computationally efficient algorithm for the solution of these two coupled equations, which leads to the computation of a desired output feedback, is presented.
20.
An estimator for estimating the parameters of a Markov random field X from inaccurate observations is introduced. The author first considers a Markov (Gibbs) random field X = {X_{i,j}} on a lattice L = {(i, j): i = 1, 2, . . ., n; j = 1, 2, . . ., m}. The marginal distributions of (X_{i,j}, X_{i+u,j+v}) (u, v = -1, 0, 1) are first estimated from an image. Then, random fields X* are simulated with the probability of (X*_{i,j} = a, X*_{i+u,j+v} = b) nearly equal to the estimate of P{X_{i,j} = a, X_{i+u,j+v} = b}. A simulation method similar to the Gibbs sampler is used. The parameters of the Markov random field model are estimated from the X*'s with the pseudolikelihood method.