11.
Gradient Based Image Motion Estimation Without Computing Gradients   (cited by 6: 0 self-citations, 6 by others)
Computing an optical flow field using the classical image motion constraint equation is difficult owing to the aperture problem and the need to compute the image intensity derivatives via numerical differentiation, an extremely unstable operation. We integrate the above constraint equation over a significant spatio-temporal support and use Gauss's divergence theorem to replace the volume integrals by surface integrals, thereby eliminating the intensity derivatives and numerical differentiation. We tackle the aperture problem by fitting an affine flow field model to a small space-time window. Using this affine model, our new integral motion constraint approach leads to a robust and accurate algorithm for computing the optical flow field. Extensive experimentation confirms that the algorithm is indeed robust and accurate.
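For orientation, the following is a minimal sketch of the integral form of the constraint, under the simplifying assumption that the flow (u, v) is constant over the integration volume Ω; the paper itself fits an affine model, which introduces additional terms.

% Brightness-constancy constraint: I_x u + I_y v + I_t = 0.
% With c = (u, v, 1)^T constant over a spatio-temporal volume \Omega,
% we have \nabla \cdot (I c) = c \cdot \nabla I, so by the divergence theorem
\int_{\Omega} \left( u\,I_x + v\,I_y + I_t \right) dV
  \;=\; \oint_{\partial\Omega} I \left( u\,n_x + v\,n_y + n_t \right) dS \;=\; 0 ,
% i.e. the constraint is expressed through surface integrals of the intensity I
% alone, with no image derivatives and no numerical differentiation.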
12.
If the values of a multivariate function f(x1, x2, ..., xN) are given at only a finite number of points in the space of its arguments, and an interpolation employing continuous functions is sought, standard multivariate routines may become cumbersome as the dimensionality grows. This urges us to develop a divide-and-conquer algorithm which approximates the function. The given multivariate data are partitioned into low-variate data. This approach is called High Dimensional Model Representation (HDMR). However, the method in its current form is not applicable to problems involving huge volumes of data. As the number of dimensions, and with it the number of corresponding nodes, increases, the volume of data grows beyond the capacity of any individual PC, since such volumes demand far more RAM. Another aspect is that the structure of the equalities used in the calculation of the HDMR terms varies with the dimension number of the problem, and the number of loops in the algorithm increases with the dimension number. In this work, as a first step, the equations used are modified in such a way that their structure does not depend on the dimension number. With the newly obtained equalities, the method becomes suitable for parallelization. Parallelization then resolves the RAM problem arising in problems with high volumes of data. Finally, the performance of the parallelized method is analyzed.
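For context, this is a sketch of the standard (ANOVA-type) HDMR expansion that gets truncated at low order; the particular grids, weights and truncation order used by the authors are not given in the abstract.

% HDMR expansion of a multivariate function into low-variate components
f(x_1,\dots,x_N) \;=\; f_0 \;+\; \sum_{i=1}^{N} f_i(x_i)
   \;+\; \sum_{1 \le i < j \le N} f_{ij}(x_i, x_j)
   \;+\; \cdots \;+\; f_{12\dots N}(x_1,\dots,x_N),
% with, under a product measure \mu,
f_0 = \int f(\mathbf{x}) \, d\mu(\mathbf{x}), \qquad
f_i(x_i) = \int f(\mathbf{x}) \, d\mu(\mathbf{x}_{\setminus i}) \;-\; f_0 .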
13.
14.
This paper discusses two general schemes for performing branch-and-bound (B&B) search in parallel. These schemes are applicable in principle to most of the problems which can be solved by B&B. The schemes are implemented for SSS*, a versatile algorithm having applications in game tree search, structural pattern analysis, and AND/OR graph search. The performance of parallel SSS* is studied in the context of AND/OR tree and game tree search. The paper concludes with comments on potential applications of these parallel implementations of SSS* in structural pattern analysis and game playing.
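As a rough illustration of one generic scheme of this kind (not the SSS* formulation studied in the paper), the sketch below splits the search tree into independent subproblems and lets worker threads explore them while sharing the incumbent value; the toy problem, the depth-2 split and the bound are assumptions made only for this example.

# Hypothetical sketch of parallel branch-and-bound: expand the root a few
# levels to obtain independent subproblems, then let workers search them
# depth-first while sharing the best value found so far.
import threading
from itertools import product

WEIGHTS = [8, 5, 4, 3, 1]
CAPACITY = 11

best = 0
best_lock = threading.Lock()

def dfs(i, cur):
    """Sequential B&B over items i.. with current load cur."""
    global best
    if cur > CAPACITY:
        return
    with best_lock:
        if cur > best:
            best = cur
    # bound: even taking everything left cannot beat the incumbent
    if cur + sum(WEIGHTS[i:]) <= best or i == len(WEIGHTS):
        return
    dfs(i + 1, cur + WEIGHTS[i])   # take item i
    dfs(i + 1, cur)                # skip item i

def worker(prefix):
    """Search the subtree fixed by the take/skip decisions in `prefix`."""
    cur = sum(w for w, take in zip(WEIGHTS, prefix) if take)
    dfs(len(prefix), cur)

# split the tree at depth 2, giving four independent subproblems
threads = [threading.Thread(target=worker, args=(p,)) for p in product([1, 0], repeat=2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(best)   # expected: 11 (items of weight 8 and 3)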
15.
In 1975 Fukunaga and Narendra proposed an efficient branch and bound algorithm for computing k-nearest neighbors. Their algorithm, after a hierarchical decomposition of the design set into disjoint subsets, employs two rules in order to eliminate the necessity of calculating many distances. This correspondence discusses the applicability of two additional rules for a further reduction of the number of distance computations. Experimental results using samples from bivariate Gaussian and uniform distributions suggest that the number of distance computations required by the modified algorithm is typically about one fourth of that required by the Fukunaga-Narendra algorithm.
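A minimal sketch of the two original Fukunaga-Narendra pruning rules for 1-nearest-neighbor search is given below; the two additional rules proposed in this correspondence are not reproduced, and the groups here are a flat random partition rather than the hierarchical clustered decomposition of the original algorithm.

import numpy as np

def build_groups(data, n_groups, rng):
    """Randomly partition the design set; store mean, radius and per-sample
    distances to the group mean (precomputed once, offline)."""
    idx = rng.permutation(len(data))
    groups = []
    for part in np.array_split(idx, n_groups):
        pts = data[part]
        mean = pts.mean(axis=0)
        d_to_mean = np.linalg.norm(pts - mean, axis=1)
        groups.append((pts, mean, d_to_mean.max(), d_to_mean))
    return groups

def nn_search(q, groups):
    best_d, best_x, n_dist = np.inf, None, 0
    for pts, mean, radius, d_to_mean in groups:
        d_qm = np.linalg.norm(q - mean)
        n_dist += 1
        # Rule 1: no point of the group can be closer than d(q, M) - radius
        if d_qm - radius >= best_d:
            continue
        for x, d_xm in zip(pts, d_to_mean):
            # Rule 2: triangle inequality using the precomputed d(x, M)
            if abs(d_qm - d_xm) >= best_d:
                continue
            d = np.linalg.norm(q - x)
            n_dist += 1
            if d < best_d:
                best_d, best_x = d, x
    return best_x, best_d, n_dist

rng = np.random.default_rng(0)
data = rng.normal(size=(2000, 2))
groups = build_groups(data, n_groups=20, rng=rng)
nearest, dist, n_computed = nn_search(rng.normal(size=2), groups)
print(n_computed, "distance computations out of", len(data))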
16.
Different ways of representing probabilistic relationships among the attributes of a domain are examined, and it is shown that the nature of the domain relationships used in a representation affects the types of reasoning objectives that can be achieved. Two well-known formalisms for representing the probabilistic relationships among the attributes of a domain are considered: the dependence tree formalism presented by C.K. Chow and C.N. Liu (1968) and the Bayesian networks methodology presented by J. Pearl (1986). An example is used to illustrate the nature of the relationships and the difference in the types of reasoning performed by these two representations. An abductive type of reasoning objective that requires use of the known qualitative relationships of the domain is demonstrated. A suitable way to represent such qualitative relationships along with the probabilistic knowledge is given, and how an explanation for a set of observed events may be constructed is discussed. An algorithm for learning the qualitative relationships from empirical data, based on the minimization of conditional entropy, is presented.
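For reference, the sketch below illustrates the Chow-Liu dependence-tree construction mentioned above: estimate pairwise mutual information from data and keep a maximum-weight spanning tree over the attributes. The authors' entropy-based learning of qualitative relationships is a separate step and is not reproduced here; the toy data are an assumption for the example.

import numpy as np

def mutual_information(x, y):
    """Mutual information (in nats) between two discrete attribute columns."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            p_ab = np.mean((x == a) & (y == b))
            p_a, p_b = np.mean(x == a), np.mean(y == b)
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def chow_liu_tree(data):
    """Return the edges of the maximum-MI spanning tree (Prim's algorithm)."""
    n_attr = data.shape[1]
    mi = np.array([[mutual_information(data[:, i], data[:, j])
                    for j in range(n_attr)] for i in range(n_attr)])
    in_tree, edges = {0}, []
    while len(in_tree) < n_attr:
        i, j = max(((i, j) for i in in_tree
                    for j in range(n_attr) if j not in in_tree),
                   key=lambda e: mi[e])
        edges.append((i, j))
        in_tree.add(j)
    return edges

# toy data: x2 is a noisy copy of x0, x1 is independent of both
rng = np.random.default_rng(1)
x0 = rng.integers(0, 2, 5000)
x1 = rng.integers(0, 2, 5000)
x2 = np.where(rng.random(5000) < 0.9, x0, 1 - x0)
print(chow_liu_tree(np.column_stack([x0, x1, x2])))  # expect an edge (0, 2)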
17.
Computing an optimal solution to the knapsack problem is known to be NP-hard. Consequently, fast parallel algorithms for finding such a solution without using an exponential number of processors appear unlikely. An attractive alternative is to compute an approximate solution to this problem rapidly using a polynomial number of processors. In this paper, we present an efficient parallel algorithm for finding approximate solutions to the 0–1 knapsack problem. Our algorithm takes an ε, 0 < ε < 1, as a parameter and computes a solution such that its deviation from the optimal solution is at most a fraction ε of the optimal value. For a problem instance having n items, this computation uses O(n^(5/2)/ε^(3/2)) processors and requires O(log^3 n + log^2 n · log(1/ε)) time. The upper bound on the processor requirement of our algorithm is established by reducing it to a problem on weighted bipartite graphs. This processor complexity is a significant improvement over that of other known parallel algorithms for this problem.
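The parallel algorithm itself is not reproduced here; as an illustration of what an ε-approximation guarantee means for 0–1 knapsack, the following is a standard sequential FPTAS sketch (profit scaling followed by dynamic programming over scaled profits), assuming each item on its own fits in the knapsack.

def knapsack_fptas(profits, weights, capacity, eps):
    """Return a value >= (1 - eps) * OPT for the 0-1 knapsack instance."""
    n = len(profits)
    k = eps * max(profits) / n            # scaling factor
    scaled = [int(p / k) for p in profits]
    max_profit = sum(scaled)
    INF = float("inf")
    # min_weight[p] = least weight achieving scaled profit exactly p
    min_weight = [0] + [INF] * max_profit
    for sp, w in zip(scaled, weights):
        for p in range(max_profit, sp - 1, -1):
            if min_weight[p - sp] + w < min_weight[p]:
                min_weight[p] = min_weight[p - sp] + w
    best = max(p for p in range(max_profit + 1) if min_weight[p] <= capacity)
    return best * k                       # value in (roughly) original units

# optimum here is 220 (items of profit 100 and 120); the FPTAS recovers it
print(knapsack_fptas([60, 100, 120], [10, 20, 30], capacity=50, eps=0.1))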
18.
19.
The aim of this study was to evaluate the accuracy of four different motion correction techniques in SPECT imaging of the heart. METHODS: We evaluated three automated techniques, the cross-correlation (CC) method, the diverging squares (DS) method and the two-dimensional fit method, and one manual shift (MS) technique, using a cardiac phantom. The phantom was filled with organ concentrations of 99mTc closely matching those seen in patient studies. The phantom was placed on a small sliding platform connected to a computer-controlled stepping motor. Linear, random, sinusoidal and bounce motions of magnitude up to 2 cm in the axial direction were simulated. Both single- and dual-detector 90 degrees acquisitions were performed using a dual 90 degrees detector system. Data were acquired over 180 degrees with 30 or 15 frames/detector (single-/dual-head) at 30 sec/frame in a 64x64 matrix. RESULTS: For the simulated single-detector system, the CC method failed to correct accurately for any of the simulated motions. The DS technique overestimated the magnitude of phantom motion, particularly for images acquired between 45 degrees left anterior oblique and 45 degrees left posterior oblique. The two-dimensional and MS techniques accurately corrected for motion. For the simulated dual 90 degrees detector system, the CC method only partially tracked random or bounce cardiac motion and failed to detect sinusoidal motion. The DS technique overestimated motion in the latter half of the study. Both the two-dimensional and MS techniques provided superior tracking, although no technique was able to accurately track the rapid changes in cardiac location simulated in the random motion study. Average absolute differences between the true and calculated positions of the heart on the single- and dual 90 degrees detector systems were 1.7 mm and 1.5 mm for the two-dimensional and MS techniques, respectively. The corresponding values for the DS and CC techniques were 5.7 and 8.9 mm, respectively. CONCLUSION: Of the four techniques evaluated, manual correction by an experienced technologist proved to be the most accurate, although results were not significantly different from those observed with the two-dimensional method. Both techniques accurately determined cardiac location and permitted artifact-free reconstruction of the simulated cardiac studies.
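As a toy illustration of the cross-correlation (CC) idea evaluated above (not the clinical implementation, and not covering the DS, two-dimensional fit or MS methods), the sketch below collapses each projection to an axial profile and picks the shift that maximizes the correlation with a reference profile; the synthetic projection and blob are assumptions for the example.

import numpy as np

def axial_shift(reference_proj, moved_proj, max_shift=10):
    """Estimate the integer axial (row) shift between two 2-D projections."""
    ref = reference_proj.sum(axis=1)          # axial profile of reference
    cur = moved_proj.sum(axis=1)              # axial profile of current frame
    ref = (ref - ref.mean()) / ref.std()
    cur = (cur - cur.mean()) / cur.std()
    shifts = range(-max_shift, max_shift + 1)
    scores = [np.dot(np.roll(cur, -s), ref) for s in shifts]
    return shifts[int(np.argmax(scores))]

# synthetic 64x64 projection with a bright blob, then shifted 3 pixels axially
y, x = np.mgrid[0:64, 0:64]
proj = np.exp(-((y - 30) ** 2 + (x - 32) ** 2) / 50.0)
moved = np.roll(proj, 3, axis=0)
print(axial_shift(proj, moved))   # expected: 3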
20.
Human plasma contains small peptide molecules, known as low molecular weight growth factors, that synergistically increase certain biological actions of the insulin-like growth factors. In the present work we isolated and characterized a hexapeptide with the sequence HWESAS. This purified peptide was strictly required for the sulfation activity of insulin-like growth factor-I on chick embryo pelvic cartilages and improved the mitogenic activity of both insulin-like growth factors. The effects of this hexapeptide were confirmed with the homologous synthetic peptide, which exhibited similar biological effects. Other synthetic peptides with structures derived from the hexapeptide were also shown to be active: the pentapeptide HWESA appeared more potent than the tripeptide HWE, which is about 170 to 200 times less active than the hexapeptide. The HWESAS sequence is found in only one human protein, C3f, a fragment of complement C3.