31.
We present a new, robust computational procedure for tracking fluorescent markers in time-lapse microscopy. The algorithm is optimized for finding the time-trajectory of single particles in very noisy dynamic (two- or three-dimensional) image sequences. It proceeds in three steps. First, the images are aligned to compensate for the movement of the biological structure under investigation. Second, the particle's signature is enhanced by applying a Mexican hat filter, which we show to be the optimal detector of a Gaussian-like spot in 1/ω² noise. Finally, the optimal trajectory of the particle is extracted by applying a dynamic programming optimization procedure. We have used this software, which is implemented as a Java plug-in for the public-domain ImageJ software, to track the movement of chromosomal loci within nuclei of budding yeast cells. Besides reducing trajectory analysis time by several hundredfold, we achieve high reproducibility and accuracy of tracking. The application of the method to yeast chromatin dynamics reveals different classes of constraints on the mobility of telomeres, reflecting differences in nuclear envelope association. The generic nature of the software allows application to a variety of similar biological imaging tasks that require the extraction and quantitation of a moving particle's trajectory.
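A rough sketch of the second and third steps is given below, assuming the NumPy/SciPy stack; the function names, the top-k candidate selection, and the jump penalty are illustrative choices, not the authors' ImageJ plug-in.

```python
# Sketch only: Mexican-hat (Laplacian-of-Gaussian) spot enhancement followed by
# Viterbi-style dynamic-programming linking of per-frame candidate detections.
import numpy as np
from scipy.ndimage import gaussian_laplace

def enhance_spots(frame, sigma=2.0):
    """Mexican-hat response: the negated Laplacian of Gaussian highlights bright blobs."""
    return -gaussian_laplace(frame.astype(float), sigma)

def top_candidates(response, k=20):
    """Keep the k strongest pixels as (row, col, score) candidates for one frame."""
    idx = np.argsort(response, axis=None)[-k:]
    rows, cols = np.unravel_index(idx, response.shape)
    return [(int(r), int(c), float(response[r, c])) for r, c in zip(rows, cols)]

def link_trajectory(cands, jump_penalty=0.5):
    """Dynamic programming: maximize summed spot scores minus a penalty on frame-to-frame jumps."""
    scores = [s for (_, _, s) in cands[0]]
    back = []
    for t in range(1, len(cands)):
        new_scores, new_back = [], []
        for (r, c, s) in cands[t]:
            trans = [scores[j] - jump_penalty * np.hypot(r - pr, c - pc)
                     for j, (pr, pc, _) in enumerate(cands[t - 1])]
            j_best = int(np.argmax(trans))
            new_scores.append(trans[j_best] + s)
            new_back.append(j_best)
        scores = new_scores
        back.append(new_back)
    j = int(np.argmax(scores))
    path = [cands[-1][j][:2]]
    for t in range(len(cands) - 2, -1, -1):
        j = back[t][j]
        path.append(cands[t][j][:2])
    return path[::-1]  # one (row, col) position per frame
```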
32.
The quantitative assessment of cardiac motion is fundamental for evaluating ventricular malfunction. We present a new optical-flow-based method for estimating heart motion from two-dimensional echocardiographic sequences. To account for typical heart motions, such as contraction/expansion and shear, we analyze the images locally by using a local-affine model for the velocity in space and a linear model in time. The regional motion parameters are estimated in the least-squares sense inside a sliding spatiotemporal B-spline window. Robustness and spatial adaptability are achieved by estimating the model parameters at multiple scales within a coarse-to-fine multiresolution framework. We use a wavelet-like algorithm for computing B-spline-weighted inner products and moments at dyadic scales to increase computational efficiency. In order to characterize myocardial contractility and to simplify the detection of myocardial dysfunction, the radial component of the velocity with respect to a reference point is color-coded and visualized inside a time-varying region of interest. The algorithm was first validated on synthetic data sets that simulate a beating heart with the speckle-like appearance of echocardiograms. The ability to estimate motion from real ultrasound sequences was demonstrated with a rotating-phantom experiment. The method was also applied to a set of in vivo echocardiograms from an animal study. Motion estimation results were in good agreement with the expert echocardiographic reading.
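A minimal single-scale sketch of the core estimation step follows; the sliding B-spline window and the coarse-to-fine multiresolution machinery of the paper are not reproduced, and the names below are illustrative.

```python
# Local-affine optical flow by weighted least squares, from the brightness-
# constancy equation Ix*u + Iy*v + It = 0 inside one analysis window.
import numpy as np

def local_affine_flow(Ix, Iy, It, x, y, w):
    """Fit u = a0 + a1*x + a2*y and v = a3 + a4*x + a5*y in the weighted
    least-squares sense; all inputs are 1D arrays over the window's pixels."""
    A = np.column_stack([Ix, Ix * x, Ix * y, Iy, Iy * x, Iy * y])
    b = -It
    sw = np.sqrt(w)
    params, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    return params  # (a0..a5); (a0, a3) is the velocity at the window origin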
33.
In this paper, we use polyharmonic B-splines to build multidimensional wavelet bases. These functions are nonseparable, multidimensional basis functions that are localized versions of radial basis functions. We show that Rabut's elementary polyharmonic B-splines do not converge to a Gaussian as the order parameter increases, as opposed to their separable B-spline counterparts. Therefore, we introduce a more isotropic localization operator that guarantees this convergence, resulting in the isotropic polyharmonic B-splines. Next, we focus on the two-dimensional quincunx subsampling scheme. This configuration is of particular interest for image processing because it yields a finer scale progression than the standard dyadic approach. Until now, however, the design of appropriate filters for the quincunx scheme has mainly been done using the McClellan transform. In our approach, we start from the scaling functions, which are the polyharmonic B-splines and, as such, explicitly known, and we derive a family of polyharmonic spline wavelets corresponding to different flavors of the semi-orthogonal wavelet transform, e.g., orthonormal, B-spline, and dual. The filters are automatically specified by the scaling relations satisfied by these functions. We prove that the isotropic polyharmonic B-spline wavelet converges to a combination of four Gabor atoms, which are well separated in the frequency domain. We also show that these wavelets are nearly isotropic and that they behave as an iterated Laplacian operator at low frequencies. We describe an efficient fast-Fourier-transform-based implementation of the discrete wavelet transform based on polyharmonic B-splines.
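For orientation only, a toy sketch of one quincunx analysis step is shown below: frequency-domain filtering followed by retention of the quincunx sub-lattice (i + j even). The filter H is a placeholder; the paper's polyharmonic-spline filters and exact channel layout are not reproduced.

```python
# Toy quincunx analysis step: FFT-domain filtering, then quincunx subsampling.
# H is a placeholder low-pass frequency response of the same shape as the image.
import numpy as np

def quincunx_step(image, H):
    X = np.fft.fft2(image)
    low = np.real(np.fft.ifft2(X * H))           # low-pass channel
    high = np.real(np.fft.ifft2(X * (1.0 - H)))  # complementary channel (illustrative)
    i, j = np.indices(image.shape)
    keep = (i + j) % 2 == 0
    return low[keep], high[~keep]                # each channel keeps half the samples
```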
34.
For Part I, see ibid., vol. 47, no. 10, pp. 2783-2795 (1999). In a previous paper, we proposed a general Fourier method that provides an accurate prediction of the approximation error, irrespective of the scaling properties of the approximating functions. Here, we apply our results when these functions satisfy the usual two-scale relation encountered in dyadic multiresolution analysis. As a consequence of this additional constraint, the quantities introduced in our previous paper can be computed explicitly as a function of the refinement filter. This is, in particular, true for the asymptotic expansion of the approximation error for biorthonormal wavelets as the scale tends to zero. One contribution of this paper is the computation of sharp, asymptotically optimal upper bounds for the least-squares approximation error. Another is the application of these results to B-splines and Daubechies (1988, 1992) scaling functions, which yields explicit asymptotic developments and upper bounds. Thanks to these explicit expressions, we can quantify the improvement obtained by using B-splines instead of Daubechies wavelets; in other words, we can use a coarser spline sampling and still achieve the same reconstruction accuracy as with Daubechies wavelets. Specifically, we show that this sampling gain converges to π as the order tends to infinity.
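As a numerical illustration of the Fourier prediction of the least-squares approximation error, the sketch below evaluates the standard error kernel E(ω) = 1 − |φ̂(ω)|² / Σ_k |φ̂(ω + 2πk)|² for a B-spline of degree n, with φ̂(ω) = (sin(ω/2)/(ω/2))^(n+1); the truncation of the sum over k is an assumption of this sketch.

```python
# Numerical sketch of the least-squares error kernel for B-spline approximation.
import numpy as np

def bspline_ft(w, n):
    w = np.where(w == 0, 1e-12, w)  # avoid division by zero at w = 0
    return (np.sin(w / 2) / (w / 2)) ** (n + 1)

def error_kernel(w, n, K=50):
    num = np.abs(bspline_ft(w, n)) ** 2
    den = sum(np.abs(bspline_ft(w + 2 * np.pi * k, n)) ** 2 for k in range(-K, K + 1))
    return 1.0 - num / den

w = np.linspace(-np.pi, np.pi, 513)
print(error_kernel(w, n=3).max())   # worst-case kernel value for cubic splines
```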
35.
We have addressed the problem of optimizing procedures of multivariate statistical analysis (MSA) for identifying homogeneous sets of electron micrographs of biological macromolecules, with a view to averaging over consistent sets of images. Using pre-aligned images of negatively stained protein molecules - known a priori to fall into two subtly different classes - we compared how the capacity to discriminate between them was affected by the normalization procedure used, and by the choice of factorial representation. Specifically, these images were analyzed both after being scaled according to constant minimum and maximum (CMM) values, and after imposing constant values of image mean and variance (CMV). The factorial representations compared were correspondence analysis (CA) and the principal components (PC) formalism. When used with PC, CMM normalization was found to give rise to spurious inter-image fluctuations that were more pronounced than the genuine difference between the two kinds of images; even with CA, CMV proved to be a more satisfactory method of normalization. When CMV was used with CA or PC, both factorial representations yielded qualitatively similar results, although according to a quantitative measure of inter-set discrimination, the performance of PC was slightly superior. Even in the best case, however, the two classes of images - as mapped in factorial space - were not fully resolved. The implications of this observation are discussed with regard to potential ambiguities of image classification in practice.
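A compact sketch of CMV normalization followed by a principal-components projection is given below, assuming a stack of pre-aligned images stored as a 3D NumPy array; correspondence analysis is not reproduced and the names are illustrative.

```python
# Constant-mean-and-variance (CMV) normalization, then projection of each image
# onto the leading principal axes (the PC factorial representation).
import numpy as np

def cmv_normalize(stack):
    """Force every image (one row per image after flattening) to zero mean and unit variance."""
    flat = stack.reshape(len(stack), -1).astype(float)
    flat -= flat.mean(axis=1, keepdims=True)
    flat /= flat.std(axis=1, keepdims=True)
    return flat

def pc_coordinates(flat, n_components=2):
    """Factorial coordinates of each image along the leading principal components."""
    centered = flat - flat.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:n_components].T
```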
36.
We introduce an extended class of cardinal L*L-splines, where L is a pseudo-differential operator satisfying some admissibility conditions. We show that the L*L-spline signal interpolation problem is well posed and that its solution is the unique minimizer of the spline energy functional ‖Ls‖²_{L2}, subject to the interpolation constraint. Next, we consider the corresponding regularized least-squares estimation problem, which is more appropriate for dealing with noisy data. The criterion to be minimized is the sum of a quadratic data term, which forces the solution to be close to the input samples, and a "smoothness" term that privileges solutions with small spline energies. Here, too, we find that the optimal solution, among all possible functions, is a cardinal L*L-spline. We show that this smoothing spline estimator has a stable representation in a B-spline-like basis and that its coefficients can be computed by digital filtering of the input signal. We describe an efficient recursive filtering algorithm that is applicable whenever the transfer function of L is rational (which corresponds to the case of exponential splines). We justify these algorithms statistically by establishing an equivalence between L*L smoothing splines and the minimum mean-square error (MMSE) estimation of a stationary signal corrupted by white Gaussian noise. In this model-based formulation, the optimum operator L is the whitening filter of the process, and the regularization parameter is proportional to the noise variance. Thus, the proposed formalism yields the optimal discretization of the classical Wiener filter, together with a fast recursive algorithm. It extends the standard Wiener solution by providing the optimal interpolation space. We also present a Bayesian interpretation of the algorithm.
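A minimal discrete analogue of the smoothing-spline estimator is sketched below: quadratic regularized least squares solved by a single linear, shift-invariant filter, with L taken as a first-order finite difference and periodic boundaries assumed; the paper's exact recursive L*L-spline algorithm is not reproduced.

```python
# Regularized least squares min ||y - s||^2 + lam*||D s||^2, solved in the DFT
# domain by the filter 1 / (1 + lam * |D(w)|^2); a discrete stand-in for the
# smoothing-spline / discretized-Wiener filter described in the abstract.
import numpy as np

def smooth(y, lam=5.0):
    n = len(y)
    w = 2 * np.pi * np.fft.fftfreq(n)
    D = 1 - np.exp(-1j * w)                 # frequency response of the difference operator
    H = 1.0 / (1.0 + lam * np.abs(D) ** 2)  # smoothing filter
    return np.real(np.fft.ifft(np.fft.fft(y) * H))
```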
37.
Design of steerable filters for feature detection using Canny-like criteria
We propose a general approach for the design of 2D feature detectors from a class of steerable functions based on the optimization of a Canny-like criterion. In contrast with previous computational designs, our approach is truly 2D and provides filters that have closed-form expressions. It also yields operators that have a better orientation selectivity than the classical gradient or Hessian-based detectors. We illustrate the method with the design of operators for edge and ridge detection. We present some experimental results that demonstrate the performance improvement of these new feature detectors. We propose computationally efficient local optimization algorithms for the estimation of feature orientation. We also introduce the notion of shape-adaptable feature detection and use it for the detection of image corners.
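A first-order illustration of steerability and closed-form orientation estimation follows, assuming Gaussian-derivative basis filters; the paper's higher-order, criterion-optimized detectors are not reproduced.

```python
# Steering a first-order Gaussian-derivative filter and picking, at every pixel,
# the orientation that maximizes the steered response (closed form for order 1).
import numpy as np
from scipy.ndimage import gaussian_filter

def steered_response(image, theta, sigma=2.0):
    """Response of the Gaussian derivative steered to angle theta."""
    gx = gaussian_filter(image, sigma, order=(0, 1))  # derivative along columns
    gy = gaussian_filter(image, sigma, order=(1, 0))  # derivative along rows
    return np.cos(theta) * gx + np.sin(theta) * gy

def best_orientation(image, sigma=2.0):
    """Orientation of maximal first-order response at each pixel."""
    gx = gaussian_filter(image, sigma, order=(0, 1))
    gy = gaussian_filter(image, sigma, order=(1, 0))
    return np.arctan2(gy, gx)
```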
38.
We perform elastic registration using an algorithm based on a pixelwise, regularized optimization criterion. We express the deformation field in terms of B-splines, which allows us to handle a rich variety of deformations. The algorithm can accommodate soft landmark constraints, which is particularly useful when parts of the images contain very little information or when that information is unevenly distributed. We solve the problem by minimizing the distance between the target image and the warped source, and we regularize this minimization with a penalty on the divergence and curl of the deformation field. We apply the proposed algorithm to the registration of confocal scanning microscopy images of Drosophila embryos.
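A minimal sketch of the data term under simplifying assumptions (cubic-B-spline displacement field on a coarse control grid whose size divides the image size, SSD distance); the soft landmark constraints and the divergence/curl regularization are omitted, and the names are illustrative.

```python
# Warp the source image with a B-spline displacement field and evaluate the
# pixelwise sum-of-squared-differences against the target.
import numpy as np
from scipy.ndimage import map_coordinates, zoom

def warp_with_bspline_field(source, coeff_y, coeff_x):
    """Upsample control-point displacements with cubic splines, then warp the source."""
    factors = np.array(source.shape, dtype=float) / np.array(coeff_y.shape)
    dy = zoom(coeff_y, factors, order=3)
    dx = zoom(coeff_x, factors, order=3)
    rows, cols = np.indices(source.shape)
    return map_coordinates(source, [rows + dy, cols + dx], order=3, mode='nearest')

def ssd(target, warped):
    """Pixelwise sum-of-squared-differences data term."""
    return float(np.sum((target - warped) ** 2))
```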
39.
Least-squares image resizing using finite differences
We present an optimal spline-based algorithm for the enlargement or reduction of digital images with arbitrary (noninteger) scaling factors. This projection-based approach is made possible by a new finite-difference method that allows the computation of inner products with analysis functions that are B-splines of any degree n. A noteworthy property of the algorithm is that the computational complexity per pixel does not depend on the scaling factor a. For a given choice of basis functions, the results of our method are consistently better than those of the standard interpolation procedure; the present scheme achieves a reduction of artifacts such as aliasing and blocking and a significant improvement of the signal-to-noise ratio. The method can be generalized to include other classes of piecewise-polynomial functions, expressed as linear combinations of B-splines and their derivatives.
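A hedged sketch for the special case of an integer reduction factor with degree-0 (piecewise-constant) basis functions, where the least-squares projection reduces to block averaging; the paper handles arbitrary noninteger factors and higher spline degrees, which this sketch does not.

```python
# Least-squares (projection-based) reduction in the degree-0 case: averaging
# non-overlapping m-by-m blocks is the orthogonal projection onto the coarse
# piecewise-constant space.
import numpy as np

def lsq_reduce_degree0(image, m):
    """Reduce by an integer factor m (image sides must be multiples of m)."""
    h, w = image.shape
    return image.reshape(h // m, m, w // m, m).mean(axis=(1, 3))
```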
40.
We propose a new framework to extract the activity-related component of the BOLD functional magnetic resonance imaging (fMRI) signal. As opposed to traditional fMRI signal analysis techniques, we do not impose any prior knowledge of the event timing. Instead, our basic assumption is that the activation pattern is a sequence of short and sparsely distributed stimuli, as is the case in slow event-related fMRI. We introduce new wavelet bases, termed “activelets”, which sparsify the activity-related BOLD signal. These wavelets mimic the behavior of the differential operator underlying the hemodynamic system. To recover the sparse representation, we deploy a sparse-solution search algorithm. The feasibility of the method is evaluated using both synthetic and experimental fMRI data. The importance of the activelet basis and of the nonlinear sparse recovery algorithm is demonstrated by comparison against classical B-spline wavelets and linear regularization, respectively.
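A generic sketch of a sparse-solution search by iterative soft-thresholding (ISTA) is shown below; the activelet dictionary itself is not reproduced, so D is a placeholder matrix whose columns stand in for the activelet atoms.

```python
# ISTA for min 0.5*||y - D c||^2 + lam*||c||_1 over the coefficient vector c.
import numpy as np

def ista(y, D, lam=0.1, n_iter=200):
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the data-term gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ c - y)
        z = c - grad / L
        c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return c
```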