Full text (subscription): 115 articles
Free full text: 0 articles
Electrical engineering: 2 articles
Machinery and instruments: 7 articles
Light industry: 1 article
Radio electronics: 88 articles
General industrial technology: 4 articles
Metallurgical industry: 2 articles
Automation technology: 11 articles
2017: 1 article
2012: 6 articles
2011: 4 articles
2010: 2 articles
2009: 1 article
2008: 7 articles
2007: 7 articles
2006: 7 articles
2005: 11 articles
2004: 8 articles
2003: 9 articles
2002: 6 articles
2001: 3 articles
2000: 5 articles
1999: 4 articles
1998: 7 articles
1997: 3 articles
1996: 2 articles
1995: 3 articles
1994: 2 articles
1993: 2 articles
1992: 1 article
1991: 2 articles
1989: 4 articles
1987: 1 article
1986: 2 articles
1985: 1 article
1984: 2 articles
1983: 1 article
1958: 1 article
115 query results in total (search time: 31 ms)
1.
This paper provides an overview of the main aspects of modern fluorescence microscopy. It covers the principles of fluorescence and highlights the key discoveries in the history of fluorescence microscopy. The paper also discusses the optics of fluorescence microscopes and examines the various types of detectors. Finally, it addresses the signal- and image-processing challenges in fluorescence microscopy and highlights some present developments and future trends in the field.
2.
Least-squares approximation problems that are regularized with specified highpass stabilizing kernels are discussed. For each problem, there is a family of discrete regularization filters (R-filters) that allows an efficient determination of the solutions. These operators are stable, symmetric lowpass filters with an adjustable scale factor. Two decomposition theorems for the z-transform of such systems are presented. One facilitates the determination of their impulse response, while the other allows an efficient implementation through successive causal and anticausal recursive filtering. A case of special interest is the design of R-filters for the first- and second-order difference operators. These results are extended to two-dimensional signals and, for illustration purposes, are applied to the problem of edge detection. This leads to a very efficient implementation (8 multiplications + 10 additions per pixel) of the optimal Canny edge detector based on a separable second-order R-filter.
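As a rough illustration of the causal/anticausal recursive strategy described above (not the paper's specific R-filter design), the sketch below smooths a signal with a first-order symmetric recursive filter whose impulse response is proportional to a**|n|; the pole value `a` and the simplistic boundary handling are assumptions made only for demonstration.

```python
import numpy as np

def symmetric_recursive_smooth(x, a=0.5):
    """First-order symmetric recursive smoother (impulse response ~ a**|n|),
    implemented as one causal pass followed by one anticausal pass."""
    x = np.asarray(x, dtype=float)
    y = np.empty_like(x)
    y[0] = x[0]
    for n in range(1, len(x)):           # causal pass: y[n] = x[n] + a*y[n-1]
        y[n] = x[n] + a * y[n - 1]
    z = np.empty_like(y)
    z[-1] = y[-1]
    for n in range(len(x) - 2, -1, -1):  # anticausal pass: z[n] = y[n] + a*z[n+1]
        z[n] = y[n] + a * z[n + 1]
    return (1 - a) ** 2 * z              # normalize to unit DC gain

# Example: smooth a noisy step before applying a difference operator (edge detection)
signal = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * np.random.randn(100)
edges = np.diff(symmetric_recursive_smooth(signal, a=0.7))
```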
3.
Wavelet theory demystified (total citations: 5; self-citations: 0; citations by others: 5)
We revisit wavelet theory starting from the representation of a scaling function as the convolution of a B-spline (the regular part of it) and a distribution (the irregular or residual part). This formulation leads to some new insights on wavelets and makes it possible to rederive the main results of the classical theory - including some new extensions for fractional orders - in a self-contained, accessible fashion. In particular, we prove that the B-spline component is entirely responsible for five key wavelet properties: order of approximation, reproduction of polynomials, vanishing moments, multiscale differentiation property, and smoothness (regularity) of the basis functions. We also investigate the interaction of wavelets with differential operators, giving explicit time-domain formulas for the fractional derivatives of the basis functions. This allows us to specify a corresponding dual wavelet basis and helps us understand why the wavelet transform provides a stable characterization of the derivatives of a signal. Additional results include a new peeling theory of smoothness, leading to the extended notion of wavelet differentiability in the Lp-sense, and a sharper theorem stating that smoothness implies order.
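For orientation, the factorization at the core of this approach can be written in the Fourier domain using one standard convention for the causal fractional B-spline; this is a sketch of that convention (with degree symbol alpha chosen here for illustration), not a restatement of the paper's notation:

```latex
\hat{\varphi}(\omega) \;=\; \hat{\beta}_{+}^{\alpha}(\omega)\,\hat{u}(\omega),
\qquad
\hat{\beta}_{+}^{\alpha}(\omega) \;=\; \left(\frac{1 - e^{-j\omega}}{j\omega}\right)^{\alpha+1}
```

Here u is the distributional (residual) factor; the B-spline factor carries the approximation order alpha + 1, the polynomial reproduction, and the regularity, in line with the claims above.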
4.
Our interest is to characterize the spline-like integer-shift-invariant bases capable of reproducing exponential polynomial curves. We prove that any compact-support function that reproduces a subspace of the exponential polynomials can be expressed as the convolution of an exponential B-spline with a compact-support distribution. As a direct consequence of this factorization theorem, we show that the minimal-support basis functions of that subspace are linear combinations of derivatives of exponential B-splines. These minimal-support basis functions form a natural multiscale hierarchy, which we utilize to design fast multiresolution algorithms and subdivision schemes for the representation of closed geometric curves. This makes them attractive from a computational point of view. Finally, we illustrate our scheme by constructing minimal-support bases that reproduce ellipses and higher-order harmonic curves.
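A minimal numerical sketch of the exponential-reproduction property that underlies such bases, using only the shortest-support (first-order) exponential B-spline; the exponent value 0.3 and the summation range are arbitrary illustrative choices:

```python
import numpy as np

alpha = 0.3  # illustrative exponent; any value works

def beta1(t, a=alpha):
    """First-order (shortest-support) exponential B-spline: exp(a*t) on [0, 1), zero elsewhere."""
    return np.where((t >= 0) & (t < 1), np.exp(a * t), 0.0)

# Integer shifts weighted by exp(alpha*k) reproduce exp(alpha*t) exactly:
t = np.linspace(0.0, 9.99, 1000)
reconstruction = sum(np.exp(alpha * k) * beta1(t - k) for k in range(-1, 12))
assert np.allclose(reconstruction, np.exp(alpha * t))  # exact reproduction
```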
5.
A new resolution criterion based on spectral signal-to-noise ratios (total citations: 6; self-citations: 0; citations by others: 6)
A new criterion for the "useful" resolution of electron micrographs of macromolecular particles is introduced. This criterion is based on estimation of the spatial frequency limit beyond which the spectral signal-to-noise ratio (SSNR) falls below an acceptable baseline. Applicable to both periodic and aperiodic specimens, this approach is particularly apposite for sets of correlation-averaged images. It represents a straightforward and intuitively appealing generalization of the traditional method of estimating the resolution of crystalline specimens from the spectral ranges of periodic reflections in their diffraction patterns. This method allows one to assess how closely the resolution of an averaged image based on N individual images approaches the ultimate resolution obtainable from an indefinitely large number of statistically equivalent images. Interrelationships between the SSNR and two other measures of resolution, the differential phase residual and the Fourier ring correlation coefficient, are discussed, and their properties compared.
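A hedged sketch of the closely related Fourier ring correlation, computed between two independent images of the same specimen; the function name, the one-sample ring width, and the use of NumPy are assumptions for illustration rather than the paper's implementation:

```python
import numpy as np

def fourier_ring_correlation(img1, img2):
    """FRC: normalized cross-correlation of Fourier coefficients within rings
    of increasing spatial frequency. Returns one value per ring."""
    F1 = np.fft.fftshift(np.fft.fft2(img1))
    F2 = np.fft.fftshift(np.fft.fft2(img2))
    ny, nx = img1.shape
    yy, xx = np.indices((ny, nx))
    r = np.hypot(yy - ny // 2, xx - nx // 2).astype(int)  # ring index of each pixel
    n_rings = min(ny, nx) // 2
    frc = np.zeros(n_rings)
    for k in range(n_rings):
        ring = (r == k)
        num = np.abs(np.sum(F1[ring] * np.conj(F2[ring])))
        den = np.sqrt(np.sum(np.abs(F1[ring]) ** 2) * np.sum(np.abs(F2[ring]) ** 2))
        frc[k] = num / den if den > 0 else 0.0
    return frc
```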
6.
Interpolation revisited (total citations: 10; self-citations: 0; citations by others: 10)
Based on the theory of approximation, this paper presents a unified analysis of interpolation and resampling techniques. An important issue is the choice of adequate basis functions. We show that, contrary to common belief, those that perform best are not interpolating. In contrast to traditional interpolation, we call their use generalized interpolation; it involves a prefiltering step when correctly applied. We explain why the approximation order inherent in any basis function is important for limiting interpolation artifacts. The decomposition theorem states that any basis function endowed with approximation order can be expressed as the convolution of a B-spline of the same order with another function that has none. This motivates the use of splines and spline-based functions as a tunable way to keep artifacts in check without any significant cost penalty. We discuss implementation and performance issues, and we provide experimental evidence to support our claims.
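In current tooling, the prefilter-then-evaluate strategy of generalized interpolation is available, for instance, through SciPy's spline routines; the sketch below is illustrative (the random test image and the 10-degree rotation are arbitrary choices):

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(64, 64)  # placeholder image

# Generalized interpolation: prefilter once to obtain B-spline coefficients,
# then evaluate the spline model at the new coordinates (prefilter=False
# prevents the coefficients from being filtered a second time).
coeffs = ndimage.spline_filter(img, order=3)

yy, xx = np.indices(img.shape).astype(float)
theta = np.deg2rad(10.0)
cy, cx = (np.array(img.shape) - 1) / 2
ys = cy + np.cos(theta) * (yy - cy) - np.sin(theta) * (xx - cx)
xs = cx + np.sin(theta) * (yy - cy) + np.cos(theta) * (xx - cx)
rotated = ndimage.map_coordinates(coeffs, [ys, xs], order=3, prefilter=False)
```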
7.
Causal exponentials play a fundamental role in classical system theory. Starting from those elementary building blocks, we propose a complete and self-contained signal processing formulation of exponential splines defined on a uniform grid. We specify the corresponding B-spline basis functions and investigate their reproduction properties (Green function and exponential polynomials); we also characterize their stability (Riesz bounds). We show that the exponential B-spline framework allows an exact implementation of continuous-time signal processing operators, including convolution, differential operators, and modulation, by simple processing in the discrete B-spline domain. We derive efficient filtering algorithms for multiresolution signal extrapolation and approximation, extending earlier results for polynomial splines. Finally, we present a new asymptotic error formula that predicts the magnitude and the Nth-order decay of the L2-approximation error as a function of the knot spacing T.
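For reference, one common frequency-domain definition of an exponential B-spline with (possibly complex) parameters alpha_1, ..., alpha_N on a unit grid is sketched below; this follows a standard convention and is not necessarily the paper's exact normalization:

```latex
\hat{\beta}_{\vec{\alpha}}(\omega)
  \;=\; \prod_{m=1}^{N} \frac{1 - e^{\alpha_m - j\omega}}{j\omega - \alpha_m}
```

Equivalently, this is the convolution of N first-order exponential B-splines, each equal to e^{alpha_m t} on [0, 1) and zero elsewhere; setting every alpha_m = 0 recovers the classical polynomial B-spline.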
8.
Sampling - 50 years after Shannon (total citations: 22; self-citations: 0; citations by others: 22)
This paper presents an account of the current state of sampling, 50 years after Shannon's formulation of the sampling theorem. The emphasis is on regular sampling, where the grid is uniform. This topic has benefited from a strong research revival during the past few years, thanks in part to the mathematical connections that were made with wavelet theory. To introduce the reader to the modern, Hilbert-space formulation, we reinterpret Shannon's sampling procedure as an orthogonal projection onto the subspace of band-limited functions. We then extend the standard sampling paradigm to the representation of functions in the more general class of "shift-invariant" function spaces, including splines and wavelets. Practically, this allows for simpler (and possibly more realistic) interpolation models, which can be used in conjunction with a much wider class of (anti-aliasing) prefilters that are not necessarily ideal lowpass. We summarize and discuss the results available for the determination of the approximation error and of the sampling rate when the input of the system is essentially arbitrary, e.g., non-bandlimited. We also review variations of sampling that can be understood from the same unifying perspective. These include wavelets, multiwavelets, Papoulis generalized sampling, finite elements, and frames. Irregular sampling and radial basis functions are briefly mentioned.
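A minimal sketch of the classical Shannon reconstruction that the paper reinterprets as an orthogonal projection; the bandlimited test signal, the sampling step, and the finite truncation of the sum are illustrative assumptions:

```python
import numpy as np

T = 1.0                                    # sampling step
n = np.arange(-50, 51)                     # sample indices
x_n = np.cos(2 * np.pi * 0.12 * n * T)     # samples of a bandlimited signal (|f| < 1/(2T))

def reconstruct(t, samples=x_n, indices=n, step=T):
    """Shannon interpolation: x(t) = sum_n x[n] * sinc((t - n*T)/T)."""
    return np.array([np.sum(samples * np.sinc((ti - indices * step) / step)) for ti in t])

t_fine = np.linspace(-10, 10, 400)
x_hat = reconstruct(t_fine)
# For a signal bandlimited below 1/(2T), x_hat matches cos(2*pi*0.12*t) up to truncation error.
```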
9.
In many image processing applications, the discrete values of an image can be embedded in a continuous function. This type of representation can be useful for interpolation, geometrical transformations, or the extraction of special features. Given a rectangular M × N discrete image (or sub-image), it is shown how to compute a continuous polynomial function that guarantees an exact fit at the considered pixel locations. The polynomial coefficients can be expressed as a linear one-to-one separable transform of the pixels. The transform matrices can be computed using a fast recursive algorithm which enables efficient inversion of a Vandermonde matrix. It is also shown that the least-squares polynomial approximation with M′ × N′ coefficients, in the separable formulation, involves the inversion of two Hankel matrices of sizes M′ × M′ and N′ × N′.
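The separable exact-fit idea can be sketched with generic Vandermonde solves (the paper's fast recursive inversion is not reproduced here); the 4 × 4 block and the variable names are illustrative:

```python
import numpy as np

M, N = 4, 4
block = np.random.rand(M, N)  # placeholder image block

# Exact polynomial fit p(x, y) = sum_{i,j} C[i, j] * x**i * y**j at the integer pixels:
Vx = np.vander(np.arange(M), M, increasing=True)  # M x M Vandermonde in x
Vy = np.vander(np.arange(N), N, increasing=True)  # N x N Vandermonde in y

# Separable relation: block = Vx @ C @ Vy.T  =>  C = Vx^{-1} @ block @ Vy^{-T}
C = np.linalg.solve(Vx, np.linalg.solve(Vy, block.T).T)

# Verify the exact fit at the pixel locations
assert np.allclose(Vx @ C @ Vy.T, block)
```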
10.
Quantitative structural analysis from electron micrographs of biological macromolecules inevitably requires the synthesis of data from many parts of the same micrograph and, ultimately, from multiple micrographs. Higher resolutions require the inclusion of progressively more data, and for the particles analyzed to be consistent to within ever more stringent limits. Disparities in magnification between micrographs or even within the field of one micrograph, arising from lens hysteresis or distortions, limit the resolution of such analyses. A quantitative assessment of this effect shows that its severity depends on the size of the particle under study: for particles that are 100 nm in diameter, for example, a 2% discrepancy in magnification restricts the resolution to approximately 5 nm. In this study, we derive and describe the properties of a family of algorithms designed for cross-calibrating the magnifications of particles from different micrographs, or from widely differing parts of the same micrograph. This approach is based on the assumption that all of the particles are of identical size: thus, it is applicable primarily to cryo-electron micrographs in which native dimensions are precisely preserved. As applied to icosahedral virus capsids, this procedure is accurate to within 0.1-0.2%, provided that at least five randomly oriented particles are included in the calculation. The algorithm is stable in the presence of noise levels typical of those encountered in practice, and is readily adaptable to non-isometric particles. It may also be used to discriminate subpopulations of subtly different sizes.
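A heavily simplified sketch of the underlying idea: if all particles share one true size, relative magnifications can be estimated from their apparent sizes. The data, the mean-based normalization, and the variable names below are assumptions for illustration and not the paper's algorithm:

```python
import numpy as np

# Apparent particle diameters (in pixels) measured on three micrographs,
# at least five particles each, as suggested in the text.
apparent = {
    "mic_A": np.array([101.2, 100.8, 101.5, 100.9, 101.1]),
    "mic_B": np.array([ 99.1,  98.7,  99.4,  99.0,  98.9]),
    "mic_C": np.array([100.2, 100.5,  99.8, 100.1, 100.3]),
}

# If every particle has the same true diameter, each micrograph's mean apparent
# diameter is proportional to its magnification.  Normalizing by the grand mean
# gives relative scale factors that bring all micrographs to a common scale.
means = {name: d.mean() for name, d in apparent.items()}
grand_mean = np.mean(list(means.values()))
scale_factors = {name: grand_mean / m for name, m in means.items()}  # multiply coordinates by this
print(scale_factors)
```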