115 results found (search time: 15 ms)
11.
Ruttimann U.E., Unser M., Rawlings R.R., Rio D., Ramsey N.F., Mattay V.S., Hommer D.W., Frank J.A., Weinberger D.R. 《IEEE transactions on medical imaging》1998,17(2):142-154
The use of the wavelet transform is explored for the detection of differences between brain functional magnetic resonance images (fMRIs) acquired under two different experimental conditions. The method benefits from the fact that a smooth and spatially localized signal can be represented by a small set of localized wavelet coefficients, while the power of white noise is uniformly spread throughout the wavelet space. Hence, a statistical procedure is developed that uses the imposed decomposition orthogonality to locate wavelet-space partitions with large signal-to-noise ratio (SNR), and subsequently restricts the testing for significant wavelet coefficients to these partitions. This results in a higher SNR and a smaller number of statistical tests, yielding a lower detection threshold compared to spatial-domain testing and, thus, a higher detection sensitivity without increasing type I errors. The multiresolution approach of the wavelet method is particularly suited to applications where the signal bandwidth and/or the characteristics of an imaging modality cannot be well specified. The proposed method was applied to compare two different fMRI acquisition modalities. Differences in the respective useful signal bandwidths could be clearly demonstrated; the estimated signal, due to the smoothness of the wavelet representation, yielded more compact regions of neuroactivity than standard spatial-domain testing.
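The premise of the abstract above, that a smooth localized signal compacts into a few wavelet coefficients while an orthonormal transform merely redistributes (never concentrates) white-noise power, can be illustrated with a minimal one-dimensional Haar decomposition. This is only a sketch of the idea, not the paper's method, which uses a 2-D decomposition and a formal partition-testing procedure; all names below are ours.

```python
import math

def haar_dwt(x):
    """Full orthonormal Haar decomposition of a signal whose length is a
    power of two. Orthonormality preserves energy (Parseval), so noise
    power is spread across coefficients rather than concentrated."""
    out = list(x)
    n = len(out)
    while n > 1:
        averages = [(out[2 * i] + out[2 * i + 1]) / math.sqrt(2.0) for i in range(n // 2)]
        details = [(out[2 * i] - out[2 * i + 1]) / math.sqrt(2.0) for i in range(n // 2)]
        out[:n] = averages + details
        n //= 2
    return out

# A piecewise-constant "activation" pattern: after the transform, all of
# its energy collapses into a handful of coefficients.
signal = [0.0] * 8 + [1.0] * 8
coeffs = haar_dwt(signal)
large = [c for c in coeffs if abs(c) > 1e-12]
```

For this step signal only two of the sixteen coefficients are nonzero, while the total energy is unchanged, which is exactly the asymmetry the detection procedure exploits.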
12.
The purpose of this paper is to derive optimal spline algorithms for the enlargement or reduction of digital images by arbitrary (noninteger) scaling factors. In our formulation, the original and rescaled signals are each represented by an interpolating polynomial spline of degree n with step size one and Delta, respectively. The change of scale is achieved by determining the spline with step size Delta that provides the closest approximation of the original signal in the L(2)-norm. We show that this approximation can be computed in three steps: (i) a digital prefilter that provides the B-spline coefficients of the input signal, (ii) a resampling using an expansion formula with a modified sampling kernel that depends explicitly on Delta, and (iii) a digital postfilter that maps the result back into the signal domain. We provide explicit formulas for n=0, 1, and 3 and propose solutions for the efficient implementation of these algorithms. We consider image processing examples and show that the present method compares favorably with standard interpolation techniques. Finally, we discuss some properties of this approach and its connection with the classical technique of bandlimiting a signal, which provides the asymptotic limit of our algorithm as the order of the spline tends to infinity.
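For the simplest case covered by the paper, degree n = 0, the L2-optimal rescaling has a closed form: projecting a piecewise-constant signal onto piecewise-constant splines of step Delta reduces to averaging the signal over each output interval. The sketch below (function names and the cumulative-integral helper are ours) illustrates only that n = 0 case in 1-D:

```python
def rescale_n0(samples, delta):
    """L2-optimal rescaling for degree-0 splines: each output coefficient
    is the mean of the piecewise-constant input over [k*delta, (k+1)*delta)."""
    n = len(samples)
    # running integral of the piecewise-constant input signal
    cum = [0.0]
    for s in samples:
        cum.append(cum[-1] + s)

    def integral_to(x):
        # integral of the signal over [0, x], x clamped to the domain
        x = min(max(x, 0.0), float(n))
        i = int(x)
        if i == n:
            return cum[-1]
        return cum[i] + (x - i) * samples[i]

    n_out = int(n / delta)
    return [(integral_to((k + 1) * delta) - integral_to(k * delta)) / delta
            for k in range(n_out)]
```

Reduction by a factor of two averages adjacent pairs; enlargement by a noninteger factor samples partial intervals, which is where the Delta-dependent kernel of the general algorithm comes from.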
13.
Wavelet theory demystified (cited 5 times: 0 self-citations, 5 by others)
We revisit wavelet theory starting from the representation of a scaling function as the convolution of a B-spline (the regular part of it) and a distribution (the irregular or residual part). This formulation leads to some new insights on wavelets and makes it possible to rederive the main results of the classical theory, including some new extensions for fractional orders, in a self-contained, accessible fashion. In particular, we prove that the B-spline component is entirely responsible for five key wavelet properties: order of approximation, reproduction of polynomials, vanishing moments, multiscale differentiation property, and smoothness (regularity) of the basis functions. We also investigate the interaction of wavelets with differential operators, giving explicit time-domain formulas for the fractional derivatives of the basis functions. This allows us to specify a corresponding dual wavelet basis and helps us understand why the wavelet transform provides a stable characterization of the derivatives of a signal. Additional results include a new peeling theory of smoothness, leading to the extended notion of wavelet differentiability in the L/sub p/-sense and a sharper theorem stating that smoothness implies order.
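The factorization at the heart of the abstract can be stated compactly in the Fourier domain. The notation below is ours; the transfer function of the causal fractional B-spline follows the standard Unser-Blu convention:

```latex
\hat{\varphi}(\omega) = \hat{\beta}^{\alpha}(\omega)\,\hat{\varphi}_0(\omega),
\qquad
\hat{\beta}^{\alpha}(\omega) = \left( \frac{1 - e^{-j\omega}}{j\omega} \right)^{\alpha + 1},
```

where the B-spline factor β^α of degree α carries the approximation order α + 1, the reproduction of polynomials, the vanishing moments, the multiscale differentiation property, and the smoothness of the basis, while the distributional residue φ₀ contributes none of these.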
14.
B-spline snakes: a flexible tool for parametric contour detection (cited 19 times: 0 self-citations, 19 by others)
We present a novel formulation for B-spline snakes that can be used as a tool for fast and intuitive contour outlining. We start with a theoretical argument in favor of splines in the traditional formulation by showing that the optimal, curvature-constrained snake is a cubic spline, irrespective of the form of the external energy field. Unfortunately, such regularized snakes suffer from slow convergence speed because of a large number of control points, as well as from difficulties in determining the weight factors associated with the internal energies of the curve. We therefore propose an alternative formulation in which the intrinsic scale of the spline model is adjusted a priori; this leads to a reduction of the number of parameters to be optimized and eliminates the need for internal energies (i.e., the regularization term). In other words, we are now controlling the elasticity of the spline implicitly and rather intuitively by varying the spacing between the spline knots. The theory is embedded into a multiresolution formulation demonstrating improved stability in noisy image environments. Validation results are presented, comparing the traditional snake using internal energies and the proposed approach without internal energies, and showing that the latter performs comparably. Several biomedical examples of applications are included to illustrate the versatility of the method.
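The basic object manipulated above, a closed contour represented by a periodic cubic B-spline with few control points, is easy to sketch. The evaluation below (function names are ours, and the optimization loop of the snake is omitted) shows how the curve is a weighted sum of control points; using fewer points, i.e. wider knot spacing, is what makes the curve stiffer without any explicit internal energy:

```python
def beta3(t):
    """Cubic B-spline basis function, supported on (-2, 2)."""
    t = abs(t)
    if t < 1.0:
        return 2.0 / 3.0 - t * t + t ** 3 / 2.0
    if t < 2.0:
        return (2.0 - t) ** 3 / 6.0
    return 0.0

def snake_point(ctrl, t):
    """Point on a closed (periodic) cubic B-spline curve at parameter
    t in [0, len(ctrl)). ctrl is a list of (x, y) control points."""
    n = len(ctrl)
    x = y = 0.0
    for k in range(n):
        # wrap the basis function around the closed contour
        w = sum(beta3(t - k + n * m) for m in (-1, 0, 1))
        x += w * ctrl[k][0]
        y += w * ctrl[k][1]
    return (x, y)
```

Because the periodic basis functions sum to one at every parameter value, a contour whose control points coincide collapses to that single point, a quick sanity check on the weights.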
15.
Sampling: 50 years after Shannon (cited 22 times: 0 self-citations, 22 by others)
Unser M. 《Proceedings of the IEEE. Institute of Electrical and Electronics Engineers》2000,88(4):569-587
This paper presents an account of the current state of sampling, 50 years after Shannon's formulation of the sampling theorem. The emphasis is on regular sampling, where the grid is uniform. This topic has benefited from a strong research revival during the past few years, thanks in part to the mathematical connections that were made with wavelet theory. To introduce the reader to the modern, Hilbert-space formulation, we reinterpret Shannon's sampling procedure as an orthogonal projection onto the subspace of band-limited functions. We then extend the standard sampling paradigm for a presentation of functions in the more general class of "shift-invariant" function spaces, including splines and wavelets. Practically, this allows for simpler (and possibly more realistic) interpolation models, which can be used in conjunction with a much wider class of (anti-aliasing) prefilters that are not necessarily ideal low-pass. We summarize and discuss the results available for the determination of the approximation error and of the sampling rate when the input of the system is essentially arbitrary; e.g., nonbandlimited. We also review variations of sampling that can be understood from the same unifying perspective. These include wavelets, multiwavelets, Papoulis generalized sampling, finite elements, and frames. Irregular sampling and radial basis functions are briefly mentioned.
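Shannon's reconstruction formula, the starting point that the paper reinterprets as an orthogonal projection, can be demonstrated numerically. The sketch below (names are ours) reconstructs a bandlimited sinusoid from its integer samples with a truncated sinc expansion; truncation makes the recovery only approximate, which is one reason the paper's non-ideal, spline-based reconstruction spaces are attractive in practice:

```python
import math

def sinc(u):
    """Normalized sinc, the impulse response of the ideal low-pass filter."""
    return 1.0 if u == 0.0 else math.sin(math.pi * u) / (math.pi * u)

def shannon_reconstruct(sample_fn, t, n_terms=4000):
    """Truncated Shannon reconstruction x(t) = sum_k x(k) sinc(t - k)
    for a signal sampled at the integers (bandlimited below 1/2
    cycle per sample). Truncating the sum bounds the accuracy."""
    return sum(sample_fn(k) * sinc(t - k) for k in range(-n_terms, n_terms + 1))

f0 = 0.15  # cycles per sample, safely below the Nyquist limit of 0.5
x = lambda t: math.sin(2.0 * math.pi * f0 * t)
approx = shannon_reconstruct(x, 0.5)  # value midway between two samples
```

The slow 1/t decay of the sinc is why thousands of terms are needed even for modest accuracy, whereas compactly supported spline kernels need only a handful.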
16.
Interpolation revisited (cited 10 times: 0 self-citations, 10 by others)
Based on the theory of approximation, this paper presents a unified analysis of interpolation and resampling techniques. An important issue is the choice of adequate basis functions. We show that, contrary to the common belief, those that perform best are not interpolating. In contrast to traditional interpolation, we call their use generalized interpolation; they involve a prefiltering step when correctly applied. We explain why the approximation order inherent in any basis function is important to limit interpolation artifacts. The decomposition theorem states that any basis function endowed with approximation order can be expressed as the convolution of a B-spline of the same order with another function that has none. This motivates the use of splines and spline-based functions as a tunable way to keep artifacts in check without any significant cost penalty. We discuss implementation and performance issues, and we provide experimental evidence to support our claims.
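The "prefiltering step" that generalized interpolation requires has a well-known efficient implementation for the cubic B-spline: a causal and an anti-causal one-pole recursion. The sketch below follows that classic recursive scheme (with a truncated-sum causal initialization and a mirror condition at the far end); the function name is ours:

```python
import math

def cubic_bspline_prefilter(s):
    """Turn samples s[k] into cubic B-spline coefficients c[k] so that
    the spline sum_k c[k] beta3(x - k) interpolates the samples.
    Implemented as a causal then anti-causal one-pole recursion."""
    z = math.sqrt(3.0) - 2.0            # pole of the cubic inverse filter
    n = len(s)
    c = [6.0 * v for v in s]            # overall gain (1 - z)(1 - 1/z) = 6
    # causal initialization: truncated geometric sum over the signal
    c0, zk = 0.0, 1.0
    for k in range(n):
        c0 += zk * c[k]
        zk *= z
    c[0] = c0
    for k in range(1, n):               # causal pass
        c[k] += z * c[k - 1]
    # anti-causal initialization (mirror boundary at the far end)
    c[n - 1] = (z / (z * z - 1.0)) * (z * c[n - 2] + c[n - 1])
    for k in range(n - 2, -1, -1):      # anti-causal pass
        c[k] = z * (c[k + 1] - c[k])
    return c
```

Since beta3 takes the values 1/6, 2/3, 1/6 at the integers, the interpolation condition at an interior sample i reads (c[i-1] + 4 c[i] + c[i+1]) / 6 = s[i], which makes the prefilter easy to verify.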
17.
Our interest is to characterize the spline-like integer-shift-invariant bases capable of reproducing exponential polynomial curves. We prove that any compact-support function that reproduces a subspace of the exponential polynomials can be expressed as the convolution of an exponential B-spline with a compact-support distribution. As a direct consequence of this factorization theorem, we show that the minimal-support basis functions of that subspace are linear combinations of derivatives of exponential B-splines. These minimal-support basis functions form a natural multiscale hierarchy, which we utilize to design fast multiresolution algorithms and subdivision schemes for the representation of closed geometric curves. This makes them attractive from a computational point of view. Finally, we illustrate our scheme by constructing minimal-support bases that reproduce ellipses and higher-order harmonic curves.
18.
Quantitative structural analysis from electron micrographs of biological macromolecules inevitably requires the synthesis of data from many parts of the same micrograph and, ultimately, from multiple micrographs. Higher resolutions require the inclusion of progressively more data, and for the particles analyzed to be consistent to within ever more stringent limits. Disparities in magnification between micrographs or even within the field of one micrograph, arising from lens hysteresis or distortions, limit the resolution of such analyses. A quantitative assessment of this effect shows that its severity depends on the size of the particle under study: for particles that are 100 nm in diameter, for example, a 2% discrepancy in magnification restricts the resolution to approximately 5 nm. In this study, we derive and describe the properties of a family of algorithms designed for cross-calibrating the magnifications of particles from different micrographs, or from widely differing parts of the same micrograph. This approach is based on the assumption that all of the particles are of identical size: thus, it is applicable primarily to cryo-electron micrographs in which native dimensions are precisely preserved. As applied to icosahedral virus capsids, this procedure is accurate to within 0.1-0.2%, provided that at least five randomly oriented particles are included in the calculation. The algorithm is stable in the presence of noise levels typical of those encountered in practice, and is readily adaptable to non-isometric particles. It may also be used to discriminate subpopulations of subtly different sizes.
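The normalization idea behind the cross-calibration, if every particle truly has the same size, then each measured size determines a per-particle scale correction up to one global constant, can be conveyed with a scalar toy. This is emphatically not the paper's algorithm (which works on the micrographs themselves); the measurements, names, and the geometric-mean reference below are our illustrative choices:

```python
import math

def cross_calibrate(radii):
    """Toy size-based magnification cross-calibration: assuming a common
    true size, the per-image correction is reference / measured, with
    the geometric mean of the measurements as the arbitrary reference."""
    log_mean = sum(math.log(r) for r in radii) / len(radii)
    reference = math.exp(log_mean)
    return [reference / r for r in radii]

# hypothetical particle radii (same object, slightly different magnifications)
measured = [100.0, 101.5, 98.7, 100.9, 99.4]
scales = cross_calibrate(measured)
corrected = [s * r for s, r in zip(scales, measured)]
```

After correction all particles share one size; only the overall scale remains undetermined, which is exactly the residual ambiguity of any relative calibration.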
19.
Muthuvel Arigovindan, Michael Sühling, Patrick Hunziker, Michael Unser 《IEEE transactions on image processing》2005,14(4):450-460
We propose a novel method for image reconstruction from nonuniform samples with no constraints on their locations. We adopt a variational approach where the reconstruction is formulated as the minimizer of a cost that is a weighted sum of two terms: (1) the sum of squared errors at the specified points and (2) a quadratic functional that penalizes the lack of smoothness. We search for a solution that is a uniform spline and show how it can be determined by solving a large, sparse system of linear equations. We interpret the solution of our approach as an approximation of the analytical solution that involves radial basis functions and demonstrate the computational advantages of our approach. Using the two-scale relation for B-splines, we derive an algebraic relation that links together the linear systems of equations specifying reconstructions at different levels of resolution. We use this relation to develop a fast multigrid algorithm. We demonstrate the effectiveness of our approach on some image reconstruction examples. 
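The structure of the cost, a squared-error data term at arbitrary sample locations plus a quadratic smoothness penalty, minimized over uniform spline coefficients, can be sketched in 1-D. The version below uses linear B-splines, a second-difference penalty, and a dense solve of the normal equations instead of the paper's sparse/multigrid machinery; all names and parameter values are ours:

```python
def reconstruct_spline(xs, ys, n_knots, reg=1e-3):
    """1-D variational reconstruction sketch: find uniform linear-spline
    coefficients c minimizing
        sum_i (spline(x_i) - y_i)^2 + reg * sum_k (c[k-1] - 2 c[k] + c[k+1])^2
    by solving the normal equations with Gaussian elimination."""
    b1 = lambda t: max(0.0, 1.0 - abs(t))          # linear B-spline
    m = len(xs)
    A = [[b1(x - k) for k in range(n_knots)] for x in xs]
    # normal matrix M = A^T A + reg * D^T D and right-hand side b = A^T y
    M = [[sum(A[i][j] * A[i][k] for i in range(m)) for k in range(n_knots)]
         for j in range(n_knots)]
    b = [sum(A[i][j] * ys[i] for i in range(m)) for j in range(n_knots)]
    for r in range(1, n_knots - 1):                # second-difference penalty
        d = [0.0] * n_knots
        d[r - 1], d[r], d[r + 1] = 1.0, -2.0, 1.0
        for j in range(n_knots):
            for k in range(n_knots):
                M[j][k] += reg * d[j] * d[k]
    # Gaussian elimination with partial pivoting
    for col in range(n_knots):
        p = max(range(col, n_knots), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        b[col], b[p] = b[p], b[col]
        for r in range(col + 1, n_knots):
            f = M[r][col] / M[col][col]
            for k in range(col, n_knots):
                M[r][k] -= f * M[col][k]
            b[r] -= f * b[col]
    c = [0.0] * n_knots
    for r in range(n_knots - 1, -1, -1):
        c[r] = (b[r] - sum(M[r][k] * c[k] for k in range(r + 1, n_knots))) / M[r][r]
    return c
```

When the data lie exactly on a line, both terms of the cost can vanish simultaneously, so the minimizer reproduces the line at the knots, a convenient correctness check for the solver.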
20.
Edge-preserving smoothers need not be taxed by a severe computational cost. We present, in this paper, a lean algorithm that is inspired by the bi-exponential filter and preserves its structure: a pair of one-tap recursions. By a careful but simple local adaptation of the filter weights to the data, we are able to design an edge-preserving smoother that has a very low memory and computational footprint while requiring a trivial coding effort. We demonstrate that our filter (a bi-exponential edge-preserving smoother, or BEEPS) has formal links with the traditional bilateral filter. On a practical side, we observe that the BEEPS also produces images that are similar to those that would result from the bilateral filter, but at a much-reduced computational cost. The cost per pixel is constant and depends neither on the data nor on the filter parameters, not even on the degree of smoothing.
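The "pair of one-tap recursions" with locally adapted weights can be sketched in 1-D. The version below is in the spirit of the BEEPS rather than a faithful reproduction of the published filter: a causal and an anti-causal one-pole recursion whose feedback weight is attenuated by a Gaussian range kernel across large intensity jumps, combined so that constant signals pass unchanged. Function names, the kernel, and the combination rule are our illustrative choices:

```python
import math

def beeps_1d(x, lam=0.8, sigma=10.0):
    """1-D sketch of a bi-exponential edge-preserving smoother.
    lam sets the amount of smoothing; sigma sets the intensity scale
    beyond which a jump is treated as an edge and not smoothed over."""
    def rng(a, b):
        # range kernel: ~1 for similar intensities, ~0 across an edge
        return math.exp(-((a - b) ** 2) / (2.0 * sigma ** 2))
    n = len(x)
    phi = [0.0] * n                      # causal (left-to-right) pass
    phi[0] = x[0]
    for k in range(1, n):
        lk = lam * rng(x[k], phi[k - 1])
        phi[k] = (1.0 - lk) * x[k] + lk * phi[k - 1]
    psi = [0.0] * n                      # anti-causal (right-to-left) pass
    psi[-1] = x[-1]
    for k in range(n - 2, -1, -1):
        lk = lam * rng(x[k], psi[k + 1])
        psi[k] = (1.0 - lk) * x[k] + lk * psi[k + 1]
    # combination chosen to be exact on constant signals (unit DC gain)
    return [(phi[k] + psi[k] - (1.0 - lam) * x[k]) / (1.0 + lam)
            for k in range(n)]
```

Each output sample costs a fixed handful of operations regardless of lam or sigma, which mirrors the constant per-pixel cost claimed for the BEEPS; with a small sigma, a sharp step passes through essentially untouched while smooth regions are averaged.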