Full-text access type
Paid full text | 115 papers |
Free | 0 papers |
Subject classification
Electrical engineering | 2 papers |
Machinery and instrumentation | 7 papers |
Light industry | 1 paper |
Radio electronics | 88 papers |
General industrial technology | 4 papers |
Metallurgical industry | 2 papers |
Automation technology | 11 papers |
Publication year
2017 | 1 paper |
2012 | 6 papers |
2011 | 4 papers |
2010 | 2 papers |
2009 | 1 paper |
2008 | 7 papers |
2007 | 7 papers |
2006 | 7 papers |
2005 | 11 papers |
2004 | 8 papers |
2003 | 9 papers |
2002 | 6 papers |
2001 | 3 papers |
2000 | 5 papers |
1999 | 4 papers |
1998 | 7 papers |
1997 | 3 papers |
1996 | 2 papers |
1995 | 3 papers |
1994 | 2 papers |
1993 | 2 papers |
1992 | 1 paper |
1991 | 2 papers |
1989 | 4 papers |
1987 | 1 paper |
1986 | 2 papers |
1985 | 1 paper |
1984 | 2 papers |
1983 | 1 paper |
1958 | 1 paper |
115 search results in total
11.
Causal exponentials play a fundamental role in classical system theory. Starting from those elementary building blocks, we propose a complete and self-contained signal processing formulation of exponential splines defined on a uniform grid. We specify the corresponding B-spline basis functions and investigate their reproduction properties (Green function and exponential polynomials); we also characterize their stability (Riesz bounds). We show that the exponential B-spline framework allows an exact implementation of continuous-time signal processing operators, including convolution, differential operators, and modulation, by simple processing in the discrete B-spline domain. We derive efficient filtering algorithms for multiresolution signal extrapolation and approximation, extending earlier results for polynomial splines. Finally, we present a new asymptotic error formula that predicts the magnitude and the Nth-order decay of the L2-approximation error as a function of the knot spacing T.
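The abstract does not reproduce the formula itself. For orientation, the classical result of this kind for Nth-order spline approximation, which the paper extends to the exponential setting, has the form below; the operator P_T and the constant C_N are our notation.

```latex
\| f - \mathcal{P}_T f \|_{L_2} \;\sim\; C_N \, T^{N} \, \| f^{(N)} \|_{L_2}
\qquad \text{as } T \to 0,
```

where $\mathcal{P}_T$ denotes the spline approximation at knot spacing $T$ and $C_N$ depends only on the basis function.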
12.
Edge-preserving smoothers need not be taxed by a severe computational cost. We present, in this paper, a lean algorithm that is inspired by the bi-exponential filter and preserves its structure: a pair of one-tap recursions. By a careful but simple local adaptation of the filter weights to the data, we are able to design an edge-preserving smoother that has a very low memory and computational footprint while requiring a trivial coding effort. We demonstrate that our filter (a bi-exponential edge-preserving smoother, or BEEPS) has formal links with the traditional bilateral filter. On the practical side, we observe that the BEEPS also produces images that are similar to those that would result from the bilateral filter, but at a much-reduced computational cost. The cost per pixel is constant and depends neither on the data nor on the filter parameters, not even on the degree of smoothing.
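To make the idea of data-adapted one-tap recursions concrete, here is a minimal 1-D sketch (an illustrative approximation, not the published BEEPS recursion; the names beeps_like, lam, and sigma_r are ours):

```python
import numpy as np

def adaptive_pass(x, lam=0.8, sigma_r=0.1, reverse=False):
    """One causal one-tap recursion whose gain shrinks across edges.

    Illustrative sketch only -- not the published BEEPS algorithm."""
    y = x.astype(float).copy()
    idx = range(len(x) - 2, -1, -1) if reverse else range(1, len(x))
    for k in idx:
        prev = k + 1 if reverse else k - 1
        # Range weight: near 1 in flat regions, near 0 across an edge.
        r = np.exp(-0.5 * ((x[k] - y[prev]) / sigma_r) ** 2)
        a = lam * r
        y[k] = (1.0 - a) * x[k] + a * y[prev]
    return y

def beeps_like(x, lam=0.8, sigma_r=0.1):
    """Average a forward and a backward adaptive pass."""
    forward = adaptive_pass(x, lam, sigma_r, reverse=False)
    backward = adaptive_pass(x, lam, sigma_r, reverse=True)
    return 0.5 * (forward + backward)

# Example: smooth a noisy step while keeping its edge.
signal = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * np.random.randn(100)
smoothed = beeps_like(signal, lam=0.9, sigma_r=0.2)
```

Note that each pass touches every sample once with a fixed amount of work, which is what gives the constant per-pixel cost claimed in the abstract.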
13.
B-spline snakes: a flexible tool for parametric contour detection (cited 19 times: 0 self-citations, 19 by others)
We present a novel formulation for B-spline snakes that can be used as a tool for fast and intuitive contour outlining. We start with a theoretical argument in favor of splines in the traditional formulation by showing that the optimal, curvature-constrained snake is a cubic spline, irrespective of the form of the external energy field. Unfortunately, such regularized snakes suffer from slow convergence because of the large number of control points, as well as from difficulties in determining the weight factors associated with the internal energies of the curve. We therefore propose an alternative formulation in which the intrinsic scale of the spline model is adjusted a priori; this leads to a reduction of the number of parameters to be optimized and eliminates the need for internal energies (i.e., the regularization term). In other words, we now control the elasticity of the spline implicitly and rather intuitively by varying the spacing between the spline knots. The theory is embedded into a multiresolution formulation demonstrating improved stability in noisy image environments. Validation results are presented, comparing the traditional snake using internal energies with the proposed approach without internal energies, and showing that the latter performs similarly. Several biomedical examples are included to illustrate the versatility of the method.
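As an illustration of the knot-spacing idea, the sketch below evaluates a closed cubic B-spline curve from a small set of control points; using fewer control points for the same contour (i.e., wider knot spacing) stiffens the curve. The function names are ours:

```python
import numpy as np

def beta3(t):
    """Cubic B-spline kernel, support [-2, 2]."""
    t = np.abs(t)
    out = np.zeros_like(t)
    m1 = t < 1
    m2 = (t >= 1) & (t < 2)
    out[m1] = 2.0 / 3.0 - t[m1] ** 2 + 0.5 * t[m1] ** 3
    out[m2] = (2.0 - t[m2]) ** 3 / 6.0
    return out

def closed_spline_curve(ctrl, samples_per_span=20):
    """Evaluate the closed curve c(t) = sum_k beta3(t - k) * ctrl[k mod K]."""
    K = len(ctrl)
    t = np.linspace(0.0, K, K * samples_per_span, endpoint=False)
    curve = np.zeros((len(t), 2))
    for k in range(K):
        # Periodized basis: wrap the distance to the nearest period.
        d = (t - k + K / 2.0) % K - K / 2.0
        curve += np.outer(beta3(d), ctrl[k])
    return curve

# Example: four control points already trace a smooth, nearly circular contour.
pts = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
contour = closed_spline_curve(pts)
```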
14.
Baritaux JC, Hassler K, Bucher M, Sanyal S, Unser M. IEEE Transactions on Medical Imaging, 2011, 30(5): 1143-1153
In this paper, we propose a method based on (2, 1)-mixed-norm penalization for incorporating a structural prior into FDOT image reconstruction. The effect of the (2, 1)-mixed-norm penalty is twofold: first, a sparsifying effect that isolates the few anatomical regions where the fluorescent probe has accumulated and, second, a regularization effect inside the selected anatomical regions. After formulating the reconstruction in a variational framework, we analyze the resulting optimization problem and derive a practical numerical method tailored to (2, 1)-mixed-norm regularization. The proposed method includes, as particular cases, other sparsity-promoting regularization methods such as l1-norm penalization and total-variation penalization. Results on synthetic and experimental data are presented.
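For reference, the (2, 1)-mixed norm over a partition of the unknowns x into groups x_g (here, anatomical regions), and the resulting penalized reconstruction, take the standard group-sparsity form (our notation; the paper's exact weighting may differ):

```latex
\|x\|_{2,1} = \sum_{g} \|x_g\|_2, \qquad
\hat{x} = \arg\min_{x} \; \|y - A x\|_2^2 + \lambda \sum_{g} \|x_g\|_2 .
```

With singleton groups, the penalty reduces to the ℓ1 norm, which is how ℓ1-norm penalization arises as a particular case.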
15.
Murray Eden, Michael Unser, Riccardo Leonardi. Signal Processing, 1986, 10(4)
In many image processing applications, the discrete values of an image can be embedded in a continuous function. This type of representation can be useful for interpolation, geometrical transformations, or the extraction of special features. Given a rectangular M × N discrete image (or sub-image), it is shown how to compute a continuous polynomial function that guarantees an exact fit at the considered pixel locations. The polynomial coefficients can be expressed as a linear one-to-one separable transform of the pixels. The transform matrices can be computed using a fast recursive algorithm which enables efficient inversion of a Vandermonde matrix. It is also shown that the least-squares polynomial approximation with M′ × N′ coefficients, in the separable formulation, involves the inversion of two Hankel matrices of sizes M′ × M′ and N′ × N′.
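A direct way to state the exact-fit computation: since F = Vx · C · Vyᵀ for Vandermonde matrices built on the pixel coordinates, the coefficients follow from two linear solves. A minimal sketch under that standard separable formulation (numerically naive; the paper's recursive algorithm is the efficient route, and the function name is ours):

```python
import numpy as np

def poly_coeffs_exact(F):
    """Coefficients C such that F[i, j] = sum_{m,n} C[m, n] * i**m * j**n."""
    M, N = F.shape
    Vx = np.vander(np.arange(M, dtype=float), M, increasing=True)  # Vx[i, m] = i**m
    Vy = np.vander(np.arange(N, dtype=float), N, increasing=True)  # Vy[j, n] = j**n
    # F = Vx @ C @ Vy.T  =>  C = Vx^{-1} @ F @ Vy^{-T}
    return np.linalg.solve(Vx, np.linalg.solve(Vy, F.T).T)
```

Direct Vandermonde solves become ill-conditioned as M and N grow, which is one motivation for the fast recursive inversion described in the paper.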
16.
Interpolation revisited (cited 10 times: 0 self-citations, 10 by others)
Based on the theory of approximation, this paper presents a unified analysis of interpolation and resampling techniques. An important issue is the choice of adequate basis functions. We show that, contrary to common belief, those that perform best are not interpolating. In contrast to traditional interpolation, we call their use generalized interpolation; they involve a prefiltering step when correctly applied. We explain why the approximation order inherent in any basis function is important to limit interpolation artifacts. The decomposition theorem states that any basis function endowed with an approximation order can be expressed as the convolution of a B-spline of the same order with another function that has none. This motivates the use of splines and spline-based functions as a tunable way to keep artifacts in check without any significant cost penalty. We discuss implementation and performance issues, and we provide experimental evidence to support our claims.
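In practice, this prefilter-then-mix structure is what cubic-spline resampling routines implement; for instance, with SciPy (a usage illustration, not the paper's own code):

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(64, 64)

# Prefiltering step: convert the samples into cubic B-spline coefficients.
coeffs = ndimage.spline_filter(img, order=3)

# Resampling step: mix the coefficients with the (non-interpolating) cubic B-spline.
coords = np.meshgrid(np.linspace(0, 63, 128), np.linspace(0, 63, 128), indexing="ij")
resampled = ndimage.map_coordinates(coeffs, coords, order=3, prefilter=False)
```

Skipping the prefilter and mixing the raw samples with the B-spline would blur the image, which is exactly the distinction between naive and generalized interpolation drawn in the abstract.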
17.
Our interest is to characterize the spline-like integer-shift-invariant bases capable of reproducing exponential polynomial curves. We prove that any compact-support function that reproduces a subspace of the exponential polynomials can be expressed as the convolution of an exponential B-spline with a compact-support distribution. As a direct consequence of this factorization theorem, we show that the minimal-support basis functions of that subspace are linear combinations of derivatives of exponential B-splines. These minimal-support basis functions form a natural multiscale hierarchy, which we utilize to design fast multiresolution algorithms and subdivision schemes for the representation of closed geometric curves. This makes them attractive from a computational point of view. Finally, we illustrate our scheme by constructing minimal-support bases that reproduce ellipses and higher-order harmonic curves.
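To spell out the ellipse case: a closed curve with K control points reproduces ellipses exactly when the basis φ reproduces the three exponentials spanning constants and one harmonic. In standard notation for this setting (ours, not the paper's):

```latex
\mathbf{r}(t) = \sum_{k=0}^{K-1} \mathbf{c}_k \, \varphi(t - k), \qquad
\varphi \ \text{reproducing} \ \operatorname{span}\{1,\; e^{j\omega_0 t},\; e^{-j\omega_0 t}\},
\quad \omega_0 = \tfrac{2\pi}{K}.
```

Since every ellipse is an affine image of the circle parameterized by cos(ω₀t) and sin(ω₀t), reproducing these exponentials suffices.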
18.
Sampling - 50 years after Shannon (cited 22 times: 0 self-citations, 22 by others)
Unser M. Proceedings of the IEEE, 2000, 88(4): 569-587
This paper presents an account of the current state of sampling, 50 years after Shannon's formulation of the sampling theorem. The emphasis is on regular sampling, where the grid is uniform. This topic has benefited from a strong research revival during the past few years, thanks in part to the mathematical connections that were made with wavelet theory. To introduce the reader to the modern, Hilbert-space formulation, we reinterpret Shannon's sampling procedure as an orthogonal projection onto the subspace of band-limited functions. We then extend the standard sampling paradigm for a representation of functions in the more general class of "shift-invariant" function spaces, including splines and wavelets. Practically, this allows for simpler, and possibly more realistic, interpolation models, which can be used in conjunction with a much wider class of (anti-aliasing) prefilters that are not necessarily ideal low-pass. We summarize and discuss the results available for the determination of the approximation error and of the sampling rate when the input of the system is essentially arbitrary, e.g., non-bandlimited. We also review variations of sampling that can be understood from the same unifying perspective. These include wavelets, multiwavelets, Papoulis generalized sampling, finite elements, and frames. Irregular sampling and radial basis functions are briefly mentioned.
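The projection interpretation can be stated compactly (a standard formulation, unit sampling step, our notation). Shannon reconstruction computes

```latex
\tilde{f}(x) = \sum_{k \in \mathbb{Z}}
\big\langle f, \operatorname{sinc}(\cdot - k) \big\rangle \,
\operatorname{sinc}(x - k),
```

which is the orthogonal projection of f ∈ L₂ onto the space of band-limited functions. The shift-invariant extension replaces sinc by a generator φ (e.g., a B-spline) and the inner product by a suitable analysis prefilter, yielding the wider class of non-ideal prefilters mentioned above.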
19.
Quantitative structural analysis from electron micrographs of biological macromolecules inevitably requires the synthesis of data from many parts of the same micrograph and, ultimately, from multiple micrographs. Higher resolutions require the inclusion of progressively more data and demand that the particles analyzed be consistent to within ever more stringent limits. Disparities in magnification between micrographs, or even within the field of one micrograph, arising from lens hysteresis or distortions, limit the resolution of such analyses. A quantitative assessment of this effect shows that its severity depends on the size of the particle under study: for particles that are 100 nm in diameter, for example, a 2% discrepancy in magnification restricts the resolution to approximately 5 nm. In this study, we derive and describe the properties of a family of algorithms designed for cross-calibrating the magnifications of particles from different micrographs, or from widely differing parts of the same micrograph. This approach is based on the assumption that all of the particles are of identical size; thus, it is applicable primarily to cryo-electron micrographs, in which native dimensions are precisely preserved. As applied to icosahedral virus capsids, this procedure is accurate to within 0.1-0.2%, provided that at least five randomly oriented particles are included in the calculation. The algorithm is stable in the presence of noise levels typical of those encountered in practice and is readily adaptable to non-isometric particles. It may also be used to discriminate subpopulations of subtly different sizes.
20.
Muthuvel Arigovindan, Michael Sühling, Patrick Hunziker, Michael Unser. IEEE Transactions on Image Processing, 2005, 14(4): 450-460
We propose a novel method for image reconstruction from nonuniform samples with no constraints on their locations. We adopt a variational approach where the reconstruction is formulated as the minimizer of a cost that is a weighted sum of two terms: (1) the sum of squared errors at the specified points and (2) a quadratic functional that penalizes the lack of smoothness. We search for a solution that is a uniform spline and show how it can be determined by solving a large, sparse system of linear equations. We interpret the solution of our approach as an approximation of the analytical solution that involves radial basis functions and demonstrate the computational advantages of our approach. Using the two-scale relation for B-splines, we derive an algebraic relation that links together the linear systems of equations specifying reconstructions at different levels of resolution. We use this relation to develop a fast multigrid algorithm. We demonstrate the effectiveness of our approach on some image reconstruction examples.
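A 1-D sketch of such a variational fit, under our own simplifications (a discrete second-difference penalty on the B-spline coefficients stands in for the smoothness functional, boundary handling is omitted, and the names fit_spline, lam, and n_knots are ours):

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def beta3(t):
    """Cubic B-spline kernel, support [-2, 2]."""
    t = np.abs(t)
    return np.where(t < 1, 2/3 - t**2 + t**3/2,
                    np.where(t < 2, (2 - t)**3 / 6, 0.0))

def fit_spline(x, y, n_knots, lam=1e-2):
    """Least-squares uniform-spline fit to scattered samples (x, y)."""
    # Design matrix: A[i, k] = beta3(x_i - k); sparse thanks to compact support.
    A = sparse.csr_matrix(beta3(x[:, None] - np.arange(n_knots)[None, :]))
    # Second-difference regularizer penalizing the lack of smoothness.
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n_knots - 2, n_knots))
    # Normal equations: (A^T A + lam * D^T D) c = A^T y -- large, sparse, banded.
    return spsolve((A.T @ A + lam * D.T @ D).tocsc(), A.T @ y)

# Example: nonuniformly spaced samples of a smooth function on [0, 20].
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 20, 60))
y = np.sin(x / 3.0) + 0.1 * rng.standard_normal(60)
coeffs = fit_spline(x, y, n_knots=21)
```

The banded structure of the system is what makes the approach fast, and the paper's two-scale relation links the systems obtained at successive knot spacings, enabling the multigrid solver.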