Found 100 results (search time: 31 ms)
91.
This article develops an iterative spatially adaptive regularized image restoration algorithm. The proposed algorithm is based on the minimization of a weighted smoothing functional. The weighting matrices are defined as functions of the partially restored image at each iteration step. As a result, no prior knowledge about the image and the noise is required; instead, the weighting matrices as well as the regularization parameter are updated based on the restored image at every step. Conditions for the convexity of the weighted smoothing functional and for the convergence of the iterative algorithm are established for a unique global solution which does not depend on the initial conditions. Experimental results with astronomical images demonstrate the effectiveness of the proposed algorithm.
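A minimal 1-D sketch of the adaptive-weighting idea in this abstract. The functional and the particular weight choice below are illustrative assumptions, not the paper's exact definitions: the key point is only that the smoothing weights are recomputed from the partially restored signal at every iteration.

```python
import numpy as np

def adaptive_restore(y, lam=0.1, step=0.2, n_iter=100):
    """Iterative restoration with spatially adaptive smoothing weights.

    Gradient descent on ||x - y||^2 + lam * sum_i w_i * (x[i+1] - x[i])^2,
    recomputing the weights w_i at every iteration from the partially
    restored signal: strong local gradients (edges) get small weights,
    so edges are smoothed less than flat regions.
    """
    x = y.astype(float).copy()
    for _ in range(n_iter):
        d = np.diff(x)
        w = 1.0 / (1.0 + d ** 2)        # adaptive weights from current x
        smooth = np.zeros_like(x)
        smooth[:-1] -= 2.0 * w * d      # d/dx_i of w_i (x_{i+1} - x_i)^2
        smooth[1:] += 2.0 * w * d       # d/dx_{i+1} of the same term
        x -= step * ((x - y) + lam * smooth)
    return x
```

Because the weights shrink where the current estimate has large gradients, a noisy step edge is denoised without being blurred away, which is the qualitative behavior the abstract claims.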
92.
There are a large number of applications requiring the compression of video at Very Low Bit Rates (VLBR). Such applications include wireless video conferencing, video over the Internet, multimedia database retrieval, and remote sensing and monitoring. Recently, the MPEG-4 standardization effort has been a motivating factor in finding a solution to this challenging problem. The existing approaches to this problem can generally be grouped into block-based, model-based, and object-oriented. Block-based approaches follow the traditional strategy of decomposing the image sequence into blocks, model-based approaches rely on complex 3-D models for specific objects that are encoded, and object-oriented approaches rely on analyzing the scene into differently moving objects. All three approaches exhibit potential problems. Block-based approaches tend to generate artifacts at the boundaries of the blocks, as well as to limit the minimum achievable bit-rate due to the fixed analysis structure of the scene. Model-based codecs are limited by the complex 3-D models of the objects to be encoded. On the other hand, object-oriented codecs can generate a significant overhead due to the analysis of the scene which needs to be transmitted, which in turn can be the limiting factor in achieving the target bit-rates. In this paper, we propose a hybrid object-oriented codec in which the correlations among the three information fields, i.e., the motion, segmentation, and intensity fields, are exploited both spatially and temporally. In the proposed method, additional intelligence is given to the decoder, resulting in a reduction of the required bandwidth. The residual information is analyzed into three different categories, i.e., occlusion, model failures, and global refinement. The residual information is encoded and transmitted across the channel with other side information. Experimental results are presented which demonstrate the effectiveness of the proposed approach.
93.
The application of regularization to ill-conditioned problems necessitates the choice of a regularization parameter which trades off fidelity to the data against smoothness of the solution. The value of the regularization parameter depends on the variance of the noise in the data. The problem of choosing the regularization parameter and estimating the noise variance in image restoration is examined. An error analysis based on an objective mean-square-error (MSE) criterion is used to motivate regularization. Two approaches for choosing the regularization parameter and estimating the noise variance are proposed. The proposed and existing methods are compared and their relationship to linear minimum-mean-square-error filtering is examined. Experiments are presented that verify the theoretical results.
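One standard way to tie the regularization parameter to the noise variance, in the spirit of this abstract, is Morozov's discrepancy principle. The sketch below is an assumption-laden 1-D illustration (circular Tikhonov denoising solved in the DFT domain), not the paper's specific estimators:

```python
import numpy as np

def tikhonov_denoise(y, lam):
    """Closed-form 1-D Tikhonov solution in the DFT domain:
    x = argmin ||y - x||^2 + lam * ||D x||^2, with D the circular
    first-difference operator (eigenvalues 2 - 2 cos(w_k))."""
    n = len(y)
    d = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n) / n)
    return np.real(np.fft.ifft(np.fft.fft(y) / (1.0 + lam * d)))

def discrepancy_lambda(y, sigma2, lo=1e-4, hi=1e4, iters=60):
    """Discrepancy principle: pick lam so the residual power
    ||y - x_lam||^2 matches the expected noise power n * sigma2.
    The residual grows monotonically with lam, so log-scale bisection works."""
    n = len(y)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        r = np.sum((y - tikhonov_denoise(y, mid)) ** 2)
        if r > n * sigma2:
            hi = mid        # oversmoothing: decrease lam
        else:
            lo = mid        # undersmoothing: increase lam
    return np.sqrt(lo * hi)
```

The selected parameter makes the residual statistically consistent with the assumed noise level, which is exactly the dependence on the noise variance that the abstract highlights.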
94.
This paper considers the concept of robust estimation in regularized image restoration. Robust functionals are employed for the representation of both the noise and the signal statistics. Such functionals allow the efficient suppression of a wide variety of noise processes and permit the reconstruction of sharper edges than their quadratic counterparts. A new class of robust entropic functionals is introduced, which operates only on the high-frequency content of the signal and reflects sharp deviations in the signal distribution. This class of functionals can also incorporate prior structural information regarding the original image, in a way similar to the maximum information principle. The convergence properties of robust iterative algorithms are studied for continuously and noncontinuously differentiable functionals. The definition of the robust approach is completed by introducing a method for the optimal selection of the regularization parameter. This method utilizes the structure of robust estimators that lack analytic specification. The properties of robust algorithms are demonstrated through restoration examples in different noise environments.
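The abstract's contrast between robust and quadratic functionals can be illustrated with the Huber penalty, a classic robust functional (chosen here as a stand-in; the paper's entropic functionals are different). Its bounded influence function is what suppresses impulsive noise while preserving edges:

```python
import numpy as np

def huber_grad(r, delta):
    """Influence function of the Huber penalty: linear near zero,
    saturated at +/- delta, so large outliers exert bounded pull."""
    return np.clip(r, -delta, delta)

def robust_restore(y, lam=1.0, delta=0.05, step=0.2, n_iter=400):
    """Gradient descent with Huber penalties on both the data residual
    (robust noise model) and the first differences (robust signal model).
    Bounded influence suppresses impulsive noise yet lets sharp edges
    survive, unlike purely quadratic (least-squares) penalties."""
    x = y.astype(float).copy()
    for _ in range(n_iter):
        g = huber_grad(x - y, delta)         # robust data-fidelity pull
        hd = huber_grad(np.diff(x), delta)   # robust smoothness pull
        smooth = np.zeros_like(x)
        smooth[:-1] -= hd
        smooth[1:] += hd
        x -= step * (g + lam * smooth)
    return x
```

With a quadratic data term, a large outlier would pull the estimate proportionally to its magnitude; the clipped gradient caps that pull, so isolated impulses are flattened toward their neighbors while a genuine step edge is retained.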
95.
Following the hierarchical Bayesian framework for blind deconvolution problems, in this paper, we propose the use of simultaneous autoregressions as prior distributions for both the image and blur, and gamma distributions for the unknown parameters (hyperparameters) of the priors and the image formation noise. We show how the gamma distributions on the unknown hyperparameters can be used to prevent the proposed blind deconvolution method from converging to undesirable image and blur estimates and also how these distributions can be inferred in realistic situations. We apply variational methods to approximate the posterior probability of the unknown image, blur, and hyperparameters and propose two different approximations of the posterior distribution. One of these approximations coincides with a classical blind deconvolution method. The proposed algorithms are tested experimentally and compared with existing blind deconvolution methods.
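A point-estimate caricature of the alternating structure behind such schemes, under strong simplifying assumptions: 1-D circular convolution, simultaneous-autoregression (Laplacian) priors on both image and blur, and hyperparameters held fixed. This is not the paper's variational algorithm; in particular, the paper infers the hyperparameters (with gamma hyperpriors) precisely to avoid the degenerate estimates a fixed-parameter scheme like this can produce.

```python
import numpy as np

def sar_eigs(n):
    """DFT eigenvalues of C^T C for a circular 1-D Laplacian-type C,
    i.e. the quadratic form of a simultaneous autoregression (SAR) prior."""
    return (2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n) / n)) ** 2

def alternating_blind_deconv(y, alpha_im=1e-2, alpha_bl=1e-1, n_iter=30):
    """Alternate two circular Tikhonov solves in the DFT domain:
    update the image given the blur, then the blur given the image."""
    n = len(y)
    L = sar_eigs(n)
    Y = np.fft.fft(y)
    H = np.ones(n, dtype=complex)        # start from an identity blur
    for _ in range(n_iter):
        X = np.conj(H) * Y / (np.abs(H) ** 2 + alpha_im * L + 1e-12)
        H = np.conj(X) * Y / (np.abs(X) ** 2 + alpha_bl * L + 1e-12)
    return np.real(np.fft.ifft(X)), np.real(np.fft.ifft(H))
```

Each half-step is a closed-form regularized least-squares solve, so the pair of estimates quickly reaches a point where their convolution re-fits the observation; the hard part the paper addresses is steering this coupled problem away from trivial image/blur factorizations.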
96.
Demand for multimedia services, such as video streaming over wireless networks, has grown dramatically in recent years. The downlink transmission of multiple video sequences to multiple users over a shared resource-limited wireless channel, however, is a daunting task. Among the many challenges in this area are the time-varying channel conditions, limited available resources, such as bandwidth and power, and the different transmission requirements of different video content. This work takes into account the time-varying nature of the wireless channels, as well as the importance of individual video packets, to develop a cross-layer resource allocation and packet scheduling scheme for multiuser video streaming over lossy wireless packet access networks. Assuming that accurate channel feedback is not available at the scheduler, random channel losses combined with complex error concealment at the receiver make it impossible for the scheduler to determine the actual distortion of the sequence at the receiver. Therefore, the objective of the optimization is to minimize the expected distortion of the received sequence, where the expectation is calculated at the scheduler with respect to the packet loss probability in the channel. The expected distortion is used to order the packets in the transmission queue of each user, and then gradients of the expected distortion are used to efficiently allocate resources across users. Simulations show that the proposed scheme performs significantly better than a conventional content-independent scheme for video transmission.
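The gradient-based allocation across users described above can be sketched as a greedy marginal-gain loop. The distortion model below (each user's expected distortion halves per extra rate unit) is a made-up stand-in for the scheduler's expected-distortion estimates, used only to show the mechanism:

```python
import numpy as np

def greedy_allocate(d0, total_units):
    """Greedy gradient-style scheduler sketch: hand out one rate unit
    at a time to the user whose expected distortion decreases the most.
    d0[u] is user u's distortion at zero rate; the illustrative model
    is D_u(r) = d0[u] * 2**(-r)."""
    rates = np.zeros(len(d0))
    for _ in range(total_units):
        current = d0 * 2.0 ** (-rates)
        gains = current - d0 * 2.0 ** (-(rates + 1))   # drop from one more unit
        u = int(np.argmax(gains))                      # steepest-descent user
        rates[u] += 1
    return rates
```

For convex, decreasing distortion-rate curves this greedy rule equalizes marginal gains across users, so resources flow to the user (and hence the video content) where they reduce expected distortion the most.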
97.
We consider the transmission of a Gaussian source through a block fading channel. Assuming each block is decoded independently, the received distortion depends on the tradeoff between quantization accuracy and probability of outage. Namely, higher quantization accuracy requires a higher channel code rate, which increases the probability of outage. We first treat an outage as an erasure, and evaluate the received mean distortion with erasure coding across blocks as a function of the code length. We then evaluate the performance of scalable, or multi-resolution coding in which coded layers are superimposed within a coherence block, and the layers are sequentially decoded. Both the rate and power allocated to each layer are optimized. In addition to analyzing the performance with a finite number of layers, we evaluate the mean distortion at high signal-to-noise ratios as the number of layers becomes infinite. As the block length of the erasure code increases to infinity, the received distortion converges to a deterministic limit, which is less than the mean distortion with an infinite-layer scalable coding scheme. However, for the same standard deviation in received distortion, infinite layer scalable coding performs slightly better than erasure coding, and with much less decoding delay.
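The accuracy-versus-outage tradeoff in the first sentences can be made concrete with a single-layer numerical sketch. The channel and distortion models here are textbook assumptions (unit-variance Gaussian source, Rayleigh fading with exponential power gain), not the paper's full multi-layer analysis:

```python
import numpy as np

def expected_distortion(R, snr):
    """Single-layer sketch: a unit-variance Gaussian source over a
    Rayleigh block-fading channel (gain g ~ Exp(1)).  Source rate R
    needs channel rate R, so the outage probability is
    P(log2(1 + g*snr) < R); distortion is 2**(-2R) when the block
    decodes and 1 (the source variance) on outage."""
    p_out = 1.0 - np.exp(-(2.0 ** R - 1.0) / snr)
    return (1.0 - p_out) * 2.0 ** (-2.0 * R) + p_out

R = np.linspace(0.01, 8.0, 400)
D = expected_distortion(R, snr=100.0)
R_best = R[np.argmin(D)]   # interior optimum of the accuracy/outage tradeoff
```

Sweeping R shows exactly the tradeoff the abstract states: small R wastes the channel on coarse quantization, large R is dominated by outages, and the minimum expected distortion sits strictly in between.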
98.
A nonlinear regularized iterative image restoration algorithm is proposed, according to which only the noise variance is assumed to be known in advance. The algorithm results from a set theoretic regularization approach, where a bound of the stabilizing functional, and therefore the regularization parameter, are updated at each iteration step. Sufficient conditions for the convergence of the algorithm are derived, and experimental results are shown.
99.
Digital image restoration   (cited by: 8; self-citations: 0; citations by others: 8)
The article introduces digital image restoration to the reader who is just beginning in this field, and provides a review and analysis for the reader who may already be well-versed in image restoration. The perspective on the topic is one that comes primarily from work done in the field of signal processing. Thus, many of the techniques and works cited relate to classical signal processing approaches to estimation theory, filtering, and numerical analysis. In particular, the emphasis is placed primarily on digital image restoration algorithms that grow out of an area known as “regularized least squares” methods. It should be noted, however, that digital image restoration is a very broad field, as we discuss, and thus contains many other successful approaches that have been developed from different perspectives, such as optics, astronomy, and medical imaging, just to name a few. In the process of reviewing this topic, we address a number of very important issues in this field that are not typically discussed in the technical literature.
100.
A recursive model-based algorithm for obtaining the maximum a posteriori (MAP) estimate of the displacement vector field (DVF) from successive image frames of an image sequence is presented. To model the DVF, we develop a nonstationary vector field model called the vector coupled Gauss-Markov (VCGM) model. The VCGM model consists of two levels: an upper level, which is made up of several submodels with various characteristics, and a lower level or line process, which governs the transitions between the submodels. A detailed line process is proposed. The VCGM model is well suited for estimating the DVF since the resulting estimates preserve the boundaries between the differently moving areas in an image sequence. A Kalman type estimator results, followed by a decision criterion for choosing the appropriate line process. Several experiments demonstrate the superior performance of the proposed algorithm with respect to prediction error, interpolation error, and robustness to noise.
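A 1-D scalar caricature of the "Kalman estimator plus submodel decision" structure: at each step the filter picks between a smooth-region and a motion-boundary process model by innovation likelihood, then runs a standard Kalman update. This is an illustrative simplification, not the paper's VCGM model or its detailed line process.

```python
import numpy as np

def switching_kalman(z, r=0.01, q_smooth=1e-4, q_edge=1.0):
    """Scalar Kalman recursion with a two-submodel 'line process':
    choose the process-noise variance (smooth region vs. boundary)
    whose innovation likelihood is higher, then update as usual."""
    x, p = z[0], 1.0
    out = []
    for zk in z:
        nu = zk - x                           # innovation
        best = None
        for q in (q_smooth, q_edge):
            s = p + q + r                     # innovation variance
            loglik = -0.5 * (np.log(2 * np.pi * s) + nu * nu / s)
            if best is None or loglik > best[0]:
                best = (loglik, q)
        p_pred = p + best[1]                  # predict with chosen submodel
        k = p_pred / (p_pred + r)             # Kalman gain
        x = x + k * nu                        # correct
        p = (1.0 - k) * p_pred
        out.append(x)
    return np.array(out)
```

Inside a smooth region the small-q submodel wins and the filter averages heavily; at a displacement discontinuity the large innovation makes the boundary submodel win, the gain jumps toward one, and the estimate snaps to the new value instead of blurring across it, which is the boundary-preserving behavior the abstract emphasizes.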
Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)