71.
We introduce a new methodology for signal-to-noise ratio (SNR) video scalability based on the partitioning of the DCT coefficients. The DCT coefficients of the displaced frame difference (DFD) for inter-blocks, or of the intensity for intra-blocks, are partitioned into a base layer and one or more enhancement layers, thus producing an embedded bitstream. Subsets of this bitstream can be transmitted with increasing video quality, as measured by the SNR. Given a bit budget for the base and enhancement layers, the partitioning of the DCT coefficients is done in a way that is optimal in the operational rate-distortion sense. The optimization is performed using Lagrangian relaxation and dynamic programming (DP). Experimental results are presented and conclusions are drawn.
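A minimal sketch of the operational rate-distortion idea behind such a partitioning: for one block and a fixed Lagrange multiplier λ, choose the base/enhancement breakpoint that minimizes D + λR. The coefficient values and per-coefficient rate model below are invented for illustration; the paper's actual method optimizes across dependent partitions with dynamic programming, which this toy omits.

```python
def choose_breakpoint(coeffs, bits_per_coeff, lam):
    """Pick the base-layer breakpoint k minimizing D + lam * R, where D is
    the energy of the coefficients deferred to the enhancement layer and
    R is the base-layer rate (toy fixed-rate model)."""
    best_k, best_cost = 0, float("inf")
    for k in range(len(coeffs) + 1):
        distortion = sum(c * c for c in coeffs[k:])  # energy not in base layer
        rate = k * bits_per_coeff
        cost = distortion + lam * rate
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Example: zig-zag-ordered DCT magnitudes of one block (made-up values).
coeffs = [50.0, 20.0, 10.0, 4.0, 2.0, 1.0]
k_cheap_rate = choose_breakpoint(coeffs, bits_per_coeff=8, lam=0.01)
k_costly_rate = choose_breakpoint(coeffs, bits_per_coeff=8, lam=100.0)
```

Sweeping λ traces out the operational rate-distortion curve: a small λ keeps all coefficients in the base layer, while a large λ pushes all but the most energetic coefficients into the enhancement layers.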
72.
Every user of multimedia technology expects good image and video visual quality regardless of the particular characteristics of the receiver or the communication network employed. Unfortunately, due to factors like processing power limitations and channel capabilities, images or video sequences are often downsampled and/or transmitted or stored at low bitrates, resulting in a degradation of their final visual quality. In this paper, we propose a region-based framework for intentionally downsampling the high resolution (HR) image sequences before compression and then utilizing super resolution (SR) techniques for generating an HR video sequence at the decoder. Segmentation is performed at the encoder on groups of images to classify their blocks into three different types according to their motion and texture. The obtained segmentation is used to define the downsampling process at the encoder, and it is encoded and provided to the decoder as side information in order to guide the SR process. All the components of the proposed framework are analyzed in detail. A particular implementation is described and tested experimentally. The experimental results validate the usefulness of the proposed method.
73.
Transmitting video over wireless channels from mobile devices has gained increased popularity in a wide range of applications. A major obstacle in these types of applications is the limited energy supply in mobile device batteries. For this reason, efficiently utilizing energy is a critical issue in designing wireless video communication systems. This article highlights recent advances in joint source coding and optimal energy allocation. We present a general framework that takes into account multiple factors, including source coding, channel resource allocation, and error concealment, for the design of energy-efficient wireless video communication systems. This framework can take various forms and be applied to achieve the optimal trade-off between energy consumption and video delivery quality during wireless video transmission.
74.
Image and video coding algorithms have found a number of applications ranging from video telephony on the public switched telephone networks (PSTN) to HDTV. However, as the bit rate is lowered, most of the existing techniques, as well as current standards such as JPEG, H.261, and MPEG-1, produce highly visible degradations in the reconstructed images, primarily due to the information loss caused by the quantization process. In this paper, we propose an iterative technique to reduce unwanted degradations, such as blocking and mosquito artifacts, while keeping the necessary detail present in the original image. The proposed technique makes use of a priori information about the original image through a nonstationary Gauss-Markov model. Utilizing this model, a maximum a posteriori (MAP) estimate is obtained iteratively using mean field annealing. The fidelity to the data is preserved by projecting the image onto a constraint set defined by the quantizer at each iteration. The proposed solution represents an implementation of a paradigm we advocate, according to which the decoder is not simply undoing the operations performed by the encoder, but instead solves an estimation problem based on the available bitstream and any prior knowledge about the source image. The performance of the proposed algorithm was tested on a JPEG codec, as well as on an H.261-type video codec. It is shown to be effective in removing the coding artifacts present in low bit rate compression.
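The per-iteration projection onto the quantizer constraint set can be sketched as follows: each coefficient of the current estimate is clipped back into the quantization interval that produced the decoded coefficient, so the restored image remains consistent with the received bitstream. A uniform quantizer and flattened coefficient lists are assumed here for illustration; the paper operates on DCT blocks within the full MAP iteration.

```python
def project_onto_quantizer_set(estimate, decoded, step):
    """Clip each coefficient of the current estimate into the quantization
    interval [d - step/2, d + step/2] around its decoded value d, keeping
    the estimate consistent with what the quantizer could have produced."""
    projected = []
    for e, d in zip(estimate, decoded):
        lo, hi = d - step / 2.0, d + step / 2.0
        projected.append(min(max(e, lo), hi))
    return projected

decoded = [8.0, 0.0, -4.0]   # dequantized coefficients (step size 4)
estimate = [9.5, 3.0, -4.1]  # estimate after one smoothing iteration
result = project_onto_quantizer_set(estimate, decoded, step=4.0)
```

Coefficients already inside their interval pass through unchanged; only those pushed outside by the smoothing step are pulled back to the interval boundary.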
75.
An adaptive regularized recursive displacement estimation algorithm is presented. An estimate of the displacement vector field (DVF) is obtained by minimizing the linearized displaced frame difference (DFD) using ν subsets (submasks) of a set of points that belong to a causal neighborhood (mask) around the working point. Assuming that the displacement vector is constant at all points inside the mask, ν systems of equations are formed based on the corresponding submasks. A set theoretic regularization approach is followed for solving this system of equations by using information about the noise and the solution. An expression for the variance of the linearization error is derived in quantifying the information about the noise. Prior information about the solution is incorporated into the algorithm using a causal oriented smoothness constraint (OSC), which also provides a spatially adaptive prediction model for the estimation of the DVF. It is shown that certain existing regularized recursive algorithms are special cases of the proposed algorithm if a single mask is considered. Based on experiments with typical videoconferencing scenes, the improved performance of the proposed algorithm with respect to accuracy, robustness to occlusion, and smoothness of the estimated DVF is demonstrated.
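The core recursion behind DFD-based displacement estimation can be illustrated with a toy one-dimensional, unregularized update: descend on the squared DFD at a single pixel. The intensity profile, pixel location, and step size below are all invented; the paper's algorithm works on images, uses submasks, and adds the set theoretic regularization and oriented smoothness constraint that this sketch omits.

```python
def f(x):
    """Stand-in smooth 1-D intensity profile of the previous frame."""
    return x * x

D_TRUE = 1.5  # ground-truth displacement used to synthesize the current frame

def dfd(x, d):
    """Displaced frame difference: current frame minus the previous frame
    motion-compensated with displacement estimate d."""
    current = f(x - D_TRUE)
    return current - f(x - d)

def intensity_gradient(x, d, h=1e-4):
    """Spatial gradient of the previous frame at the displaced position."""
    return (f(x - d + h) - f(x - d - h)) / (2.0 * h)

# Gradient descent on dfd(x, d)**2 at one pixel: d <- d - eps * DFD * dDFD/dd.
d = 0.0
for _ in range(100):
    d -= 0.005 * dfd(5.0, d) * intensity_gradient(5.0, d)
```

With a well-chosen step size the estimate converges toward the true displacement; the regularization discussed in the paper is what keeps such updates stable when the data term alone is noisy or ambiguous.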
77.
Bayesian resolution enhancement of compressed video
Super-resolution algorithms recover high-frequency information from a sequence of low-resolution observations. In this paper, we consider the impact of video compression on the super-resolution task. Hybrid motion-compensation and transform coding schemes are the focus, as these methods provide observations of the underlying displacement values as well as a variable noise process. We utilize the Bayesian framework to incorporate this information and fuse the super-resolution and post-processing problems. A tractable solution is defined, and relationships between algorithm parameters and information in the compressed bitstream are established. The association between resolution recovery and compression ratio is also explored. Simulations illustrate the performance of the procedure with both synthetic and nonsynthetic sequences.
78.
The performance of an automatic facial expression recognition system can be significantly improved by modeling the reliability of different streams of facial expression information utilizing multistream hidden Markov models (HMMs). In this paper, we present an automatic multistream HMM facial expression recognition system and analyze its performance. The proposed system utilizes facial animation parameters (FAPs), supported by the MPEG-4 standard, as features for facial expression classification. Specifically, the FAPs describing the movement of the outer-lip contours and eyebrows are used as observations. Experiments are first performed employing single-stream HMMs under several different scenarios, utilizing outer-lip and eyebrow FAPs individually and jointly. A multistream HMM approach is proposed for introducing facial expression and FAP group dependent stream reliability weights. The stream weights are determined based on the facial expression recognition results obtained when FAP streams are utilized individually. The proposed multistream HMM facial expression system, which utilizes stream reliability weights, achieves a 44% relative reduction in facial expression recognition error compared to the single-stream HMM system.
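The stream-weighting idea can be sketched in a few lines: per-stream HMM log-likelihoods are combined as a weighted sum in the log domain, and the class with the highest combined score wins. The expression classes, log-likelihood values, and weights below are hypothetical; in the paper the weights are derived from single-stream recognition results per expression and FAP group.

```python
def combined_log_likelihood(stream_loglikes, weights):
    """Weighted sum of per-stream HMM log-likelihoods (multistream decoding)."""
    return sum(w * ll for w, ll in zip(weights, stream_loglikes))

def classify(per_class_stream_loglikes, weights):
    """Return the class whose weighted multistream score is highest."""
    scores = {c: combined_log_likelihood(lls, weights)
              for c, lls in per_class_stream_loglikes.items()}
    return max(scores, key=scores.get)

# Hypothetical log-likelihoods per class: (outer-lip stream, eyebrow stream).
loglikes = {"joy": (-10.0, -30.0), "surprise": (-25.0, -12.0)}
# Weighting the lip stream more heavily favors the lip evidence.
decision = classify(loglikes, weights=(0.7, 0.3))
```

Flipping the weights toward the eyebrow stream flips the decision in this toy example, which is exactly why reliability-dependent weights matter: the more trustworthy stream should dominate the combined score.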
79.
We propose an iterative algorithm for enhancing the resolution of monochrome and color image sequences. Various approaches toward motion estimation are investigated and compared. Improving the spatial resolution of an image sequence critically depends upon the accuracy of the motion estimator. The problem is complicated by the fact that the motion field is prone to significant errors since the original high-resolution images are not available. Improved motion estimates may be obtained by using a more robust and accurate motion estimator, such as a pel-recursive scheme instead of block matching. In processing color image sequences, there is the added advantage of having more flexibility in how the final motion estimates are obtained, and further improvement in the accuracy of the motion field is therefore possible. This is because there are three different intensity fields (channels) conveying the same motion information. In this paper, the choice of which motion estimator to use versus how the final estimates are obtained is weighed to see which issue is more critical in improving the estimated high-resolution sequences. Toward this end, an iterative algorithm is proposed, and two sets of experiments are presented. First, several different experiments using the same motion estimator but three different data fusion approaches to merge the individual motion fields were performed. Second, estimated high-resolution images using the block matching estimator were compared to those obtained by employing a pel-recursive scheme. Experiments were performed on a real color image sequence, and performance was measured by the peak signal-to-noise ratio (PSNR).
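For reference, the PSNR metric used to evaluate the estimated high-resolution sequences is computed from the mean squared error against the original image. The pixel values below are made up; images are flattened to plain lists to keep the sketch dependency-free.

```python
import math

def psnr(original, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equally sized images
    (given here as flattened pixel lists)."""
    mse = sum((o - e) ** 2 for o, e in zip(original, estimate)) / len(original)
    if mse == 0:
        return float("inf")  # identical images: unbounded PSNR
    return 10.0 * math.log10(peak * peak / mse)

value = psnr([100, 120, 130, 140], [101, 119, 131, 141])  # MSE = 1
```

With an MSE of 1 and 8-bit peak value 255, this evaluates to about 48.13 dB; higher PSNR indicates a reconstruction closer to the original.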
80.
In this paper, we review a general framework for the optimal bit allocation among dependent quantizers based on the minimum maximum (MINMAX) distortion criterion. The pros and cons of this optimization criterion are discussed and compared to the well-known Lagrange multiplier method for the minimum average (MINAVE) distortion criterion. We argue that, in many applications, the MINMAX criterion is more appropriate than the more popular MINAVE criterion. We discuss the algorithms for solving the optimal bit allocation problem among dependent quantizers for both criteria and highlight the similarities and differences. We point out that any problem which can be solved with the MINAVE criterion can also be solved with the MINMAX criterion, since both approaches are based on the same assumptions. We discuss uniqueness of the MINMAX solution and the way both criteria can be applied simultaneously within the same optimization framework. Furthermore, we show how the discussed MINMAX approach can be directly extended to result in the lexicographically optimal solution. Finally, we apply the discussed MINMAX solution methods to still image compression, intermode frame compression of H.263, and shape coding applications.
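The difference between the two criteria can be seen on a tiny two-source example: under a shared rate budget, MINAVE minimizes the average distortion while MINMAX minimizes the worst-case distortion. The (rate, distortion) operating points below are invented, the sources are treated as independent, and the search is exhaustive; the paper's algorithms handle dependent quantizers efficiently.

```python
from itertools import product

# Hypothetical per-source (rate, distortion) operating points.
rd_options = [
    [(1, 9.0), (2, 4.0), (3, 1.0)],   # source A
    [(1, 6.0), (2, 5.0), (3, 2.0)],   # source B
]
BUDGET = 4  # total rate budget shared by both sources

def allocate(criterion):
    """Exhaustively pick one operating point per source within the budget,
    minimizing either the average (MINAVE) or the maximum (MINMAX)
    distortion across sources."""
    best, best_cost = None, float("inf")
    for choice in product(*rd_options):
        if sum(r for r, _ in choice) > BUDGET:
            continue
        dists = [d for _, d in choice]
        cost = sum(dists) / len(dists) if criterion == "minave" else max(dists)
        if cost < best_cost:
            best, best_cost = choice, cost
    return best, best_cost

minave_choice, minave_cost = allocate("minave")
minmax_choice, minmax_cost = allocate("minmax")
```

In this example MINAVE spends most of the budget on source A, achieving a lower average at the cost of leaving source B with the largest distortion, while MINMAX balances the two sources so that neither is disproportionately degraded, which is the behavior the paper argues for in many applications.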