Search results: 17 articles.
1.
A reliable speech presence probability (SPP) estimator is important to many frequency-domain speech enhancement algorithms. It is known that a good estimate of the SPP can be obtained from a smooth a-posteriori signal-to-noise ratio (SNR) function, which can be achieved by reducing the noise variance when estimating the speech power spectrum. Recently, the wavelet denoising with multitaper spectrum (MTS) estimation technique was suggested for this purpose. However, traditional approaches directly apply the wavelet shrinkage denoiser, which has not been fully optimized for denoising the MTS of noisy speech signals. In this paper, we propose a two-stage wavelet denoising algorithm for estimating the speech power spectrum. First, we apply the wavelet transform to the periodogram of a noisy speech signal. Using the resulting wavelet coefficients, an oracle is developed to indicate the approximate locations of the noise floor in the periodogram. Second, we use the oracle developed in stage 1 to selectively remove the wavelet coefficients of the noise floor in the log MTS of the noisy speech. The remaining wavelet coefficients are then used to reconstruct a denoised MTS and in turn generate a smooth a-posteriori SNR function. To adapt to the enhanced a-posteriori SNR function, we further propose a new method to estimate the generalized likelihood ratio (GLR), which is an essential parameter for SPP estimation. Simulation results show that the new SPP estimator outperforms traditional approaches and improves both the quality and intelligibility of the enhanced speech.
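The GLR-based SPP computation described above can be illustrated under the standard Gaussian statistical model, where the likelihood ratio is formed from the a-priori SNR ξ and the a-posteriori SNR γ. This is a textbook sketch, not the paper's specific estimator; the speech-absence prior `q` is an illustrative choice:

```python
import math

def speech_presence_prob(gamma, xi, q=0.5):
    """SPP under the standard Gaussian statistical model.

    gamma: a-posteriori SNR, xi: a-priori SNR, q: a-priori
    speech-absence probability (illustrative assumption).
    """
    v = gamma * xi / (1.0 + xi)
    # Generalized likelihood ratio weighted by the prior odds.
    glr = ((1.0 - q) / q) * math.exp(v) / (1.0 + xi)
    return glr / (1.0 + glr)
```

Because the SPP is a monotone function of γ, a smoother a-posteriori SNR track directly yields a smoother SPP track, which is why denoising the spectrum estimate matters.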
2.
In overlay networks, the network characteristics before and after a vertical handoff can be drastically different. Consequently, in this paper, we propose an end-to-end scheme to support protocol and application adaptation in vertical handoffs. First, we propose a Vertical-handoff Aware TCP, called VA-TCP. VA-TCP can identify the packet losses caused by vertical handoffs. If segment losses are due to vertical handoffs, VA-TCP retransmits only the missing segments without invoking the congestion control procedure. Moreover, VA-TCP dynamically estimates the bandwidth and round-trip time in the new network, and adjusts its parameters accordingly to respond to the new network environment. Second, during a vertical handoff, applications also need to adapt accordingly. Therefore, we design a programming interface that allows applications to be notified of, and adapt to, changing network environments. To support our interface, we utilize the signal mechanism to achieve kernel-to-user notification. Since signals cannot carry information, we implement a shared memory mechanism between applications and the kernel to facilitate parameter exchange. Finally, we also provide a handoff-aware CPU scheduler so that tasks interested in the vertical-handoff event are given preference over other processes, attaining a prompt response to new network conditions. We have implemented a prototype system on the Linux kernel 2.6. The experimental results show that our proposed protocol and application adaptation mechanisms effectively improve the performance of TCP and applications during vertical handoffs. Copyright © 2008 John Wiley & Sons, Ltd.
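The loss-handling policy sketched in the abstract can be written as a small decision function. The names and the bandwidth-delay-product re-seeding below are illustrative assumptions; the abstract only states that handoff losses are retransmitted without congestion control and that TCP parameters are re-estimated for the new network:

```python
def on_loss(missing_segments, handoff_in_progress, est_bw, est_rtt, mss):
    """Sketch of handoff-aware loss handling (names are illustrative).

    Returns (segments to retransmit, new cwnd in segments or None,
    whether to invoke standard congestion control).
    """
    if handoff_in_progress:
        # Losses stem from the handoff, not congestion: resend only the
        # missing segments and re-seed cwnd from the estimated
        # bandwidth-delay product of the new network.
        cwnd = max(1, int(est_bw * est_rtt / mss))
        return list(missing_segments), cwnd, False
    # Otherwise defer to normal congestion control.
    return list(missing_segments), None, True
```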
3.
In today's World Wide Web topology, proxy servers are commonplace. They reduce network traffic by eliminating the repeated transfer of identical content. However, traditional proxy servers do not support multimedia streaming. One reason is that the general scheduling strategy adopted by most traditional proxy servers does not provide real-time support for multimedia services. Based on the concept of contractual scheduling, we have developed a proxy server that supports real-time multimedia applications. Moreover, we have developed a group scheduling mechanism that enables the transfer of processing power between tasks, something that can hardly be achieved by traditional schedulers. These mechanisms yield substantially improved performance, particularly when both time-constrained and non-time-constrained processes coexist within the proxy server. In this paper, the design and implementation of this proxy server and the proposed scheduler are detailed.

Wai-Kong Cheuk received the B.Eng. (Hons.) and M.Phil. degrees in 1996 and 2001, respectively, from the Hong Kong Polytechnic University, where he is currently pursuing the Ph.D. degree. His main research interests include distributed operating systems and video streaming. Tai-Chiu Hsung (M'93) received the B.Eng. (Hons.) and Ph.D. degrees in electronic and information engineering in 1993 and 1998, respectively, from the Hong Kong Polytechnic University, Hong Kong. In 1999, he joined the Hong Kong Polytechnic University as a Research Fellow. His research interests include wavelet theory and applications, tomography, and fast algorithms. Dr. Hsung is also a member of the IEE. Daniel Pak-Kong Lun (M'91) received his B.Sc. (Hons.) degree from the University of Essex, Essex, U.K., and the Ph.D. degree from the Hong Kong Polytechnic University, Hung Hom, Hong Kong, in 1988 and 1991, respectively.
He is currently an Associate Professor and the Associate Head of the Department of Electronic and Information Engineering, the Hong Kong Polytechnic University. His research interests include digital signal processing, wavelets, multimedia technology, and Internet technology. Dr. Lun was the Secretary, Treasurer, Vice-Chairman, and Chairman of the IEEE Hong Kong Chapter of Signal Processing in 1994, 1995–1996, 1997–1998, and 1999–2000, respectively. He was the Finance Chair of the 2003 IEEE International Conference on Acoustics, Speech and Signal Processing, held in Hong Kong in April 2003. He is a Chartered Engineer and a Corporate Member of the IEE.
4.
This paper describes the performance of the MPEG-4 still texture image codec in coding noisy images. As will be shown, when using the MPEG-4 still texture image codec to compress a noisy image, increasing the compression rate does not necessarily reduce the peak signal-to-noise ratio (PSNR) of the decoded image. An optimal operating point with the highest PSNR can be found within the low bit rate region. Nevertheless, the visual quality of the decoded noisy image at this optimal operating point is greatly degraded by the so-called "cross" shape artifact. In this paper, we analyze the reasons for the existence of the optimal operating point and the "cross" shape artifact when using the MPEG-4 still texture image codec to compress noisy images. We then propose an adaptive thresholding technique to remove the "cross" shape artifact from the decoded images. It requires only a slight modification to the quantization process of the traditional MPEG-4 encoder, while the decoder remains unchanged. Finally, an analytical study is performed for the selection and validation of the threshold value used in the adaptive thresholding technique. It is shown that the visual quality and PSNR of the decoded images are much improved by the proposed technique compared with the traditional MPEG-4 still texture image codec in coding noisy images.
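The key idea, an encoder-only change in which coefficients below a noise-derived threshold are zeroed before uniform quantization so a standard decoder still works, can be sketched as follows (the function, threshold, and step values are illustrative, not the paper's):

```python
def adaptive_threshold_quantize(coeffs, q_step, noise_thresh):
    """Zero transform coefficients whose magnitude falls below a
    noise-derived threshold, then quantize uniformly. Only the
    encoder's quantization step changes; an unmodified decoder
    dequantizes the result as usual."""
    return [0 if abs(c) < noise_thresh else round(c / q_step) for c in coeffs]
```

Because the small coefficients over the noise floor are dropped rather than coded, no bits are spent reproducing noise, which is where the "cross" shape artifact originates.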
5.
We introduce a deblocking algorithm for Joint Photographic Experts Group (JPEG) decoded images using the wavelet transform modulus maxima (WTMM) representation. Under the WTMM representation, we can characterize the blocking effect of a JPEG decoded image as: (1) small modulus maxima at block boundaries over smooth regions; (2) noise or irregular structures near strong edges; and (3) corrupted edges across block boundaries. The WTMM representation not only characterizes the blocking effect, but also enables simple and local operations to reduce its adverse effects. The proposed algorithm first performs a segmentation on the JPEG decoded image to identify the texture regions, noting that their WTMM have small variation in regularity. We do not process the modulus maxima of these regions, to avoid the image texture being "oversmoothed" by the algorithm. Then, the singularities in the remaining regions of the blocky image and the small modulus maxima at block boundaries are removed. We link up the corrupted edges, and regularize the phase of the modulus maxima as well as the magnitude of strong edges. Finally, the image is reconstructed by applying the projection onto convex sets (POCS) technique to the processed WTMM of the JPEG decoded image. This simple algorithm improves the quality of a JPEG decoded image in terms of both signal-to-noise ratio (SNR) and visual quality. We also compare the performance of our algorithm with previous approaches, such as the CLS and POCS methods. The most remarkable advantage of the WTMM deblocking algorithm is that we can directly process the edges and texture of an image through its WTMM representation.
6.
Prefilters are generally applied to the discrete multiwavelet transform (DMWT) for processing scalar signals. To fully utilize the benefits offered by the DMWT, it is important to design the prefilter appropriately so as to preserve the important properties of multiwavelets. To this end, we have recently shown that it is possible to design the prefilter to be maximally decimated, yet preserve the linear phase and orthogonality properties as well as the approximation power of multiwavelets. However, such a design requires the point of symmetry of each channel of the prefilter to match the scaling functions of the target multiwavelet system. It can be very difficult to find a compatible filter bank structure, and in some cases such a structure simply does not exist, e.g., for multiwavelets of multiplicity 2. In this paper, we suggest a new DMWT structure in which the prefilter is combined with the first stage of the DMWT. The advantage of the new structure is twofold. First, since the prefiltering stage is embedded into the DMWT, the computational complexity can be greatly reduced. Experimental results show that a saving of over 20% in arithmetic operations can be achieved compared with traditional DMWT realizations. Second, the new structure provides additional design freedom that allows the resulting prefilters to be maximally decimated, orthogonal, and symmetric even for multiwavelets of low multiplicity. We evaluated the new DMWT structure in terms of computational complexity, energy compaction ratio, and compression performance when applied to a VQ-based image coding system. Satisfactory results are obtained in all cases compared with the traditional approaches.
7.
Denoising by singularity detection
A new algorithm for noise reduction using the wavelet transform is proposed. Similar to Mallat's (1992) wavelet transform modulus maxima denoising approach, we estimate the regularity of a signal from the evolution of its wavelet transform coefficients across scales. However, we do not perform maxima detection and processing; therefore, complicated reconstruction is avoided. Instead, the local regularities of a signal are estimated by computing the sum of the modulus of its wavelet coefficients inside the corresponding "cone of influence", and the coefficients that correspond to the regular part of the signal are selected for reconstruction. The algorithm gives an improved denoising result, compared with previous approaches, in terms of mean squared error and visual quality. The new denoising algorithm is also invariant to translation. It does not introduce spurious oscillations and requires very little a priori information about the signal or noise. In addition, we extend the method to two dimensions to estimate the regularity of an image by computing the sum of the modulus of its wavelet coefficients inside the so-called "directional cone of influence". The denoising technique is applied to tomographic image reconstruction, where the improved performance of the new approach can clearly be observed.
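The principle behind this kind of regularity-based selection can be approximated with a simple interscale comparison on an undecimated Haar transform: for a regular signal feature, the wavelet modulus persists or grows toward coarser scales, whereas noise decays. This is a deliberately simplified sketch of the idea, not the paper's exact cone-of-influence modulus-sum rule:

```python
def undecimated_haar(signal, levels=3):
    """Undecimated (a-trous-style) Haar detail coefficients per scale."""
    details, approx = [], list(signal)
    n = len(signal)
    for j in range(levels):
        step = 2 ** j
        details.append([(approx[i] - approx[(i + step) % n]) / 2.0 for i in range(n)])
        approx = [(approx[i] + approx[(i + step) % n]) / 2.0 for i in range(n)]
    return details

def regularity_mask(details, factor=1.0):
    """Keep positions where the modulus persists or grows toward the
    coarsest scale (regular signal feature); pure noise decays instead."""
    n = len(details[0])
    return [abs(details[-1][i]) >= factor * abs(details[0][i]) for i in range(n)]
```

Coefficients flagged by the mask would then be kept for reconstruction and the rest suppressed, mirroring the select-then-reconstruct flow of the abstract.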
8.
Kiely, J.D., Houston, J.E., Mulder, J.A., Hsung, R.P., & Zhu, X.-Y. Tribology Letters, 1999, 7(2-3): 103–107.
Using interfacial force microscopy (IFM), we investigated the tribological behavior of hexadecanethiol monolayers on Au and films of octadecyltrichlorosilane (ODTS), perfluorodecyltrichlorosilane (PFTS) and dodecane on Si. We observe a strong correlation between hysteresis in a compression cycle (measured via nanoindentation) and friction. Additionally, we suggest that the amount of hysteresis and friction in each film is related to its detailed molecular structure, especially the degree of molecular packing. This revised version was published online in September 2006 with corrections to the Cover Date.
9.
Recently, hybrid disk drives, which integrate a small amount of flash memory within a mechanical drive, have received significant attention. The hybrid drive extends the storage hierarchy by using flash memory to cache data from the mechanical disk. Unfortunately, current caching architectures fail to fully exploit the potential of the hybrid drive. Furthermore, current disk input/output (I/O) schedulers are optimized for rotational mechanical disk drives and thus must be re-targeted for the hybrid disk drive. In this paper, we propose a new data caching scheme, called Profit Caching, for hybrid drives. Profit Caching is a self-optimizing caching algorithm. It considers and seamlessly integrates the data characteristics that impact the performance of hybrid drives, including read count, write count, sequentiality, randomness, and recency, to determine the caching policy. Moreover, we propose a hybrid disk-aware Completely Fair Queuing (HA-CFQ) scheduler to avoid unnecessary I/O anticipations of the CFQ scheduler. We have implemented Profit Caching and the HA-CFQ scheduler in the Linux kernel. Coupled with a trace-driven simulator, we have also conducted detailed experiments under a variety of workloads. Experimental results show that Profit Caching provides significantly improved performance compared with previous schemes. In particular, the throughput of Profit Caching outperforms the previous Random Access First and FlashCache caching schemes by factors of up to 1.8 and 7.6, respectively. In addition, the HA-CFQ scheduler reduces the total execution time of the CFQ scheduler by up to 1.74%. Finally, the experimental results show that the runtime overhead of Profit Caching is negligible. Copyright © 2014 John Wiley & Sons, Ltd.
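A profit-style caching policy over the block statistics the abstract lists can be sketched as a weighted score; the weights and the exact combination below are assumptions for illustration, not the paper's tuned policy:

```python
def profit_score(stats, now, w_read=1.0, w_write=1.5, w_seq=-0.5, w_recency=2.0):
    """Weighted 'profit' of caching a block on flash. Random, recent,
    and write-heavy blocks score high; long sequential runs are served
    well by the mechanical disk and score low. Weights are illustrative."""
    recency = 1.0 / (1.0 + (now - stats['last_access']))
    return (w_read * stats['reads'] + w_write * stats['writes']
            + w_seq * stats['seq_len'] + w_recency * recency)

def evict_victim(cache, now):
    """Evict the cached block whose profit score is lowest."""
    return min(cache, key=lambda blk: profit_score(cache[blk], now))
```

Under such a score, a block belonging to a long sequential run loses its cache slot before an otherwise-identical random-access block, matching the intuition that sequential I/O gains little from flash.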
10.
Optimizing the multiwavelet shrinkage denoising
Denoising methods based on wavelet-domain thresholding or shrinkage have been found to be effective. Recent studies reveal that multivariate shrinkage of multiwavelet transform coefficients further improves on traditional wavelet methods. This is because the multiwavelet transform, with appropriate initialization, provides a better representation of signals, so that they can be clearly distinguished from noise. We consider multiwavelet denoising using a multivariate shrinkage function. We first suggest a simple second-order orthogonal prefilter design method for applying multiwavelets of higher multiplicity. We then study the corresponding threshold selection using Stein's unbiased risk estimator (SURE) for each resolution level, provided that the noise structure is known. Simulation results show that higher-multiplicity wavelets usually give better denoising results, and that the proposed threshold estimator provides a good indication of the optimal thresholds.
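The per-level SURE threshold selection can be sketched directly from Stein's unbiased risk estimate for scalar soft thresholding, assuming unit-variance Gaussian noise on the coefficients; the multivariate multiwavelet version in the paper generalizes this scalar recipe:

```python
def sure_risk(coeffs, t):
    """Stein's unbiased risk estimate of soft thresholding at level t:
    SURE(t) = n - 2*#{|x| <= t} + sum(min(|x|, t)^2),
    assuming unit-variance Gaussian noise on the coefficients."""
    risk = float(len(coeffs))
    for x in coeffs:
        a = abs(x)
        risk += min(a, t) ** 2 - (2.0 if a <= t else 0.0)
    return risk

def sure_threshold(coeffs):
    """SureShrink-style per-level threshold: the coefficient magnitude
    (or zero) that minimizes the SURE risk."""
    candidates = [0.0] + sorted(abs(x) for x in coeffs)
    return min(candidates, key=lambda t: sure_risk(coeffs, t))
```

On a level whose coefficients are mostly small noise plus a few large signal values, the minimizer lands just above the noise magnitudes, which is exactly the behavior the abstract's "good indication for optimal thresholds" refers to.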