20 related references found
1.
We discuss the real-time classification of high-speed (⩾4800 b/s) voiceband data signals using blind or self-recovering equalization techniques. After equalization, the signals are classified, by means of their constellation magnitude, using a new classification scheme which computes the probability that the unknown test signal belongs to each of several predetermined classes. Heuristic rules are applied to these probability functions to perform the actual signal detection. Even in the presence of added channel impairments, the use of this new approach resulted in an overall classification accuracy of greater than 98%, measured using 5100 test segments collected from 6 classes of signals produced by V.29, V.32, and V.33 modems. Classification times range from median values of 0.090 s and 0.219 s for the V.29 modem at 4800 and 7200 b/s, respectively, up to a worst-case time of 3.33 s for the 9600 b/s default class, chosen only if no other signal is detected within a prescribed amount of time.
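As a rough illustration of the magnitude-based classification idea (not the paper's actual algorithm), the sketch below scores the magnitudes of equalized symbols against per-class magnitude templates and converts the scores into class probabilities; the class names echo the abstract, but the templates, the Gaussian scatter model and the gain normalization are placeholder assumptions.

```python
import numpy as np

# Placeholder magnitude templates; the paper's actual templates, probability
# computation and heuristic detection rules are not given in the abstract.
CLASS_MAGNITUDES = {
    "V.29_7200": np.array([1.0, 3.0, 5.0]) / 5.0,
    "V.32_9600": np.array([np.sqrt(2), np.sqrt(10), np.sqrt(18)]) / np.sqrt(18),
}

def class_log_likelihood(mags, levels, sigma=0.08):
    # Score each observed magnitude against the nearest ideal level,
    # assuming Gaussian scatter of spread sigma after blind equalization.
    d = np.abs(mags[:, None] - levels[None, :])
    return float(np.sum(-np.min(d, axis=1) ** 2 / (2 * sigma ** 2)))

def classify(equalized_symbols):
    mags = np.abs(np.asarray(equalized_symbols))
    mags = mags / (np.median(mags) + 1e-12)          # coarse gain normalization
    logp = np.array([class_log_likelihood(mags, lv)
                     for lv in CLASS_MAGNITUDES.values()])
    p = np.exp(logp - logp.max())
    p /= p.sum()
    return dict(zip(CLASS_MAGNITUDES.keys(), p))     # per-class "probabilities"
```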
2.
The problem of distinguishing speech from voiceband data is treated. A discrimination function based on the sign of the autocorrelation at lag two of the incoming signal and the second-order moment of the complex low-pass signal is presented. The algorithm has been applied to many types of voiceband data signals and, within a window of 32 ms, correctly classifies them. For the limited number of speech signals available, the amount of misclassification of speech as voiceband data was observed to be about 1%.
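The two statistics named in the abstract are easy to compute; the sketch below does so over a 32 ms frame. The decision thresholds are illustrative guesses, since the abstract does not give the actual rule.

```python
import numpy as np
from scipy.signal import hilbert

def classify_frame(x, fs=8000):
    """Toy speech vs. voiceband-data discriminator over one 32 ms frame."""
    n = int(0.032 * fs)                          # 32 ms window
    frame = x[:n] - np.mean(x[:n])

    r2 = np.sum(frame[:-2] * frame[2:])          # autocorrelation at lag 2
    z = hilbert(frame)                           # complex (analytic) signal stand-in
    m2 = np.mean(np.abs(z) ** 2)                 # second-order moment
    env_var = np.var(np.abs(z)) / (m2 + 1e-12)   # envelope variability (assumed proxy)

    # Voiceband data tends to show a negative lag-2 correlation (energy
    # concentrated near the ~1.8-2 kHz carrier at fs = 8 kHz) and a steadier
    # envelope than speech; the 0.2 threshold is an assumption.
    if r2 < 0 and env_var < 0.2:
        return "voiceband data"
    return "speech"
```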
3.
IEEE Journal of Solid-State Circuits, 1979, 14(1): 74-80
Describes the design of an analog attenuator integrated circuit having loss settings that can be determined by remote digital control. The circuit uses a weighted MOS capacitor array to effect losses of 0-16.5 dB in steps of 0.1 dB. Each loss setting is accurate to ±0.02 dB. Two approaches to the circuit realization are described: a CMOS version that includes a digital memory to permit retention of loss settings for a few hours, thereby bridging brief power failures, and a more compact NMOS version that contains all the analog components on a single chip.
4.
Data communication has grown dramatically in the past two decades, both in technical sophistication and in usage. While channels and networks designed expressly for data use have emerged, the voiceband channels of the telephone network continue to be the major transmission medium; data sets, or modems, play a role analogous to that of the telephone in voice communication. Fundamental developments such as adaptive equalization along with bandwidth-conserving signal formats have allowed the modem to better match the characteristics of the analog channel, resulting in increased available throughput. The quest for further improvements, along with elegant implementations and attractive user features, provide a continuing challenge to communication engineers. When the need arose to communicate digital data at transmission rates substantially higher than telegraph speeds, first for defense about mid-century and then for industry in the late 1950's, all the ingredients required for successful implementation were at hand. Decades earlier, Nyquist had formulated the filtering or band shaping requirements to allow the independent transmission of a sequence of signal samples. Just prior to mid-century, Shannon had published his celebrated information theory, which showed engineers the maximum rate at which they could signal through a channel if only they were clever enough. The telephone network, especially in the United States, had reached a high state of development and widespread accessibility; it seemed like an ideal vehicle to carry the new data communication traffic.
5.
Modern voiceband modems often use a symbol rate that is not a factor of their data rate. Combined with the possible use of multidimensional trellis-coded modulation, this means that modems must transmit a fractional number of bits per symbol. To allow for this, bits must be mapped in blocks of symbols. This raises the issue of how to synchronise these blocks. The author gives a fast and reliable means to do this.
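One common way to carry a fractional number of bits per symbol is to treat a block of K symbols as one base-M number ("modulus conversion"). The sketch below illustrates that idea with assumed values M = 12 and K = 4; it does not reproduce the letter's block-synchronisation method.

```python
import math

M = 12                                           # constellation points (assumed)
K = 4                                            # symbols per mapping block (assumed)
BITS_PER_BLOCK = math.floor(K * math.log2(M))    # 14 bits -> 3.5 bits/symbol

def map_block(bits):
    """Map BITS_PER_BLOCK bits onto K symbol indices in [0, M)."""
    assert len(bits) == BITS_PER_BLOCK
    v = int("".join(str(b) for b in bits), 2)
    symbols = []
    for _ in range(K):                           # base-M digits, least significant first
        symbols.append(v % M)
        v //= M
    return symbols

def unmap_block(symbols):
    """Inverse mapping back to the original bit block."""
    v = 0
    for s in reversed(symbols):
        v = v * M + s
    return [int(c) for c in format(v, f"0{BITS_PER_BLOCK}b")]
```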
6.
The first stage of efficiently coding television signals--the nonreversible process of obtaining a discrete signal--is investigated. The process depends on the properties of the source and the receiver which in this case is the human sense of vision. Emphasis is given to the examination of the properties of the receiver and the selection of an appropriate criterion of performance. The criterion adopted is the probabilistic measure of viewer preference in a direct comparative judgment between the original and the coded-decoded version. For this criterion the precision with which picture components need be reproduced will depend primarily on the visual thresholds associated with the picture components. The "Optimum Decision" model of threshold vision is investigated using the criterion. As an example a practical encoder is discussed which is designed around the loss of sensitivity of the visual system adjacent to a change in luminance. High-quality pictures have been encoded having first-order entropies in the range 0.8 to 2.0 bits per picture element.
7.
To address the poor covertness of satellite communication signal transmission, an anti-interception signal waveform design method is proposed. A strong signal with cyclostationary properties is used to mask a weak signal without cyclostationary properties; the weak signal, which carries the important information, becomes a background signal of the strong one, achieving covert transmission of the satellite signal. The weak signal contains both normal service data and important data. Random code-hopping spread spectrum and random time hopping are used to merge the two, which hides the important data within the weak signal and destroys the weak signal's cyclostationarity, improving the security of the important data transmission. Simulation of the waveform shows that when the power ratio of the strong signal to the weak signal exceeds 7 dB, blind detection cannot reveal the presence of the weak signal, indicating that the designed waveform resists interception. Finally, simulation of the weak signal's demodulation performance shows that, while the anti-interception capability is improved, the transmission performance remains within an acceptable range.
8.
The letter describes an adaptive DPCM coding of composite SECAM signals, where the nature of the chrominance modulation is entirely different from that of PAL and NTSC. The prediction method is based on an algorithm that allows the predictor to adapt itself to the frequency variation of the subcarrier. The simulation results demonstrate that this adaptive system has a significant advantage in terms of signal/noise ratio over a nonadaptive system.
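For reference, the sketch below shows the generic DPCM loop into which such an adaptive predictor would plug. The SECAM-specific subcarrier-tracking predictor itself is not reproduced; the fixed previous-sample predictor and uniform quantizer are placeholders.

```python
import numpy as np

def dpcm_codec(x, predictor, quantize):
    """Generic DPCM loop: `predictor` is any callable operating on the
    already-reconstructed samples (e.g. one tracking the SECAM subcarrier)."""
    recon = np.zeros(len(x), dtype=float)
    codes = np.zeros(len(x), dtype=float)
    for n in range(len(x)):
        pred = predictor(recon[:n])
        e = x[n] - pred                # prediction error
        q = quantize(e)                # quantized error (the transmitted value)
        codes[n] = q
        recon[n] = pred + q            # decoder-side reconstruction
    return codes, recon

# Placeholder components: previous-sample prediction, uniform 1-step quantizer.
prev_sample = lambda r: r[-1] if len(r) else 0.0
uniform_q = lambda e, step=4.0: step * np.round(e / step)
```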
9.
Design of an anti-aliasing filter for a PCM speech codec system
PCM (pulse-code modulation) is a coding method that converts speech signals into digital signals. Because signal aliasing frequently occurs in audio systems, this paper presents an anti-aliasing filter for use in a PCM speech codec system. It is an active, third-order low-pass Butterworth filter.
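As a rough digital counterpart of such a filter (the paper's design is an analog active circuit), the sketch below designs a third-order Butterworth low-pass with SciPy, assuming typical telephony values of an 8 kHz PCM sampling rate and a 3.4 kHz passband edge, which are not figures quoted in the paper.

```python
import numpy as np
from scipy import signal

fs = 8000.0                  # assumed PCM sampling rate, Hz
fc = 3400.0                  # assumed passband edge, Hz

# Third-order Butterworth low-pass (digital stand-in for the analog stage).
b, a = signal.butter(3, fc, btype="low", fs=fs)

# Inspect the attenuation in-band, at the cutoff and near Nyquist.
w, h = signal.freqz(b, a, worN=[1000.0, 3400.0, 3900.0], fs=fs)
print(np.round(20 * np.log10(np.abs(h)), 2))   # ~0 dB, ~-3 dB, steeper roll-off
```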
10.
A technique based on bispectrum averaging is described for recovering the signal waveform from a set of noisy signals with variable signal delay. The technique requires neither explicit time alignment of the signals nor any initial estimate of the signal; it does not, however, yield estimates of the signal position. A comparison is made of two algorithms for recovering the Fourier amplitude and the Fourier phase from an averaged bispectrum: the recursive method and the least squares method. The methods are numerically investigated using computer-generated data and a physiological signal and noise, and the advantages and disadvantages of the different algorithms are discussed. Some experimental results for evoked potential studies that demonstrate the technique are given. The results show the effectiveness of the technique, and various potential applications can be expected.
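The core of the method can be sketched as below: a pure delay adds a linear phase that cancels in the bispectrum (which is why no time alignment is needed and why the signal position is lost), and amplitude and phase are then rebuilt recursively. This is a simplified illustration of the recursive route only, with no claim to match the paper's exact formulation; the least-squares variant is omitted.

```python
import numpy as np

def averaged_bispectrum(records):
    """Average B(k1, k2) = X(k1) X(k2) X*(k1 + k2) over noisy, delayed records."""
    N = len(records[0])
    B = np.zeros((N // 2, N // 2), dtype=complex)
    for x in records:
        X = np.fft.fft(x)
        for k1 in range(N // 2):
            for k2 in range(N // 2 - k1):
                B[k1, k2] += X[k1] * X[k2] * np.conj(X[k1 + k2])
    return B / len(records)

def recursive_recovery(B, N):
    """Recursive recovery of Fourier amplitude and phase from the bispectrum."""
    amp = np.zeros(N // 2)
    ph = np.zeros(N // 2)
    amp[0] = np.abs(B[0, 0]) ** (1.0 / 3.0)                  # |B(0,0)| = A(0)^3
    amp[1] = np.sqrt(np.abs(B[0, 1]) / max(amp[0], 1e-12))   # |B(0,1)| = A(0) A(1)^2
    ph[0] = 0.0
    ph[1] = 0.0                                              # arbitrary: unknown time origin
    for k in range(2, N // 2):
        # |B(1,k-1)| = A(1) A(k-1) A(k);  arg B(1,k-1) = ph(1)+ph(k-1)-ph(k)
        amp[k] = np.abs(B[1, k - 1]) / max(amp[1] * amp[k - 1], 1e-12)
        ph[k] = ph[1] + ph[k - 1] - np.angle(B[1, k - 1])
    return amp, ph
```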
11.
A computationally efficient, although suboptimal, tree encoding method for the pitch and voicing parameters of an LPC (linear predictive coding) vocoder is presented. It is shown that when pitch and voicing signals are combined, it becomes difficult to take advantage of linear predictors and at the same time avoid any errors in voicing. The modified pitch tree coder presented here solves this problem by incorporating a branch leading to a zero pitch value from each node of the tree. Only two bits per analysis frame are used to convey the combined pitch/voicing signal, with 1.585 bits being used to encode nonzero pitch values.
12.
Wassner J., Kaeslin H., Felber N., Fichtner W. IEEE Transactions on Signal Processing, 2003, 51(6): 1656-1661
This paper evaluates waveform coding techniques known from low bit-rate communication for their usefulness in low-power digital FIR filtering of speech signals. The encodings considered include linear PCM, PCM with adaptive and logarithmic quantization, and differential PCM, combined with two's-complement and sign-magnitude number representation. Selected implementation aspects for each alternative are discussed. Experimental results are presented to quantify potential power savings subject to statistical signal properties and operating conditions. Guidelines for the choice of encoding in application-specific digital signal processing of speech data are provided.
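Of the encodings listed, logarithmic quantization is the easiest to illustrate. The sketch below implements the textbook mu-law compander; the paper's own quantizer parameters are not given in the abstract, so the constants here are the standard ones rather than anything paper-specific.

```python
import numpy as np

MU = 255.0  # standard mu-law constant

def mulaw_encode(x, bits=8):
    """Logarithmic (mu-law) quantization of samples in [-1, 1]."""
    x = np.clip(x, -1.0, 1.0)
    y = np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)   # compress
    levels = 2 ** (bits - 1)
    return np.round(y * (levels - 1)).astype(int)              # signed integer codes

def mulaw_decode(codes, bits=8):
    """Inverse compander: expand codes back to sample estimates."""
    levels = 2 ** (bits - 1)
    y = codes.astype(float) / (levels - 1)
    return np.sign(y) * ((1 + MU) ** np.abs(y) - 1) / MU
```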
13.
14.
This paper discusses several approaches to the redundancy reduction of clinical electroencephalographic (EEG) data. The encoders presented here are basically of two types. The first type compresses EEG data with very little loss, through the compression/decompression process, of the visual or spectral information present in the original. The other type compresses EEG data with some loss of visual reconstruction quality (although still acceptable), but with the advantage of achieving high compression ratios and providing a convenient and natural way for subsequent automated EEG diagnosis. In deriving redundancy reduction encoders, the efficiency of several digital compression techniques has been compared by encoding EEG data. The general approach adopted, however, is not restricted to this class of data, but can be applied (with minor modifications) to other data of similar characteristics.
15.
Sofic systems and encoding data
IEEE Transactions on Information Theory, 1985, 31(3): 366-377
Techniques of symbolic dynamics are applied to prove the existence of codes suitable for certain input-restricted channels. This generalizes the earlier work of Adler, Coppersmith, and Hassner on the same problem.
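As a concrete illustration of an input-restricted channel (not an example taken from the paper), the achievable code rate for the "no two consecutive 1s" run-length constraint is bounded by the base-2 logarithm of the largest eigenvalue of its constraint-graph adjacency matrix:

```python
import numpy as np

# Two-state constraint graph: state 0 = last bit was 0, state 1 = last bit was 1.
# A 1 may only follow a 0, so state 1 has no self-loop.
A = np.array([[1, 1],
              [1, 0]], dtype=float)

capacity = np.log2(np.max(np.linalg.eigvals(A).real))
print(f"capacity ~ {capacity:.3f} bits/symbol")   # ~0.694

# Finite-state codes at any rational rate p/q below this capacity (e.g. 2/3
# for this constraint) are what the symbolic-dynamics construction guarantees.
```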
16.
IEEE Journal of Solid-State Circuits, 1986, 21(6): 964-970
A set of video signal processor VLSIs has been developed using 2-μm p-well CMOS technology. These VLSIs perform bidirectional transformation between NTSC (National Television System Committee) composite video signals and component video signals, both of which contain luminance signals and two kinds of color signals. A special circuit configuration is used for the line memory, and an automatic layout program for uniformly structured circuits is developed for the layout design of the digital filters. A design effort of only 12 man-months, which includes logic, circuit, and layout design, is required for each VLSI using CAD systems. These VLSIs have proved effective in reducing the cost of video signal transmission systems.
17.
Liu Shuangping, Wen Xiang, Jin Liang. Journal of Electronics (China), 2007, 24(5): 600-606
Many monographs point out that differential encoding and decoding are necessary for effective information transmission in the presence of phase ambiguity, but seldom discuss why phase ambiguity inevitably arises. Available algorithms are specially designed for particular modulation schemes; they cannot satisfy the requirements of software-defined radio, which may demand a uniform algorithm across different modulations. This paper proposes a new view of phase ambiguity from the standpoint of probability: at the optimum sampling epoch, the modulating symbol sequence can affect the modulated waveform just as the oscillating carrier does, and so the stochastic sequence leads to phase ambiguity. Based on a general signal model, the paper also puts forward a novel universal algorithm which, by configuring several parameters, is suitable for different signals, even new ones.
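A small numerical illustration of the differential-encoding argument the paper starts from: because information rides on symbol-to-symbol phase changes, a constant M-fold phase rotation cancels at the receiver. QPSK is assumed here purely as an example; the paper's universal algorithm is not reproduced.

```python
import numpy as np

M = 4                                            # QPSK, assumed for illustration
rng = np.random.default_rng(0)
data = rng.integers(0, M, 50)                    # phase-change indices to send

tx = np.cumsum(data) % M                         # differential encoding
tx_sym = np.exp(1j * 2 * np.pi * tx / M)         # PSK symbols

ambiguity = np.exp(1j * 2 * np.pi / M)           # carrier recovery locked 90 degrees off
rx_sym = tx_sym * ambiguity

rx = np.round(np.angle(rx_sym) * M / (2 * np.pi)).astype(int) % M
decoded = np.diff(np.concatenate(([rx[0]], rx))) % M   # differential decoding

# All but (possibly) the first, reference symbol survive the rotation intact.
assert np.all(decoded[1:] == data[1:])
```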
18.
Jia Li, Xiao Liu, Yubin Zhang, Yu Hu, Xiaowei Li, Qiang Xu. Integration, the VLSI Journal, 2011, 44(3): 205-216
Ever-increasing test data volume and excessive test power are two of the main concerns of VLSI testing. The "don't-care" bits (also known as X-bits) in a given test cube can be exploited for test data compression and/or test power reduction, and these techniques may conflict with each other because the very same X-bits are likely to be used for different optimization objectives. This paper proposes a capture-power-aware test compression scheme that is able to keep capture power under a safe limit with little loss in test compression ratio. Experimental results on benchmark circuits validate the effectiveness of the proposed solution.
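The sketch below is a much-simplified stand-in for X-bit filling: it shows adjacent fill, which assigns don't-care bits so as to reduce toggling along a scan chain. The paper's scheme additionally models capture power against a safe limit and interacts with the compression codec, neither of which is modelled here.

```python
def adjacent_fill(cube: str) -> str:
    """Fill 'X' bits in a test cube by repeating the previous care bit."""
    filled = []
    last = '0'                      # assumed fill value before the first care bit
    for bit in cube:
        if bit in '01':
            last = bit
            filled.append(bit)
        else:                       # 'X': repeat the previous care bit
            filled.append(last)
    return ''.join(filled)

def transitions(pattern: str) -> int:
    """Count bit flips along the pattern (a crude shift-power proxy)."""
    return sum(a != b for a, b in zip(pattern, pattern[1:]))

cube = "1XX0XXX1XX"
print(adjacent_fill(cube), transitions(adjacent_fill(cube)))   # '1110000111', 2
```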
19.
The authors show that the two-spiral problem can be easily solved using a standard back-propagation neural network by properly encoding the raw input data. The authors also examine and compare several data encoding schemes for use in neural networks to improve the training efficiency and generalisation property.
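The abstract does not spell out the encoding schemes compared, so the sketch below shows one plausible choice: expanding the raw (x, y) coordinates of the standard two-spiral benchmark into polar-coordinate features before feeding them to an ordinary back-propagation network.

```python
import numpy as np

def two_spirals(n=97):
    """Standard two-spiral benchmark: n points per arm, two interleaved arms."""
    i = np.arange(n)
    r = 6.5 * (104 - i) / 104
    a = i * np.pi / 16
    x1, y1 = r * np.sin(a), r * np.cos(a)
    X = np.vstack([np.column_stack([x1, y1]),
                   np.column_stack([-x1, -y1])])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def encode(X):
    """Polar-coordinate feature expansion (one plausible encoding, an assumption)."""
    x, y = X[:, 0], X[:, 1]
    r = np.hypot(x, y)
    th = np.arctan2(y, x)
    return np.column_stack([r, np.sin(th), np.cos(th), r * np.sin(th), r * np.cos(th)])

X, y = two_spirals()
Xe = encode(X)   # feed Xe (instead of raw X) to any standard back-propagation MLP
```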
20.
We propose a fully three-dimensional (3-D) wavelet-based coding system featuring 3-D encoding/two-dimensional (2-D) decoding functionalities. A fully 3-D transform is combined with context-adaptive arithmetic coding; 2-D decoding is enabled by encoding every 2-D subband image independently. The system allows finely graded, up-to-lossless quality scalability on any 2-D image of the dataset. Fast access to 2-D images is obtained by decoding only the corresponding information, thus avoiding the reconstruction of the entire volume. The performance has been evaluated on a set of volumetric data and compared to that provided by other 3-D as well as 2-D coding systems. Results show a substantial improvement in coding efficiency (up to 33%) on volumes featuring good correlation properties along the z axis. Even though we did not address the complexity issue, we expect a decoding time on the order of one second per image after optimization. In summary, the proposed 3-D/2-D multidimensional layered zero coding system provides the improvement in compression efficiency attainable with 3-D systems without sacrificing the effectiveness in accessing single images that is characteristic of 2-D ones.
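The structural idea, a 3-D transform whose 2-D subband images are coded independently so that a single image can be decoded alone, can be sketched with a one-level Haar transform standing in for the paper's wavelet filters and its context-adaptive arithmetic coder:

```python
import numpy as np

def haar1d(a, axis):
    """One-level Haar analysis along `axis` (length assumed even)."""
    a = np.moveaxis(a, axis, 0)
    lo = (a[0::2] + a[1::2]) / np.sqrt(2)
    hi = (a[0::2] - a[1::2]) / np.sqrt(2)
    return np.moveaxis(lo, 0, axis), np.moveaxis(hi, 0, axis)

def transform_3d(volume):
    """Decompose along z first, then transform every resulting slice in 2-D.
    Each 2-D subband image is kept as a separate unit, which is what would be
    entropy-coded independently to allow 2-D decoding without reconstructing
    the whole volume (the arithmetic coder itself is not sketched here)."""
    coded_units = []
    for band in haar1d(volume, axis=0):          # z-axis decomposition
        for z in range(band.shape[0]):
            lo_r, hi_r = haar1d(band[z], axis=0)  # rows
            ll, hl = haar1d(lo_r, axis=1)         # columns
            lh, hh = haar1d(hi_r, axis=1)
            coded_units.append((z, (ll, hl, lh, hh)))   # one independently codable unit
    return coded_units

vol = np.random.rand(8, 16, 16)                  # toy volume: 8 slices of 16x16
units = transform_3d(vol)
```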