Similar Documents
20 similar documents found (search time: 990 ms)
1.
In a Linux system, each device is mapped to a special device file, and user programs can operate on this device file just as they would on any other file. Exploiting this, we can construct a device file that is not connected to any hardware, i.e., a virtual device. With such a device, a series of low-level operations can be recast as application-level operations, reducing the workload of the upper layers.
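A minimal sketch of the "device as file" idea described above, assuming a Linux system where /dev/zero and /dev/null exist as standard virtual devices with no hardware behind them:

```python
import os

# "Everything is a file": a device node is opened and read with the
# ordinary file API. /dev/zero is a virtual device not backed by any
# hardware -- every read returns zero bytes.
fd = os.open("/dev/zero", os.O_RDONLY)
data = os.read(fd, 8)
os.close(fd)

# Writes to a virtual device are equally ordinary file writes;
# /dev/null simply discards them.
fd = os.open("/dev/null", os.O_WRONLY)
written = os.write(fd, b"discarded")
os.close(fd)
```

A driver implementing such a virtual device registers file operations (open, read, write) in the kernel, but from user space the interaction is exactly the plain file I/O shown here.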

2.
Through accelerated life testing in hydrogen, we have found, for the first time, that in addition to Pt metal, Ti metal in a Ti/Pt/Au-gate PHEMT can also induce a significant hydrogen effect by reacting with a small amount of hydrogen gas in the ambient. The hydrogen sensitivity of a PHEMT device caused by the Ti gate metal is significantly less than that due to Pt. Since Ti is not a hydrogen catalyst, the resulting hydrogen sensitivity indicates that a catalytic reaction between the gate metal and hydrogen gas is not required for a detrimental hydrogen effect. The data also show that the degradation evident in the PHEMT devices due to the Ti-H2 interaction is similar to that from the Pt-H2 interaction. It is clear from this work that attempting to solve the hydrogen degradation problem by eliminating the Pt gate metal in a PHEMT is ineffective.

3.
The channelized receiver, which is optimal for the detection of unknown noncoherent frequency-hopped waveforms, bases its decisions on a fixed-length block of input data. A sequential method of interception is presented in which, whenever a new data element is collected, a decision is made as to the presence or absence of a frequency-hopped waveform; if that decision is indeterminate, another data element is collected. An optimal sequential test is derived under the assumption that the waveform signal-to-noise ratio (SNR) is known. It is shown that this sequential test requires less data, on average, than the fixed-length method to make a decision with the same reliability. A truncated sequential test is also derived, in which a decision is forced, if still indeterminate, after some fixed amount of data has been collected. The truncated test is shown to improve the number of samples needed for a decision when the input SNR differs greatly from that assumed in the derivation of the test. Furthermore, it is shown that the truncated test yields a limited degree of robustness when the input SNR differs from that assumed.
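The sequential idea can be sketched with a Wald-style sequential probability ratio test for a Gaussian shift in mean. This is a simplified stand-in for the noncoherent detection statistic in the abstract; the thresholds, signal model, and forced-decision rule below are illustrative assumptions, not the paper's derivation:

```python
import math
import random

def sequential_test(samples, mu1, sigma, alpha=0.01, beta=0.01, max_n=None):
    """Sequential test of H0: N(0, sigma^2) vs H1: N(mu1, sigma^2).
    One sample is collected at a time; if the test is still
    indeterminate after max_n samples, a decision is forced
    (the 'truncated' variant) by the sign of the log-likelihood ratio."""
    lower = math.log(beta / (1 - alpha))   # cross -> declare noise only
    upper = math.log((1 - beta) / alpha)   # cross -> declare signal present
    llr, n = 0.0, 0
    for x in samples:
        n += 1
        llr += (mu1 * x - mu1 ** 2 / 2) / sigma ** 2   # Gaussian LLR increment
        if llr >= upper:
            return "signal", n
        if llr <= lower:
            return "noise", n
        if max_n is not None and n >= max_n:           # truncation point
            return ("signal" if llr > 0 else "noise"), n
    return "indeterminate", n

rng = random.Random(0)
noise = [rng.gauss(0.0, 1.0) for _ in range(1000)]
decision, used = sequential_test(noise, mu1=1.0, sigma=1.0, max_n=200)
```

The average number of samples `used` is typically far below the truncation point, which is the advantage over a fixed-length block test of the same reliability.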

4.
Maximizing sets and fuzzy Markoff algorithms
A fuzzy algorithm is an ordered set of fuzzy instructions that upon execution yield an approximate solution to a given problem. Two unrelated aspects of fuzzy algorithms are considered in this paper. The first is concerned with the problem of maximization of a reward function. It is argued that the conventional notion of a maximizing value for a function is not sufficiently informative and that a more useful notion is that of a maximizing set. Essentially, a maximizing set serves to provide information not only concerning the point or points at which a function is maximized, but also about the extent to which the values of the reward function approximate to its supremum at other points in its range. The second is concerned with the formalization of the notion of a fuzzy algorithm. In this connection, the notion of a fuzzy Markoff algorithm is introduced and illustrated by an example. It is shown that the generation of strings by a fuzzy algorithm bears a resemblance to a birth-and-death process and that the execution of the algorithm terminates when no more “live” strings are left.
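The maximizing-set notion admits a direct sketch: membership grades how closely the reward function approaches its supremum at each point, not just where the maximum sits. The linear normalization to [0, 1] below is an assumed form consistent with the description above, not necessarily the paper's exact definition:

```python
def maximizing_set(f, xs):
    """Membership function of the maximizing set of reward f over grid xs:
    1.0 at the maximizer, 0.0 at the minimizer, graded in between."""
    vals = [f(x) for x in xs]
    lo, hi = min(vals), max(vals)
    if hi == lo:                      # constant reward: everything maximizes
        return {x: 1.0 for x in xs}
    return {x: (v - lo) / (hi - lo) for x, v in zip(xs, vals)}

# Reward function with a single peak at x = 2 on a grid over [0, 4].
grid = [i / 10 for i in range(0, 41)]
mu = maximizing_set(lambda x: -(x - 2) ** 2, grid)
```

Unlike the single maximizing value x = 2, the set `mu` also reports how nearly optimal every other grid point is.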

5.
This paper describes the use of the wavelet transform for multiscale texture analysis. A basic problem is that texture measures have to adapt to the peculiarity of radar images, which contain multiplicative speckle noise. In this paper, the focus is on the effect of speckle on the wavelet transform. The effect is first assessed analytically. It is shown that the wavelet coefficients are modulated by the multiplicative character of the speckle in a manner proportional to the target mean backscattering coefficient. The effect of speckle correlation is also demonstrated. Wavelet decomposition is then applied to a simulated radar image generated by a Monte Carlo approach and based on a statistical model. Modeling shows that the correlation properties of speckle have an effect up to a scale that corresponds to its granular size. The results also show that the main contribution to the wavelet transform for a homogeneous area is the first-order statistical distribution of speckle, which remains important even at large scales. The results are then compared to an ERS-1 synthetic aperture radar (SAR) image of a primary tropical forest region.

6.
It is known that any scalar function f(p) of the complex frequency variable p that is the admittance function of a passive finite network is in fact the admittance function of a network that can be realized without transformers. This paper shows that an m × m matrix-valued function Y(p), m ≥ 2, given that it is an admittance matrix, is the admittance of a network that contains no transformers if and only if it enjoys two further properties: 1) for each real p > 0, Y(p) is the admittance of a passive resistive network specific to p; and 2) a property defined as the null-space property. It is shown that property 1) severely limits the class of m-terminal networks, m > 1, that can be realized without transformers. The author concludes that, for passive systems, transformers are here to stay.

7.
This correspondence is a series of four letters published as a sequel to a paper by Kumamoto & Henley (K&H) that presented an "algorithm based on top-down analysis particularly designed for noncoherent fault trees." The first letter, by Locks, claims that the K&H algorithm for finding the prime implicants of a noncoherent fault tree is too long and complicated for the purpose, and that a shorter procedure is available; the example K&H used to illustrate the algorithm is reworked with known methods to show that alternatives requiring less work exist. This is followed by Reply #1 by Ogunbiyi and Reply #2 by K&H. The note concludes with a rebuttal by Locks to both replies.

8.
A new scheme of synchronous CDMA is introduced in this paper. The scheme is based on a code comprising all the cyclic translations of a basic sequence having constant amplitude and a white discrete spectrum; such a code is proposed here for the first time as a code for CDMA. According to the proposed scheme, a cyclic prefix is appended to the multiplexed signal. The proposed scheme has a property that none of the known CDMA schemes has: in a multipath environment, it allows multiuser interference to become cyclic intersymbol interference (ISI). Notably, the memory of the finite-state machine that describes the ISI model is equal to the memory of the multipath channel. The main advantage of the proposed scheme is that optimal and suboptimal detectors can be obtained from detectors proposed in the past for the ISI channel, which are much easier to implement than the conventional multiuser detectors of classical CDMA schemes. Another advantage is that the scheme leads naturally to a signal-processing architecture similar to that of OFDM systems, based on the efficient FFT/IFFT algorithm.
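The central mechanism, that stripping a cyclic prefix at the receiver turns linear multipath convolution into cyclic convolution, which the FFT diagonalizes, can be checked numerically. Block length, prefix length, and channel below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L = 16, 4                        # block length, channel memory
x = rng.standard_normal(N)          # one multiplexed block
h = rng.standard_normal(L)          # multipath channel impulse response

# Transmit: prepend a cyclic prefix at least as long as the channel
# memory minus one (here the last L-1 samples of the block).
tx = np.concatenate([x[-(L - 1):], x])

# Channel: ordinary linear convolution; keep the N outputs that
# correspond to the block after the prefix.
rx = np.convolve(tx, h)[L - 1 : L - 1 + N]

# With the prefix stripped, the channel acts as a CIRCULAR convolution
# of x with h, so it becomes a pointwise product in the DFT domain --
# the basis of the FFT/IFFT (OFDM-like) receiver architecture.
circ = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, N)))
```

The two signals `rx` and `circ` agree to machine precision, which is exactly the property that lets ISI-channel detectors replace classical multiuser detectors here.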

9.
The capacity of multiple-antenna systems operating in Rayleigh flat fading is considered under the assumptions that channel state information (CSI) is available at both transmitter and receiver, and that the transmitter is subjected to an average power constraint. First, the capacity of such systems is derived for the special case of multiple transmit antennas and a single receive antenna. The optimal power-allocation scheme for such a system is shown to be a water-filling algorithm, and the corresponding capacity is seen to be the same as that of a system having multiple receive antennas (with a single transmitter antenna) whose outputs are combined via maximal ratio combining. A suboptimal adaptive transmission technique that transmits only over the antenna having the best channel is also proposed for this special case. It is shown that the capacity of such a system under the proposed suboptimal adaptive transmission scheme is the same as the capacity of a system having multiple receiver antennas (with a single transmitter antenna) combined via selection combining. Next, the capacity of a general system of multiple transmitter and receiver antennas is derived together with an equation that determines the cutoff value for such a system. The optimal power allocation scheme for such a multiple-antenna system is given by a matrix water-filling algorithm. In order to eliminate the need for cumbersome numerical techniques in solving the cutoff equation, approximate expressions for the cutoff transmission value are also provided. It is shown that, compared to the case in which there is only receiver CSI, large capacity gains are available with optimal power and rate adaptation schemes. The increased capacity is shown to come at the price of channel outage, and bounds are derived for this outage probability.
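The scalar water-filling allocation referred to above can be sketched as follows. The gains and power budget are made-up numbers, and the water-level search is the textbook form for parallel channels, not the paper's matrix version:

```python
import numpy as np

def waterfill(gains, total_power):
    """Water-filling over parallel channels with power gains g_i:
    p_i = max(0, mu - 1/g_i), with the water level mu chosen so
    that the powers sum to the budget. Weak channels (1/g_i above
    the water level) are switched off entirely."""
    g = np.asarray(gains, dtype=float)
    inv = np.sort(1.0 / g)                    # noise-to-gain levels, ascending
    for k in range(len(inv), 0, -1):          # try filling the best k channels
        mu = (total_power + inv[:k].sum()) / k
        if mu > inv[k - 1]:                   # all k channels get positive power
            break
    return np.maximum(0.0, mu - 1.0 / g)

# Three channels: strong, medium, very weak.
p = waterfill([2.0, 1.0, 0.1], total_power=1.0)
```

With these numbers the weakest channel (gain 0.1) falls below the water level and receives zero power, while the budget is split between the other two.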

10.
A fan-beam tomographic reconstruction algorithm is developed for source points distributed along a straight line. It is shown that, in theory, a perfect reconstruction is possible from an infinitely long straight line. Using computer simulations, it is verified that with a finite segment of a straight line it is possible to reconstruct images of quality comparable to those obtained when the source points are distributed along a circle. It is shown that the two parameters that most affect image quality are the length of the source-point line and the distance from the object to the source-point line. In addition, a post-reconstruction technique is developed that substantially improves the quality of images reconstructed from the straight line.

11.
It is pointed out that if the classical method of weak solutions is to be used for the solution of the problem, then it is necessary to include a resistive element of sufficient magnitude; this is also a feature of Landauer's work. The solution so obtained is accurate under well-defined conditions, and, among other things, it can be shown that the energy losses associated with the shock front can be accounted for by that resistance. However, one cannot simply assume that the energy balance continues to hold as the value of the resistive element is reduced to zero; this requires a separate proof. An exact analysis based on a series of experimental results and computer modeling shows that the classical discrepancy can be accounted for in a different way.

12.
A numerical evaluation of the integrals that constitute the formal solution of the problem of a vertical dipole, situated in a homogeneous, warm plasma half-space above a perfectly conducting plane, is considered. An asymptotic series expansion is obtained for these integrals by the double saddle-point method of integration. The dominant terms of the solution in the far field are shown to consist of a surface wave, which arises from the residue of a pole, and a space wave, which is the leading term of the saddle-point contribution. The space wave is identified as the geometrical-ray approximation to the solution. It is demonstrated that the surface wave can propagate when the source frequency is either above or below the plasma frequency. The transfer of power from an incident acoustical (p) mode to a boundary-generated electromagnetic (e) mode, and from an incident e mode to a boundary-generated p mode, is investigated at a plasma-conductor interface. In both situations, only a small percentage of the incident power is transformed into the boundary-generated mode. In the case of a vertical dipole, however, it is shown that, at source frequencies near the plasma frequency, the power in the incident p mode is much larger than that in the e mode. Thus, the boundary-generated e mode, which is due to the incident p mode, is as large as the reflected e mode due to an incident e mode. As a result of this effect, it is pointed out that one can represent the reflected e mode by two image sources.

13.
The sheer size of Atmospheric Infrared Sounder images, a type of ultraspectral cube comprising over two thousand spectral bands, makes their compression critically important. A traditional approach combines reversible preprocessing, in which image redundancy is better exposed, with a prediction stage that performs compression at the cost of introducing some controlled distortion. In this paper we focus on the effect of using a prediction stage that integrates both linear prediction (LP) and a search procedure, as a way to obtain better quality. Since this additional search stage does not affect the compression rate, its only drawback is computational, making algorithm optimization a key factor. In addition, we introduce a mechanism to dynamically select the LP filter order such that, when combined with two-stage prediction, the overall rate-distortion performance is greatly improved.
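A minimal sketch of why the LP filter order matters for a prediction stage: on a synthetic second-order autoregressive band, an order-2 predictor leaves a smaller residual for the subsequent coder than an order-1 predictor. The least-squares fit and the AR(2) signal are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def lp_residual_energy(x, order):
    """Fit linear-prediction coefficients of the given order by least
    squares (predict x[t] from its `order` previous samples) and
    return the energy of the prediction residual."""
    n = len(x)
    A = np.column_stack([x[order - k - 1 : n - k - 1] for k in range(order)])
    y = x[order:]
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ coeffs) ** 2))

# Synthetic AR(2) band: a well-matched filter order shrinks the
# residual that the entropy coder must then encode.
rng = np.random.default_rng(0)
x = np.zeros(500)
for t in range(2, 500):
    x[t] = 0.9 * x[t - 1] - 0.5 * x[t - 2] + rng.standard_normal()

e1 = lp_residual_energy(x, 1)
e2 = lp_residual_energy(x, 2)
```

A dynamic order-selection mechanism would compare such residual (or rate-distortion) figures across candidate orders per band and pick the best, at the computational cost the abstract mentions.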

14.
Detector jitter, the random delay from the time a photon is incident on a single-photon-counting detector (SPD) to the time an electrical pulse is produced in response to that photon, is characterized for a number of SPDs. The jitter is modeled as a weighted sum of Gaussians. The performance impact of detector jitter is measured by determining the capacity of a communications channel utilizing a given detector. It is observed that the loss, measured as the ratio of the signal power required to achieve a specified capacity in the presence of jitter to that in the absence of jitter, goes as the square of the normalized jitter standard deviation (the standard deviation of the jitter in slot widths). The loss is small when the normalized jitter is less than one, and becomes significant beyond that point. This loss must be taken into account when evaluating detectors for very-high-throughput channels.
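The jitter model can be sketched directly: draw delays from a weighted sum of Gaussians and form the normalized jitter standard deviation whose square governs the loss. The component weights, means, spreads, and slot width below are hypothetical numbers, and the square-law line is only the scaling the abstract reports, not the paper's capacity computation:

```python
import random
import statistics

# Hypothetical two-component Gaussian-mixture jitter model (delays in ps).
weights = [0.7, 0.3]
means   = [50.0, 80.0]    # component centers
sigmas  = [10.0, 25.0]    # component spreads

rng = random.Random(42)

def sample_jitter():
    """Draw one detection delay from the weighted sum of Gaussians."""
    i = rng.choices(range(len(weights)), weights=weights, k=1)[0]
    return rng.gauss(means[i], sigmas[i])

samples = [sample_jitter() for _ in range(50_000)]

slot = 100.0                                   # slot width, same units (ps)
norm_std = statistics.pstdev(samples) / slot   # jitter std in slot widths

# Reported scaling: the power penalty grows as the square of the
# normalized jitter standard deviation (small while norm_std < 1).
loss_factor = norm_std ** 2
```

With these numbers the normalized jitter is about 0.21 slot widths, so the square-law penalty is small; shrinking the slot (higher throughput) drives `norm_std` toward one and the loss up quadratically.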

15.
It is pointed out that there has long been unease about using a periodic function, such as a sinusoid, as an input to determine the response of a network or of a structure excited by an electromagnetic field. It has been argued that periodic functions have infinite energy and violate causality because there is no way in which one cycle of a periodic function differs from any other. An argument is presented showing that the traditional method of using sinusoids or other periodic functions to design equipment is asymptotically correct.

16.
This paper investigates the problem of assessing the quality of video transmitted over IP networks. Our goal is to develop a methodology that is both reasonably accurate and simple enough to support the large-scale deployments that the increasing use of video over IP is likely to demand. For that purpose, we focus on developing an approach that is capable of mapping network statistics, e.g., packet losses, available from simple measurements, to the quality of video sequences reconstructed by receivers. A first step in that direction is a loss-distortion model that accounts for the impact of network losses on video quality, as a function of application-specific parameters such as video codec, loss recovery technique, coded bit rate, packetization, video characteristics, etc. The model, although accurate, is poorly suited to large-scale, on-line monitoring because of its dependency on parameters that are difficult to estimate in real time. As a result, we introduce a “relative quality” metric (rPSNR) that bypasses this problem by measuring video quality against a quality benchmark that the network is expected to provide. The approach offers a lightweight video quality monitoring solution that is suitable for large-scale deployments. We assess its feasibility and accuracy through extensive simulations and experiments.

17.
Blind super-resolution is an interesting area of image processing in which a high-resolution (HR) image can be restored without prior information about the point spread function (PSF). In this paper, a novel framework is proposed for the blind single-image super-resolution (SISR) problem based on compressive sensing (CS); it is one of the first works to consider general PSFs. The fundamental idea in the proposed approach is to use sparsity on a known sparse transform domain as a powerful regularizer in both the image and blur domains. Therefore, a new cost function with respect to the unknown HR image patch and PSF kernel is presented, and minimization is performed via two subproblems that are modeled similarly to CS. Simulation results demonstrate the effectiveness of the proposed algorithm, which is competitive with methods that use multiple LR images to achieve a single HR image.

18.
The performance of a lattice-based fast vector quantization (VQ) method, which yields rate-distortion performance close to that of an optimal VQ, is analyzed. The method, a special case of fine-coarse vector quantization (FCVQ), uses the cascade of a fine lattice quantizer and a coarse optimal VQ to encode a given source vector. The second stage is implemented as a lookup table, which must be stored at the encoder. The arithmetic complexity of this method is essentially that of lattice VQ. Its distortion can be made arbitrarily close to that of an optimal VQ, provided sufficient storage for the table is available. It is shown that the distortion of lattice-based FCVQ exceeds that of full-search quantization by an amount that decreases as the square of the diameter of the lattice cells, and exact formulas are provided for the asymptotic constant of proportionality in terms of the properties of the lattice, coarse codebook, and source density. It is shown that the excess distortion asymptotically equals that of the fine quantizer. Simulation results indicate how small the lattice cells must be for the asymptotic formulas to apply.
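A toy sketch of the fine-coarse cascade: a scaled integer lattice supplies the cheap fine quantizer, and a precomputed table maps each lattice cell to its nearest coarse codeword, so encoding costs one rounding plus one table lookup instead of a full codebook search. The codebook, cell size, and covered region below are made-up:

```python
import numpy as np

# Coarse stage: the "optimal" codebook -- here a tiny hand-made one.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0], [0.0, -1.5]])

def fine_quantize(v, cell):
    """Fine stage: scaled integer (Z^n) lattice quantizer -- just rounding."""
    return tuple(np.round(np.asarray(v) / cell).astype(int))

def build_table(lattice_points, cell):
    """Precompute the lattice-cell -> nearest-codeword lookup table
    (this table is what must be stored at the encoder)."""
    table = {}
    for lp in lattice_points:
        center = np.array(lp) * cell
        d = np.sum((codebook - center) ** 2, axis=1)
        table[lp] = int(np.argmin(d))
    return table

cell = 0.25
# Enumerate the lattice cells covering [-2, 2]^2 for the table.
pts = [(i, j) for i in range(-8, 9) for j in range(-8, 9)]
table = build_table(pts, cell)

def fcvq_encode(v):
    """Encode: one lattice rounding plus one lookup -- no full search."""
    return table[fine_quantize(v, cell)]

idx = fcvq_encode([0.9, 1.1])
```

Shrinking `cell` grows the table but drives the excess distortion over full search down quadratically, which is the storage-for-distortion trade-off the analysis quantifies.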

19.
20.
The least-mean-squares (LMS) algorithm is analysed as a feedback control system. It is shown that, although LMS is a time-variant system, it behaves much like a linear time-invariant (LTI) closed-loop control system. It is therefore possible to treat the LMS algorithm as a control system in the classical sense and define properties such as bandwidth to determine the maximum possible speed of response (and hence convergence). Similarly, the steady-state error response to a deterministic noise-free input can also be calculated. Moreover, it is then shown that classical control-based loop-shaping techniques can be used to improve the performance of the algorithm.
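A standard LMS sketch makes the feedback view concrete: the error e is the feedback signal and the step size mu plays the role of loop gain, setting the closed-loop bandwidth and hence the convergence speed. The plant and signals below are synthetic, and this is the textbook algorithm rather than the paper's loop-shaped design:

```python
import numpy as np

def lms(x, d, taps, mu):
    """LMS adaptive filter identifying d[n] ~ w @ [x[n], ..., x[n-taps+1]].
    The error e is fed back to drive the gradient-descent weight update."""
    w = np.zeros(taps)
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1 : n + 1][::-1]  # most recent sample first
        e = d[n] - w @ u                   # instantaneous error (feedback signal)
        w = w + mu * e * u                 # update; mu acts like loop gain
    return w

rng = np.random.default_rng(0)
h = np.array([0.5, -0.25, 0.1])            # unknown plant to identify
x = rng.standard_normal(5000)              # white excitation
d = np.convolve(x, h)[: len(x)]            # noise-free desired signal
w = lms(x, d, taps=3, mu=0.01)
```

With white input, the weight-error dynamics are approximately LTI with time constant on the order of 1/mu iterations, which is exactly the kind of closed-loop bandwidth statement the control-system analysis formalizes.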
