7 results (search time: 15 ms)
1.
A new experimental model of a vascular carrier to prefabricate a "secondary" island flap, the popliteal musculovascular pedicle, was developed in the rat. Using quantitative skin-surface fluorometry 30 minutes after sodium fluorescein injection and a flap survival area in the prefabricated 8 x 2.5-cm abdominal composite island flap, we compared the revascularization ability of our muscular carrier to nonrevascularized controls: the skeletonized arteriovenous pedicle and the fasciovascular pedicle. The free composite graft with no vascular carrier exhibited near-total necrosis. The skeletonized vascular pedicle demonstrated 15.2% +/- 7.8% perfusion of normal skin on dye fluorescence index measurements and 50% flap survival. The fasciovascular pedicle exhibited better revascularization, with a dye fluorescence index of 36.2 +/- 15.5 (p < 0.01) and 90% +/- 10% flap survival (p < 0.001). India ink injection study and histological examination of our model provided visual evidence of revascularization from the musculovascular pedicle, along with preservation of the carrier's muscular architecture. The musculovascular pedicle is a reliable carrier for making new, vascularized composite flaps.
2.
Minimum mean squared error equalization using a priori information
A number of important advances have been made in the area of joint equalization and decoding of data transmitted over intersymbol interference (ISI) channels. Turbo equalization is an iterative approach to this problem, in which a maximum a posteriori probability (MAP) equalizer and a MAP decoder exchange soft information in the form of prior probabilities over the transmitted symbols. A number of reduced-complexity methods for turbo equalization have been introduced in which MAP equalization is replaced with suboptimal, low-complexity approaches. We explore a number of low-complexity soft-input/soft-output (SISO) equalization algorithms based on the minimum mean square error (MMSE) criterion. This includes the extension of existing approaches to general signal constellations and the derivation of a novel approach requiring less complexity than the MMSE-optimal solution. All approaches are qualitatively analyzed by observing the mean-square error averaged over a sequence of equalized data. We show that for the turbo equalization application, the MMSE-based SISO equalizers perform well compared with a MAP equalizer while providing a tremendous complexity reduction.
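A minimal numpy sketch of the linear MMSE equalization step the abstract builds on, under the model y = Hx + n with unit-power BPSK symbols. The channel taps, block length, and noise level below are made up for illustration; this is the plain batch MMSE estimator, not the authors' reduced-complexity SISO algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-tap ISI channel and BPSK symbols (illustrative values).
h = np.array([0.8, 0.5, 0.3])
N = 200
x = rng.choice([-1.0, 1.0], size=N)
sigma2 = 0.01  # noise variance

# Received samples: channel convolution plus white Gaussian noise.
y = np.convolve(x, h)[:N] + np.sqrt(sigma2) * rng.standard_normal(N)

# Build the N x N Toeplitz convolution matrix H so that y = H @ x + n.
H = np.zeros((N, N))
for i in range(N):
    for j, tap in enumerate(h):
        if i - j >= 0:
            H[i, i - j] = tap

# Linear MMSE estimate for unit-power symbols:
#   x_hat = H^T (H H^T + sigma2 I)^{-1} y
x_hat = H.T @ np.linalg.solve(H @ H.T + sigma2 * np.eye(N), y)

# Hard decisions and bit error rate against the transmitted symbols.
ber = np.mean(np.sign(x_hat) != x)
```

In a SISO equalizer the same filter is combined with symbol priors fed back from the decoder; the batch form above is only the zero-prior special case.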
3.
We investigate turbo equalization, or iterative equalization and decoding, as a receiver technology for systems where data is protected by an error-correcting code, shuffled by an interleaver, and mapped onto a signal constellation for transmission over a frequency-selective channel with unknown time-varying channel impulse response. The focus is the concept of soft iterative channel estimation, which is to improve the channel estimate over the iterations by using soft information fed back from the decoder from the previous iteration to generate "extended training sequences" between the actual transmitted training sequences.
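A toy sketch of the "extended training" idea: once the decoder has produced symbol estimates, the receiver can re-run least-squares channel estimation treating those symbols as known training. The 2-tap channel and the perfect-feedback assumption below are illustrative simplifications; a real receiver would use soft symbol means computed from decoder LLRs.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-tap channel; values are illustrative only.
h_true = np.array([0.9, 0.4])
N = 400
x = rng.choice([-1.0, 1.0], size=N)
sigma2 = 0.01
y = np.convolve(x, h_true)[:N] + np.sqrt(sigma2) * rng.standard_normal(N)

# "Extended training": treat decoder feedback as known symbols.  Here we
# assume perfect feedback (x itself); in practice these would be soft
# symbol estimates E[x_k] derived from the decoder's output LLRs.
x_fb = x

# Least-squares channel re-estimation over the extended training block:
# stack shifted symbol vectors so that X @ h reproduces the convolution.
L = len(h_true)
X = np.zeros((N, L))
for j in range(L):
    X[j:, j] = x_fb[:N - j]
h_est, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Iterating this estimate with re-equalization is what the abstract means by improving the channel estimate over the turbo iterations.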
4.
Turbo equalization
Turbo equalization is an iterative equalization and decoding technique that can achieve impressive performance gains for communication systems that send digital data over channels that require equalization, i.e., those that suffer from intersymbol interference (ISI). In this article, we discuss the turbo equalization approach to coded data transmission over ISI channels, with emphasis on the basic ideas and some of the practical details. The original system introduced by Douillard et al. can be viewed as an extension of the turbo decoding algorithm by considering the effect of the ISI channel as another form of error protection, i.e., as a rate-1 convolutional code.
5.
Turbo equalization: principles and new results
We study the turbo equalization approach to coded data transmission over channels with intersymbol interference. In the original system invented by Douillard et al. (1995), the data are protected by a convolutional code and the receiver consists of two trellis-based detectors, one for the channel (the equalizer) and one for the code (the decoder). It has been shown that iterating equalization and decoding tasks can yield tremendous improvements in bit error rate. We introduce new approaches to combining equalization based on linear filtering with decoding. Through simulation and analytical results, we show that the performance of the new approaches is similar to the trellis-based receiver, while providing large savings in computational complexity. Moreover, this paper provides an overview of the design alternatives for turbo equalization with given system parameters, such as the channel response or the signal-to-noise ratio.
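The complexity savings claimed above come from replacing a trellis whose state count is exponential in the channel memory with a fixed-length linear filter. A back-of-the-envelope sketch, with a hypothetical QPSK system and made-up filter length:

```python
# Per-symbol work: trellis-based MAP equalizer vs. linear-filter equalizer.
M = 4                    # constellation size (e.g. QPSK)
L = 5                    # channel memory (number of interfering past symbols)

states = M ** L          # trellis states the MAP equalizer must track
map_ops = states * M     # branch metrics per trellis section

filter_len = 15          # taps of a linear equalization filter (assumed)
mmse_ops = filter_len    # multiply-accumulates per symbol for a fixed filter

ratio = map_ops // mmse_ops
```

The exponential M ** L term is why trellis-based equalization becomes impractical for large constellations or long channels, which is the regime the linear-filtering approaches target.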
6.
Based on extrinsic information transfer (EXIT) charts, the convergence behavior of iterative decoding is studied for a number of serially concatenated systems, such as a serially concatenated code, coded data transmission over an intersymbol interference channel, bit-interleaved coded modulation, or trellis-coded modulation. Efficient optimization algorithms based on simplified EXIT chart construction are devised to find irregular codes improving the convergence of iterative decoding. One optimization criterion is to find concatenated systems exhibiting thresholds of successful decoding convergence, which are close to information-theoretic limits. However, these thresholds are approached only for very long block lengths. To overcome this problem, the decoding convergence after a fixed, finite number of iterations is optimized, which yields systems performing very well for short block lengths, too. As an example, optimal system configurations for communication over an additive white Gaussian noise channel are presented.
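EXIT chart axes are mutual-information values, commonly computed under a consistent-Gaussian model for the LLRs. A Monte-Carlo sketch of that mapping (sigma parameterizes the LLR quality; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

def mi_gaussian_llr(sigma, n=200_000):
    """Monte-Carlo estimate of the mutual information between a bit
    x in {+1, -1} and an LLR modeled as N(x * sigma^2 / 2, sigma^2),
    the consistent-Gaussian assumption used in EXIT chart analysis."""
    # By symmetry it suffices to condition on x = +1.
    llr = sigma**2 / 2 + sigma * rng.standard_normal(n)
    return 1.0 - np.mean(np.log2(1.0 + np.exp(-llr)))
```

sigma = 0 gives uninformative LLRs (mutual information 0); large sigma approaches 1 bit. Tracing how one constituent decoder maps input to output mutual information, and intersecting the two curves, is what the convergence analysis above automates.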
7.
We consider the problem of jointly decoding the correlated data picked up and transmitted by the nodes of a large-scale sensor network. Assuming that each sensor node uses a very simple encoder (a scalar quantizer and a modulator), we focus on decoding algorithms that exploit the correlation structure of the sensor data to produce the best possible estimates under the minimum mean-square error (MMSE) criterion. Our analysis shows that a standard implementation of the optimal MMSE decoder is unfeasible for large-scale sensor networks, because its complexity grows exponentially with the number of nodes in the network. Seeking a scalable alternative, we use factor graphs to obtain a simplified model for the correlation structure of the sensor data. This model allows us to use the sum-product decoding algorithm, whose complexity can be made to grow linearly with the size of the network. Considering large sensor networks with arbitrary topologies, we focus on factor trees and give an exact characterization of the decoding complexity, as well as mathematical tools for factorizing Gaussian sources and optimization algorithms for finding optimal factor trees under the Kullback-Leibler criterion.
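For jointly Gaussian sources observed in additive noise, the joint MMSE decoder the abstract starts from has a closed form. A sketch with a made-up AR(1)-style correlation model (the factor-tree machinery in the paper is precisely a way to avoid the dense linear solve below as the network grows):

```python
import numpy as np

rng = np.random.default_rng(3)

n = 500      # sensor nodes (illustrative)
rho = 0.8    # hypothetical inter-sensor correlation decay
sigma2 = 0.1 # channel noise variance

# AR(1)-style covariance: Sigma_x[i, j] = rho^|i - j|.
idx = np.arange(n)
Sigma_x = rho ** np.abs(np.subtract.outer(idx, idx))

# Sample correlated sources and noisy received values y = x + noise.
x = np.linalg.cholesky(Sigma_x) @ rng.standard_normal(n)
y = x + np.sqrt(sigma2) * rng.standard_normal(n)

# Joint linear-MMSE estimate: x_hat = Sigma_x (Sigma_x + sigma2 I)^{-1} y.
x_hat = Sigma_x @ np.linalg.solve(Sigma_x + sigma2 * np.eye(n), y)

# Exploiting correlation should beat using each noisy reading alone.
err_raw = np.mean((y - x) ** 2)
err_mmse = np.mean((x_hat - x) ** 2)
```

The dense solve costs O(n^3); running sum-product on a tree-structured factorization of Sigma_x brings the per-node cost down to a constant, which is the linear scaling the abstract refers to.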