1.
The motion of a walking person is analyzed by examining cycles in the movement. Cycles are detected by applying autocorrelation and Fourier transform techniques to the smoothed spatio-temporal curvature function of trajectories traced by specific points on the object as it performs cyclic motion. A large impulse in the Fourier magnitude plot indicates the frequency at which cycles occur. Both synthetically generated and real walking sequences are analyzed for cyclic motion. The real sequences are then used in a motion-based recognition application in which one complete cycle is stored as a model, and a matching process is performed using one cycle of an input trajectory.
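A minimal sketch of the frequency-detection step described above: assuming the smoothed curvature function is available as a 1-D array, the cycle frequency can be read off the largest peak in the Fourier magnitude spectrum. The function name, sampling rate, and synthetic test signal are illustrative, not taken from the paper.

```python
import numpy as np

def dominant_cycle_frequency(signal, sample_rate=1.0):
    """Estimate the dominant cycle frequency of a 1-D signal
    (e.g. a smoothed spatio-temporal curvature function) from the
    largest peak in its Fourier magnitude spectrum."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                      # remove DC so the peak reflects the cycle
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / sample_rate)
    peak = np.argmax(mag[1:]) + 1         # skip the zero-frequency bin
    return freqs[peak]

# Synthetic "walking" curvature: a 2 Hz cycle sampled at 30 Hz for 5 s
rng = np.random.default_rng(0)
t = np.arange(0, 5, 1 / 30)
curvature = np.sin(2 * np.pi * 2.0 * t) + 0.1 * rng.standard_normal(t.size)
f = dominant_cycle_frequency(curvature, sample_rate=30)
```

With the frequency in hand, one period of the trajectory can be cut out and stored as the model cycle for matching.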
2.
Pattern classification is an important image processing task. A typical pattern classification algorithm can be broken into two parts: first, the pattern features are extracted; second, these features are compared with a stored set of reference features until a match is found. In the second part, one of several clustering algorithms or similarity measures is usually applied. In this paper, a new application of linear associative memory (LAM) to pattern classification problems is introduced, in which the clustering algorithms or similarity measures are replaced by a LAM matrix multiplication. With a LAM, the reference features need not be stored separately. Since the second part of most classification algorithms is similar, a LAM standardizes the many clustering algorithms and also allows for a standard digital hardware implementation. Computer simulations on regular textures using a feature extraction algorithm achieved a high percentage of successful classifications. In addition, this classification is independent of topological transformations.
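A minimal sketch of the LAM idea: the reference features are folded into a single matrix of outer products, and classification reduces to one matrix multiplication followed by an argmax. The feature vectors, class count, and normalization choice below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def build_lam(features, labels, n_classes):
    """Build a linear associative memory: the sum of outer products of
    one-hot class vectors with normalized reference feature vectors.
    The references are absorbed into M and need not be stored separately."""
    M = np.zeros((n_classes, features.shape[1]))
    for x, c in zip(features, labels):
        x = x / np.linalg.norm(x)
        y = np.zeros(n_classes)
        y[c] = 1.0
        M += np.outer(y, x)
    return M

def classify(M, x):
    """One matrix multiplication replaces the clustering/similarity step."""
    return int(np.argmax(M @ (x / np.linalg.norm(x))))

# Three near-orthogonal reference feature vectors, one per texture class
refs = np.eye(3) + 0.05
M = build_lam(refs, [0, 1, 2], n_classes=3)
pred = classify(M, np.array([1.0, 0.1, 0.0]))   # noisy version of class 0
```

Because M is fixed after training, the recall step maps directly onto a standard matrix-vector multiply in digital hardware, which is the standardization argument made above.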
3.
Ultrasound imaging of the common carotid artery (CCA) is a non-invasive tool used in medicine to assess the severity of atherosclerosis and monitor its progression over time. It is also used for border detection and texture characterization of the atherosclerotic carotid plaque in the CCA, and for the identification and measurement of the intima-media thickness (IMT) and the lumen diameter, all of which are important in the assessment of cardiovascular disease (CVD). Visual perception, however, is hindered by speckle, a multiplicative noise that degrades the quality of B-mode ultrasound imaging. Noise reduction is therefore essential, both for improving visual observation quality and as a pre-processing step for further automated analysis, such as segmentation of the IMT and the atherosclerotic carotid plaque in ultrasound images. To facilitate this pre-processing step, we have developed in MATLAB® a unified toolbox that integrates image despeckle filtering (IDF), texture analysis, and image quality evaluation techniques to automate the pre-processing and complement the disease evaluation in ultrasound CCA images. The proposed software is based on a graphical user interface (GUI) and incorporates image intensity normalization, 10 despeckle filtering techniques (DsFlsmv, DsFwiener, DsFlsminsc, DsFkuwahara, DsFgf, DsFmedian, DsFhmedian, DsFad, DsFnldif, DsFsrad), 65 texture features, 15 quantitative image quality metrics, and objective image quality evaluation. The software is publicly available in executable form at http://www.cs.ucy.ac.cy/medinfo/. It was validated on 100 ultrasound images of the CCA by comparing its results with quantitative visual analysis performed by a medical expert. The despeckle filters DsFlsmv and DsFhmedian were found to improve image quality perception (based on the expert's assessment and the image texture and quality metrics). It is anticipated that the system could help the physician in the assessment of cardiovascular image analysis.
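To illustrate the despeckling step, here is a minimal 2-D median filter, the idea behind the toolbox's DsFmedian filter: each pixel is replaced by the median of its neighbourhood, which suppresses isolated speckle spikes while preserving edges. This is a plain-NumPy sketch, not the toolbox's actual implementation, and the window size and test image are illustrative.

```python
import numpy as np

def despeckle_median(img, k=3):
    """Minimal k x k median despeckle filter (the principle behind
    DsFmedian): robust to the heavy-tailed spikes typical of speckle."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")  # replicate borders
    out = np.empty_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A flat region with a single speckle spike: the spike is removed
img = np.full((5, 5), 10.0)
img[2, 2] = 200.0
clean = despeckle_median(img)
```

In practice such a filter would be one stage of the pipeline described above, applied after intensity normalization and before texture analysis or segmentation.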
4.
An improved method for estimating the frame/symbol timing offset in preamble-aided OFDM systems is presented. It uses a conventional preamble structure and combines autocorrelation techniques with restricted cross-correlation to achieve near-ideal timing performance without a significant increase in complexity. Computer simulations show that the method is robust in both AWGN and fading multipath channels, achieving better performance than existing methods.
5.
Recent intersatellite radiometric comparisons of the Tropical Rainfall Measuring Mission Microwave Imager (TMI) with polar-orbiting satellite radiometer data and modeled clear-sky radiances have uncovered a time-variable radiometric bias in the TMI brightness temperatures. The bias is consistent with a source that generally cools during orbit night and warms during sunlight exposure. The likely primary source has been identified as a slightly emissive parabolic antenna reflector. This paper presents an empirical brightness temperature correction for TMI based on the position around each orbit and the Sun's elevation above the orbit plane. Results of radiometric intercomparisons with WindSat and the Special Sensor Microwave Imager (SSM/I), based on four years of data, demonstrate the effectiveness of the recommended correction approach.
6.
A method of spatial and temporal DSD interpolation is presented, using a matrix of drop size distributions (DSDs) measured by a microscale array of disdrometers. The goal of this interpolation technique is to estimate the DSD above the disdrometer array as a function of three spatial coordinates, time, and drop diameter. The interpolation algorithm assumes simplified drop dynamics based on cloud advection and the terminal velocity of raindrops. Once a 3D DSD has been calculated, useful quantities such as radar reflectivity Z and rainfall rate R can be computed and compared with corresponding rain gauge and weather radar data.
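The final step, computing Z and R from a DSD, follows from the standard moment integrals. The sketch below uses a Marshall-Palmer-style exponential DSD and the Atlas et al. (1973) terminal-velocity approximation as illustrative assumptions; the paper's interpolated 3D DSD would be substituted for the synthetic one.

```python
import numpy as np

def reflectivity_and_rainrate(N, D, dD):
    """Radar reflectivity Z (mm^6 m^-3) and rain rate R (mm h^-1) from a
    binned DSD N(D) (m^-3 mm^-1) over diameter bin centres D (mm)."""
    v = 9.65 - 10.3 * np.exp(-0.6 * D)      # terminal velocity, m/s (Atlas et al.)
    Z = np.sum(N * D**6 * dD)               # sixth moment of the DSD
    # Volume flux (pi/6) * sum N D^3 v dD, with 6e-4*pi converting to mm/h
    R = 6e-4 * np.pi * np.sum(N * D**3 * v * dD)
    return Z, 10 * np.log10(Z), R

D = np.arange(0.25, 6.0, 0.25)              # bin centres, mm
dD = 0.25
N0, Lam = 8000.0, 2.3                       # exponential (Marshall-Palmer-like) DSD
N = N0 * np.exp(-Lam * D)
Z, dBZ, R = reflectivity_and_rainrate(N, D, dD)
```

These are exactly the quantities the interpolation output would be validated against, with Z compared to weather radar and R to rain gauges.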
7.
In this paper, we present an algorithm for the automated removal of nonprecipitation-related echoes, such as atmospheric anomalous propagation (AP), in the lower elevations of meteorological-radar volume scans. The motivation for the development of this technique is the need for an objective quality control (QC) algorithm that minimizes human interaction. The algorithm uses both textural and intensity information obtained from the two lower-elevation reflectivity maps. The texture of the reflectivity maps is analyzed with the help of multifractals: four multifractal exponents are computed for each pixel of the reflectivity maps and compared to a "strict" and a "soft" threshold. Pixels with multifractal exponents larger than the strict threshold are marked as "nonrain," and pixels with exponents smaller than the soft threshold are marked as "rain." Pixels with all other exponent values are further examined using intensity information. We evaluate our QC procedure by comparison with the Tropical Rainfall Measuring Mission (TRMM) Ground Validation Project quality control algorithm developed by TRMM scientists. Comparisons are based on a number of selected cases in which nonprecipitation echoes and a variety of rain events are present, and the results show that both algorithms are effective in eliminating nonprecipitation-related echoes while retaining the rain pixels.
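The dual-threshold labelling rule above can be sketched directly. For simplicity this sketch uses a single exponent per pixel rather than the paper's four, and the threshold values are hypothetical placeholders.

```python
import numpy as np

def qc_classify(exponents, soft, strict):
    """Dual-threshold labelling from a multifractal texture exponent:
    below the soft threshold -> 'rain'; above the strict threshold ->
    'nonrain'; in between -> 'undecided', to be resolved later using
    the intensity information."""
    exps = np.asarray(exponents, dtype=float)
    labels = np.full(exps.shape, "undecided", dtype=object)
    labels[exps < soft] = "rain"
    labels[exps > strict] = "nonrain"
    return labels

# Three pixels spanning the three outcomes (thresholds are illustrative)
labels = qc_classify([0.1, 0.5, 0.9], soft=0.3, strict=0.7)
```

Only the "undecided" band triggers the second, intensity-based stage, which keeps the expensive processing confined to ambiguous pixels.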
8.
A new practical method for decoding low-density parity-check (LDPC) codes is presented. The approach reformulates the parity check equations using nonlinear functions of a specific form defined over R^ρ, where ρ denotes the check node degree. By constraining the inputs of these functions to the closed convex subset [0,1]^ρ of R^ρ (the "box" set), and by exploiting their form, a multimodal objective function that encodes the code constraints is formulated. The gradient projection algorithm is then used to search for a valid codeword lying in the vicinity of the channel observation. The computational complexity of the new decoding technique is practically sub-linear in the code's length, while processing at each variable node can be performed in parallel, allowing very low decoding latencies. Simulation results show that convergence is achieved within 10 iterations, although some performance degradation relative to the belief propagation (BP) algorithm is observed.
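The core numerical routine, gradient projection onto a box, is simple to sketch. The paper's objective encodes the parity checks; here a plain quadratic with a target outside the box stands in as a placeholder so that the projection behaviour is visible.

```python
import numpy as np

def project_box(x, lo=0.0, hi=1.0):
    """Euclidean projection onto the box [lo, hi]^n (the 'box' set):
    for a box, projection is just coordinate-wise clipping."""
    return np.clip(x, lo, hi)

def gradient_projection(grad, x0, step=0.1, iters=100):
    """Generic gradient-projection descent: take a gradient step, then
    project the iterate back onto the feasible box."""
    x = project_box(np.asarray(x0, dtype=float))
    for _ in range(iters):
        x = project_box(x - step * grad(x))
    return x

# Placeholder objective f(x) = ||x - t||^2 with t partly outside [0,1]^3:
t = np.array([1.5, -0.5, 0.3])
grad = lambda x: 2 * (x - t)
x_star = gradient_projection(grad, np.zeros(3))   # converges to clip(t)
```

In the decoder described above, the gradient of the parity-check objective replaces this placeholder, each variable node's coordinate update is independent, and the iteration is initialized at the channel observation.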
9.
Classification of noisy signals using fuzzy ARTMAP neural networks
This paper describes an approach to the classification of noisy signals based on the fuzzy ARTMAP neural network (FAMNN). The proposed method is a modification of the testing phase of fuzzy ARTMAP that exhibits superior generalization performance, in the presence of noise, compared to the standard fuzzy ARTMAP. An application to textured gray-scale image segmentation is presented. The superiority of the proposed modification over the standard fuzzy ARTMAP is established through a number of experiments using various texture sets, feature vectors, and noise types. The texture sets include various aerial photos as well as samples from the Brodatz album. Furthermore, the classification performance of the standard and the modified fuzzy ARTMAP is compared for different network sizes. Classification results illustrating the performance of the modified algorithm and the FAMNN are presented.
10.
The current study proposes decoding algorithms for low-density parity-check (LDPC) codes that offer competitive performance-complexity trade-offs relative to some of the most efficient existing decoding techniques. Unlike existing low-complexity algorithms, which are essentially reduced-complexity variations of the classical belief propagation algorithm, the starting point of the developed algorithms is the gradient projections (GP) decoding technique proposed by Kasparis and Evans (2007). The first part of this paper is concerned with the GP algorithm itself, and specifically with determining bounds on the step-size parameter over which convergence is guaranteed. The GP algorithm is then reformulated as a message-passing routine on a Tanner graph, and this new formulation allows the development of new low-complexity decoding routines. Simulation evaluations, performed mainly for geometry-based LDPC constructions, show that the new variations achieve performances and per-iteration complexities similar to state-of-the-art algorithms. However, the developed algorithms offer the implementation advantages that the memory-storage requirement is significantly reduced, and that performance and convergence speed can be finely traded off by tuning the step-size parameter.
Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号