Similar Documents
20 similar documents found.
1.
2.
A system developed for use in contact lens research is described. The system provides data entry and retrieval, graphic image capture and analysis, and query and ASCII interface capabilities. Procedures for accessing the collected data are explained.

3.
A system for the collection and computerized storage, retrieval, and analysis of clinical data, based on collaboration between research clinicians and operational research workers, is described. The system operates successfully and provides clinicians with a powerful aid to research management. It forces them to evaluate closely what data they collect and why, enabling them to consider a wide range of factors relevant to the patient's behavior. It is thus possible to detect differences early in a study, evaluate explanations for these differences, and alter treatment accordingly.

4.
Automated correlation of ECG history for early detection of heart disease, especially among the young, has been a matter of increasing interest. However, each electrocardiogram, recorded say a few months apart, generates anywhere from 600 to 2400 digitized data points, so statistical methods cannot be applied directly. An information-compression step suited to such data is presented in this paper, and a prediction procedure is developed for forecasting waveform changes. Specifically, each ECG lead is digitized and represented by its z-domain modes. These modes are found to exhibit continuity in time, from month to month and year to year, except in the event of major physiological changes such as after surgery, thus lending themselves ideally to statistical prediction. To enhance discrimination of the subtle changes in the P, QRS, and T complexes, the derivatives of the waves are employed for mode extraction. This signifies a departure from previous efforts in ECG representation: otherwise, important changes in the waves can remain undetected through mode extraction even though the human eye can perceive them rather easily in the recorded traces.
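As a rough illustration of this representation, the sketch below differentiates a digitized lead, fits an autoregressive model by the Yule–Walker method, and takes the roots of the AR polynomial as the z-domain modes. The AR fit and the model order are assumptions standing in for the paper's exact mode-extraction procedure.

```python
import numpy as np

def zdomain_modes(lead, order=12):
    """Sketch: estimate z-domain modes of one digitized ECG lead.

    The wave derivative is used, as in the paper; the autoregressive
    (Yule-Walker) fit is an assumed stand-in for the paper's
    information-compression step, and `order` is illustrative.
    """
    d = np.diff(np.asarray(lead, dtype=float))   # derivative of the wave
    d -= d.mean()
    # Autocorrelation at lags 0..order, then solve the Yule-Walker system.
    r = np.correlate(d, d, mode="full")[d.size - 1:][:order + 1]
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])       # AR coefficients
    # Modes = roots of the AR polynomial 1 - a1*z^-1 - ... - ap*z^-p.
    return np.roots(np.concatenate(([1.0], -a)))
```

Month-to-month continuity can then be checked by tracking each root's magnitude and phase across successive recordings.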

5.

Vessel extraction from retinal fundus images is essential for the diagnosis of ophthalmologic diseases such as glaucoma, diabetic retinopathy, and hypertension. It is a challenging task because of the noise intermixed with thin vessels. In this article, we propose an improved vessel extraction scheme for retinal fundus images. First, a mathematical morphological operation is performed on each plane of the RGB image to remove the vessels, yielding an estimate of the noise in the image. Next, the original RGB image and the vessel-removed RGB image are transformed into negative gray-scale images. These negative gray-scale images are subtracted and binarized (BW1) by leveling the image. The result still contains some granular noise, which is removed based on the area of connected components. Further, the previously detected vessels are replaced in the gray-scale image with the mean value of the gray-scale image, and the image is then enhanced to bring out the thin vessels. The enhanced image is binarized to obtain the thin vessels (BW2). Finally, the thin-vessel image (BW2) is merged with the previously obtained binary image (BW1) to yield the final vessel-extracted image. To analyze the performance of the proposed method, we experimented on the publicly available DRIVE dataset. The algorithm provides satisfactory performance, with sensitivity, specificity, and accuracy of 0.7260, 0.9802, and 0.9563 respectively, which is better than most recent works.
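A condensed sketch of the first stage (BW1) of such a pipeline is given below using OpenCV. The elliptical structuring element, the use of Otsu's method for the "leveling" step, and the minimum component area are all assumptions, not the article's exact parameters.

```python
import cv2
import numpy as np

def coarse_vessels_bw1(rgb, kernel=11, min_area=30):
    """Sketch of the coarse-vessel stage (BW1); parameters illustrative."""
    # Morphological closing on each channel suppresses thin dark vessels.
    se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel, kernel))
    closed = cv2.morphologyEx(rgb, cv2.MORPH_CLOSE, se)
    # Negative gray-scale images of the original and vessel-removed images.
    g1 = 255 - cv2.cvtColor(rgb, cv2.COLOR_RGB2GRAY)
    g2 = 255 - cv2.cvtColor(closed, cv2.COLOR_RGB2GRAY)
    diff = cv2.subtract(g1, g2)
    # Otsu thresholding assumed as the "leveling" binarization step.
    _, bw1 = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Drop small connected components (granular noise).
    n, lab, stats, _ = cv2.connectedComponentsWithStats(bw1)
    out = np.zeros_like(bw1)
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            out[lab == i] = 255
    return out
```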


6.
This paper proposes a fully automated information extraction methodology for weblogs. The methodology integrates a set of approaches based on web feeds and HTML processing for extracting weblog properties. It includes a model for generating a wrapper that exploits web feeds to derive a set of extraction rules automatically. Instead of performing a pairwise comparison between posts, the model matches the values of the web feeds against their corresponding HTML elements retrieved from multiple weblog posts, and it adopts a probabilistic approach for deriving the rules and automating wrapper generation. An evaluation of the model on a collection of weblogs reports a prediction accuracy of 89%. These results show that the proposed technique enables robust extraction of weblog properties and can be applied across the blogosphere.
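The matching step might look like the sketch below, which pairs feed values with HTML elements and votes on element paths across posts. The path encoding and the majority vote are simplifications of the paper's probabilistic rule scoring; BeautifulSoup is an assumed dependency.

```python
from collections import Counter
from bs4 import BeautifulSoup  # assumed dependency for HTML parsing

def derive_rules(posts):
    """Sketch: derive per-property extraction rules for one weblog.

    `posts` is a list of (html, feed) pairs, where `feed` is a dict of
    property -> value taken from the web feed (e.g. {"title": ...}).
    Matching feed values to HTML elements across posts and voting on the
    most frequent element path approximates probabilistic rule scoring.
    """
    votes = {}
    for html, feed in posts:
        soup = BeautifulSoup(html, "html.parser")
        for prop, value in feed.items():
            for el in soup.find_all(True):
                if el.get_text(strip=True) == value.strip():
                    chain = [p.name for p in el.parents if p.name != "[document]"]
                    path = "/".join(reversed(chain)) + "/" + el.name
                    votes.setdefault(prop, Counter())[path] += 1
                    break   # first match per post is enough for voting
    return {p: c.most_common(1)[0][0] for p, c in votes.items()}
```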

7.
8.
9.
Pre-processing of human chromosome images in a computer-based automatic classifier is assessed by comparing five types of filtering with no filtering. Assessment is based on the visual appearance of the images after pre-processing, on the error of an automatic centromere finder in the classifier, and on the error in the initial segmentation of object images. Smoothing parallel to the image contours followed by a Laplace filter produces the best results for centromere finding, but at the expense of fragmenting a greater proportion of chromosome images than the other techniques.
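For the best-performing combination, a minimal sketch is shown below; SciPy's isotropic Gaussian stands in for contour-parallel smoothing, which is not available off the shelf, and `sigma` is illustrative.

```python
import numpy as np
from scipy import ndimage

def smooth_then_laplace(img, sigma=1.5):
    """Sketch of the best-performing pre-processing combination.

    An isotropic Gaussian stands in for the paper's contour-parallel
    smoothing (assumption); the Laplace filter then enhances the
    chromosome boundaries used by the centromere finder.
    """
    return ndimage.gaussian_laplace(np.asarray(img, dtype=float), sigma=sigma)
```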

10.
Extracting edges from noisy images is of practical significance in applications that rely on visual input. This paper describes a new edge extraction technique developed specifically for noisy images, which eliminates the need for noise-removal preprocessing or postprocessing. The algorithm is based on parallel statistical tests in which indeterminate decisions are allowed. A number of well-chosen examples demonstrate the capabilities of the new algorithm for noisy as well as noise-free images.
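A minimal sketch of one such test appears below: a two-sample t statistic over two half-neighbourhoods of a pixel is mapped to a three-way decision. The two thresholds are illustrative; the paper's actual tests may differ.

```python
import numpy as np

def edge_decision(left, right, t_lo=1.0, t_hi=3.0):
    """Three-way edge test on two half-neighbourhoods of a pixel.

    A hedged sketch of statistical testing with indeterminate decisions:
    the t statistic is mapped to edge / no-edge / indeterminate using
    two thresholds (assumed values).
    """
    m1, m2 = left.mean(), right.mean()
    s = np.sqrt(left.var(ddof=1) / left.size + right.var(ddof=1) / right.size)
    t = abs(m1 - m2) / (s + 1e-9)
    if t >= t_hi:
        return "edge"
    if t <= t_lo:
        return "no-edge"
    return "indeterminate"   # deferred to neighbouring tests
```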

11.
The alluvial clay plains of the Murray–Darling Basin (MDB) have been extensively developed for irrigated agricultural production. Whilst irrigation has brought economic prosperity, there have been isolated environmental impacts. This is because the plains were formed by a system of ancient streams (i.e. prior streams and palaeochannels) characterised by coarse-textured sediments that are susceptible to deep drainage. To improve irrigation efficiency and natural resource management outcomes, information is required to characterise the connectivity of prior stream channels with underlying migrational channel deposits (i.e. palaeochannels). One option is the use of electromagnetic (EM) induction instruments, which measure the apparent soil electrical conductivity (σa, mS/m). In this paper, we describe how σa collected with a next-generation DUALEM-421 and an EM34 can be used with a joint-inversion algorithm (EM4Soil) to generate a 2D model of electrical conductivity (σ, mS/m) across an irrigated cotton-growing field located on a Quaternary alluvial clay plain in the lower Gwydir valley of NSW (Australia). The results compare favourably with existing pedological and stratigraphic knowledge. On the clay alluvial plain, the accumulation of aeolian and cyclical salt in the root zone and the depth of clay alluvium are discerned by the DUALEM-421 and the EM34, respectively. In addition, the approach resolves the location of buried migrational channel deposits (i.e. palaeochannels) underlying the clay plain and the connectivity of these coarser sediments with a prior stream channel. Quantitatively, the correlation between estimated σ and measured soil properties was greatest when the DUALEM-421 and EM34 data were jointly inverted and when predicting EC1:5 (r² = 0.61).
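As a toy illustration of joint inversion (not EM4Soil itself), the sketch below fits a two-layer conductivity model to apparent-conductivity readings from several coil separations using McNeill's low-induction-number cumulative response. The readings are hypothetical; the separations match the DUALEM-421 (1, 2, 4 m) and EM34 (10, 20, 40 m).

```python
import numpy as np
from scipy.optimize import least_squares

def R(z):
    # Cumulative response of vertical-dipole coils at normalized depth z
    # (McNeill's low-induction-number approximation).
    return 1.0 / np.sqrt(4.0 * z ** 2 + 1.0)

def forward(params, seps):
    """Apparent conductivity (mS/m) of a two-layer earth with
    conductivities s1, s2 and interface depth d, at coil separations `seps`."""
    s1, s2, d = params
    z = d / np.asarray(seps)
    return s1 * (1.0 - R(z)) + s2 * R(z)

# Hypothetical joint readings (mS/m) across both instruments' separations.
seps = np.array([1.0, 2.0, 4.0, 10.0, 20.0, 40.0])
obs = np.array([55.0, 60.0, 68.0, 80.0, 88.0, 92.0])
fit = least_squares(lambda p: forward(p, seps) - obs,
                    x0=[50.0, 100.0, 2.0],
                    bounds=([0.0, 0.0, 0.1], [500.0, 500.0, 20.0]))
print(fit.x)  # estimated (sigma1, sigma2, interface depth)
```

Inverting both instruments' readings together constrains shallow and deep conductivity simultaneously, which is the point of the joint inversion described above.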

12.
Velocity picking is the problem of picking velocity–time pairs based on a coherence metric between multiple seismic signals. Coherence as a function of velocity and time can be expressed as a 2D color semblance velocity image. Currently, humans pick velocities by inspecting the semblance velocity image; this process can take days or even weeks to complete for a seismic survey. The problem can be posed as a geometric feature-matching problem. A feature extraction algorithm recognizes islands (peaks) of maximum semblance in the semblance velocity image; a heuristic combinatorial matching process then finds a subset of peaks that maximizes the coherence metric. The peaks define a polyline through the image, and coherence is measured in terms of the summed velocity under the polyline and the smoothness of the polyline. Our best algorithm includes a constraint favoring solutions near the median solution for the local area under consideration. First, each image is processed independently; a second optimization pass then includes proximity to the median as an additional criterion. Our results are similar to those produced by human experts.
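A simple stand-in for the matching step is sketched below: dynamic programming picks, per time row, the velocity that maximizes accumulated semblance minus a smoothness penalty on velocity jumps. The penalty weight is illustrative and the median-proximity constraint is omitted.

```python
import numpy as np

def pick_polyline(semblance, smooth=0.5):
    """Sketch: pick a velocity polyline through a (time x velocity)
    semblance image by dynamic programming (an assumed stand-in for the
    paper's heuristic combinatorial peak matching)."""
    nt, nv = semblance.shape
    v = np.arange(nv)
    score = semblance[0].astype(float).copy()
    back = np.zeros((nt, nv), dtype=int)
    for t in range(1, nt):
        # total[i, j]: score of reaching velocity i from previous velocity j.
        total = score[None, :] - smooth * (v[:, None] - v[None, :]) ** 2
        back[t] = total.argmax(axis=1)
        score = semblance[t] + total.max(axis=1)
    picks = [int(score.argmax())]
    for t in range(nt - 1, 0, -1):
        picks.append(int(back[t, picks[-1]]))
    return picks[::-1]   # picked velocity index per time sample
```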

13.
14.
15.
Detecting cancer at an early stage improves patient prognosis and treatment planning. Even though several preliminary tests and non-invasive procedures are available for detecting cancer in various organs, a histopathology study is inevitable and is considered the gold standard in cancer diagnosis. As the cost of electronic components has fallen, computers with large memory capacity and better processing capability have been built, and imaging modalities have developed considerably. Computers now help doctors interpret medical images during diagnosis, giving rise to the area of Computer Aided/Assisted Diagnosis (CAD). Consequently, diagnosis procedures become reproducible, reliable, and less subject to observer variation. This survey explores the state-of-the-art materials and methods that have been used in CAD to detect cancer from histopathology images.

16.
The design of a system to extract information automatically from paper-based maps and answer queries about spatial features and the structure of geographic data is considered. The foundation of such a system is a set of image-analysis algorithms for extracting spatial features. Efficient algorithms have been developed to detect symbols, identify and track various types of lines, follow closed contours, compute distances, find shortest paths, etc., from simplified map images. A query processor analyzes queries presented by the user in a predefined syntax, controls the operation of the image-analysis algorithms, and interacts with the user. The query processor is written in Lisp and calls image-analysis routines written in Fortran.
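A shortest-path query over a road graph produced by the line-tracking routines might be answered as in the sketch below (Dijkstra's algorithm); the graph format is hypothetical.

```python
import heapq

def shortest_path(graph, src, dst):
    """Sketch: answer a 'shortest path' map query.

    `graph` maps node -> list of (neighbor, distance) pairs, a hypothetical
    encoding of roads extracted by the line-tracking algorithms.
    """
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue   # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, u = [dst], dst
    while u != src:
        u = prev[u]
        path.append(u)
    return path[::-1], dist[dst]
```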

17.
S.L. Wang, W.H. Lau, S.H. Leung, Pattern Recognition, 2004, 37(12): 2375–2387
Visual information from lip shapes and movements helps improve the accuracy and robustness of a speech recognition system. In this paper, a new region-based lip contour extraction algorithm that combines the merits of the point-based model and the parametric model is presented. The algorithm uses a 16-point lip model to describe the lip contour. Given a robust probability map of the color lip image generated by the FCMS (fuzzy clustering method incorporating shape function) algorithm, a region-based cost function that maximizes the joint probability of the lip and non-lip regions can be established. An iterative point-driven optimization procedure is then developed to fit the lip model to the probability map. In each iteration, the adjustment of the 16 lip points is governed by three quadratic curves that constrain the points to form a physical lip shape. Experiments show that the proposed approach provides satisfactory results for 5000 unadorned lip images of over 20 individuals. A real-time lip contour extraction system has also been implemented.
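The region-based cost might be evaluated as in the sketch below, given the FCMS probability map and the region enclosed by the current 16-point contour; the log-probability form is an assumption consistent with, but not necessarily identical to, the paper's cost function.

```python
import numpy as np

def region_cost(prob_map, lip_mask):
    """Sketch of a region-based cost over a lip-probability map.

    `prob_map` holds per-pixel lip probabilities (as produced by FCMS);
    `lip_mask` is the boolean region enclosed by the 16-point contour.
    Maximizing the joint log-probability of lip pixels inside and
    non-lip pixels outside corresponds to minimizing this cost.
    """
    eps = 1e-9
    inside = np.log(prob_map[lip_mask] + eps).mean()
    outside = np.log(1.0 - prob_map[~lip_mask] + eps).mean()
    return -(inside + outside)   # lower cost = better contour fit
```

The point-driven optimizer would then adjust the 16 points, under the quadratic-curve constraints, to reduce this cost.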

18.
Automatic hair extraction from a 2D image has long been a challenging problem, especially when complex backgrounds and a wide variety of hairstyles are involved. This paper contributes in three aspects. First, it proposes a novel framework that combines face detection, outlier-aware initial stroke placement, and matting to extract the desired hair region from an input image. Second, it introduces an alpha space to facilitate the choice of matting parameters. Third, it defines a new comparison metric well suited to alpha matte comparison. Our results show that, compared with manually drawn trimaps for hair extraction, the proposed automatic algorithm achieves about 86.2% extraction accuracy.
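A rough sketch of the framework's flow is below, with GrabCut as an assumed stand-in for the matting step and simple seed bands standing in for outlier-aware stroke placement; all offsets are illustrative and a single detectable frontal face is assumed.

```python
import cv2
import numpy as np

def rough_hair_mask(img_bgr):
    """Sketch: face detection seeds a GrabCut segmentation of the hair.

    GrabCut is an assumed stand-in for the paper's matting step; the
    seed regions below stand in for its stroke placement.
    """
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    x, y, w, h = cascade.detectMultiScale(gray)[0]   # assumes one face found
    mask = np.full(img_bgr.shape[:2], cv2.GC_PR_BGD, np.uint8)
    # Probable hair: a band above and beside the detected face box.
    mask[max(0, y - h): y + h // 4, max(0, x - w // 4): x + w + w // 4] = cv2.GC_PR_FGD
    # Face interior is definitely not hair.
    mask[y + h // 3: y + h, x + w // 4: x + 3 * w // 4] = cv2.GC_BGD
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(img_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    return np.where(np.isin(mask, [cv2.GC_FGD, cv2.GC_PR_FGD]),
                    255, 0).astype(np.uint8)
```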

19.
20.
The problem addressed in this paper is the automatic extraction of names from a document image. Our approach relies on the combination of two complementary analyses. First, an image-based analysis exploits visual clues to select regions of interest in the document. Second, a text-based analysis searches for name patterns and low-level textual word features. The two analyses are then combined at the word level through a neural-network fusion scheme. Results reported on degraded documents, such as facsimiles and photocopied technical journals, demonstrate the value of the combined approach.
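The fusion step might look like the sketch below: a small network (weights assumed pre-trained) combines the two per-word scores into a name probability.

```python
import numpy as np

def fuse_word_scores(image_score, text_score, W1, b1, w2, b2):
    """Sketch: word-level fusion of the two complementary analyses.

    `image_score` comes from the image-based analysis and `text_score`
    from the textual one; a one-hidden-layer network (weights assumed
    already trained) maps them to the probability that the word is a name.
    """
    x = np.array([image_score, text_score], dtype=float)
    h = np.tanh(W1 @ x + b1)                 # hidden layer
    z = float(w2 @ h + b2)
    return 1.0 / (1.0 + np.exp(-z))          # sigmoid name probability
```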

