This study introduces a change detection model based on Neighborhood Correlation Image (NCI) logic. It rests on the observation that the same geographic area (e.g., a 3 × 3 pixel window) on two dates of imagery will tend to be highly correlated if little change has occurred, and uncorrelated where change has occurred. Computing this piecewise correlation between the two data sets provides valuable information on both the location and the magnitude of change, derived from contextual information within the specified neighborhood. Various neighborhood configurations (i.e., multi-level NCIs) were explored using high-spatial-resolution multispectral imagery: smaller neighborhoods captured detailed change information (such as a new patio added to an existing building) at the cost of introducing some noise (such as changes in shadows), while larger neighborhoods removed this noise but also discarded some genuine change information (such as changes along linear features). When combined with image classification using a machine-learning decision tree (C5.0), classifications based on multi-level NCIs yielded superior results (e.g., a Kappa of 0.94 with a 3-pixel circular-radius neighborhood) compared with a classification that did not incorporate NCIs (Kappa = 0.86).
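To make the NCI computation concrete, here is a minimal sketch that derives a correlation band from two co-registered single-band arrays; the function name, the pure-NumPy loop, and the default radius are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of a Neighborhood Correlation Image: Pearson correlation of
# two co-registered dates inside each sliding window (names are illustrative).
import numpy as np

def neighborhood_correlation(date1, date2, radius=1):
    """Per-pixel correlation of date1 vs. date2 in each (2r+1) x (2r+1) window."""
    h, w = date1.shape
    nci = np.zeros((h, w))
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            a = date1[y-radius:y+radius+1, x-radius:x+radius+1].ravel()
            b = date2[y-radius:y+radius+1, x-radius:x+radius+1].ravel()
            denom = a.std() * b.std()
            # Low |r| marks neighborhoods where the two dates disagree (change).
            nci[y, x] = ((a - a.mean()) * (b - b.mean())).mean() / denom if denom else 0.0
    return nci
```

Thresholding low correlation values then flags candidate change pixels, and increasing `radius` reproduces the trade-off described above: less shadow noise, but some fine change detail is lost.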
Semantics-preserving dimensionality reduction refers to the problem of selecting those input features that are most predictive of a given outcome, a problem encountered in many areas such as machine learning, pattern recognition, and signal processing. It has found successful application in tasks involving data sets with huge numbers of features (on the order of tens of thousands), which would otherwise be impossible to process further; recent examples include text processing and Web content classification. One of the many successful applications of rough set theory has been to this feature selection problem. This paper reviews techniques that preserve the underlying semantics of the data, using crisp and fuzzy rough set-based methodologies, and experimentally compares several rough set-based approaches to feature selection. Additionally, a new area in feature selection, feature grouping, is highlighted and a rough set-based feature grouping technique is detailed.
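As a concrete illustration of the rough set approach, the sketch below performs a greedy, dependency-driven search in the spirit of the QuickReduct algorithm; the data layout (rows of discrete attribute values plus decision labels) and all identifiers are assumptions made for this example, not code from the paper.

```python
# Hedged sketch of greedy rough-set feature selection (QuickReduct-style).
from collections import defaultdict

def dependency(rows, labels, attrs):
    """gamma(attrs) = |positive region| / |U|: the fraction of objects whose
    equivalence class under `attrs` maps to a single decision label."""
    classes = defaultdict(list)
    for row, label in zip(rows, labels):
        classes[tuple(row[a] for a in attrs)].append(label)
    consistent = sum(len(v) for v in classes.values() if len(set(v)) == 1)
    return consistent / len(rows)

def quickreduct(rows, labels, all_attrs):
    """Greedily add whichever attribute most increases the dependency degree."""
    reduct, best = [], 0.0
    target = dependency(rows, labels, all_attrs)
    while best < target:
        gains = {a: dependency(rows, labels, reduct + [a])
                 for a in all_attrs if a not in reduct}
        a = max(gains, key=gains.get)
        if gains[a] <= best:          # no attribute improves discernibility
            break
        reduct.append(a)
        best = gains[a]
    return reduct

rows = [{'a': 0, 'b': 1}, {'a': 0, 'b': 0}, {'a': 1, 'b': 1}]
print(quickreduct(rows, [0, 1, 1], ['a', 'b']))   # -> ['a', 'b'] on this toy table
```

The semantics-preserving property comes from the fact that only original attributes are kept or dropped; no transformed features are constructed.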
Extensible Markup Language (XML) is fast becoming the new standard for data representation and exchange on the World Wide Web, e.g., in B2B e-commerce. Modern enterprises need to combine data from many sources in order to answer important business questions, creating a need for the integration of web-based XML data. Previous web-based data integration efforts have focused almost exclusively on the logical level of data models, so techniques are needed that work at the conceptual level and communicate the structure and properties of the available data to users at a higher level of abstraction. The most widely used conceptual model at present is the Unified Modeling Language (UML).
This paper presents algorithms for automatically constructing UML diagrams from XML DTDs, enabling fast and easy graphical browsing of XML data sources on the web. The algorithms capture important semantic properties of the XML data, such as precise cardinalities and aggregation (containment) relationships between the data elements. As a motivating application, it is shown how the generated diagrams can be used for the conceptual design of data warehouses based on web data, and an integration architecture is presented. The choice of data warehouses and On-Line Analytical Processing as the motivating application is another distinguishing feature of the approach.
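As an illustration of the kind of mapping involved, here is a hedged sketch (not the paper's algorithms) that reads simple <!ELEMENT> declarations from a DTD and converts the occurrence indicators ?, * and + into UML-style multiplicities on aggregation (containment) edges; it deliberately ignores nested groups, choice (|) content models, and #PCDATA.

```python
# Hedged sketch: derive (parent, child, multiplicity) edges for a UML class
# diagram from flat DTD element declarations. Simplified for illustration.
import re

MULTIPLICITY = {'': '1..1', '?': '0..1', '*': '0..*', '+': '1..*'}

def dtd_to_edges(dtd_text):
    """Yield (parent, child, multiplicity) triples from <!ELEMENT ...> rules."""
    for name, model in re.findall(r'<!ELEMENT\s+(\w+)\s+\(([^)]*)\)', dtd_text):
        for part in model.split(','):
            m = re.match(r'\s*(\w+)([?*+]?)\s*$', part)
            if m:  # skip #PCDATA and anything this simple pattern cannot parse
                yield name, m.group(1), MULTIPLICITY[m.group(2)]

dtd = '<!ELEMENT order (customer, line+, note?)>'
for edge in dtd_to_edges(dtd):
    print(edge)   # ('order', 'customer', '1..1'), ('order', 'line', '1..*'), ...
```

A full implementation along the lines the paper describes would also have to resolve choice and nested content models and decide when containment should be rendered as UML aggregation rather than plain association.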
Landmarks, or certain characteristic reference points, on cephalograms are used as a diagnostic aid in treatment planning by orthodontists. This work presents an algorithm for recognizing some anatomical features and locating landmarks on lateral skull X-rays (cephalograms) using digital image processing and feature recognition techniques. A cephalogram is digitized and stored in computer memory, and prefiltering is applied to remove image noise. The bony and flesh profiles of the jaw and front face are then traced, and from these profiles the algorithm locates 17 points on the image, some on bony features and others on soft tissue. To locate these points, edge-enhancement, thresholding, and edge-detection techniques are applied. The algorithm can be run on an IBM-compatible microcomputer.
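To illustrate the processing steps the abstract names, the sketch below chains edge enhancement (here, unsharp masking), gradient-based edge detection, and thresholding into a binary edge map from which profiles could be traced; the SciPy-based implementation and parameter values are assumptions, and the heuristics that place the 17 landmarks are not shown.

```python
# Hedged sketch of an edge-enhancement -> edge-detection -> thresholding
# pipeline for a grayscale cephalogram; parameters are illustrative guesses.
import numpy as np
from scipy import ndimage

def profile_edges(ceph, blur_sigma=2.0, thresh_frac=0.5):
    """Return a binary edge map of a grayscale cephalogram array."""
    img = ceph.astype(float)
    smooth = ndimage.gaussian_filter(img, blur_sigma)   # prefilter / denoise
    sharpened = img + (img - smooth)                    # unsharp masking
    gx = ndimage.sobel(sharpened, axis=1)               # horizontal gradient
    gy = ndimage.sobel(sharpened, axis=0)               # vertical gradient
    magnitude = np.hypot(gx, gy)                        # edge strength
    return magnitude > thresh_frac * magnitude.max()    # threshold -> edges
```

Tracing the bony and soft-tissue profiles, and walking along them to find the individual landmarks, would sit on top of an edge map like this one.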
A 5-HT3 receptor agonist based on a benzamide scaffold was identified in a screening of a small commercial compound library, and an elaborate SAR study originating from this hit was performed. The design, synthesis, and functional characterisation of benzamide analogues at the 5-HT3A receptor yielded substantial information about the analogues as 5-HT3 receptor agonists. However, the potencies of the derived analogues were not significantly improved over that of the initial hit. The benzamide scaffold constitutes a novel type of 5-HT3 receptor agonist, as it does not possess a positively charged functionality, which is essential for the binding of all orthosteric ligands to the receptor. Preliminary investigations suggest that the compounds may exert their effects on 5-HT3 receptors by binding to an allosteric site in the receptor complex.