To characterize the pollution discharged into the Moselle River and some of its tributaries, two spectroscopic techniques were combined: UV-visible spectroscopy and synchronous fluorescence spectroscopy. UV-visible spectra were analysed using the maximum of the second derivative at 225 nm (related to nitrates) and the SUVA254 and E2/E3 indices (related to the nature of the organic matter). Synchronous fluorescence spectra (Δλ = 50 nm) showed different shapes depending on the type of pollution. The pollution results from anthropogenic activities: untreated domestic sewage due to misconnections in a periurban river, effluent from urban wastewater treatment plants (WWTPs), agricultural runoff (nitrates) in several streams, discharge from a paper mill (humic-like substances from wood processing), and discharge from steel mills (PAHs).
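As a minimal sketch of the two UV-visible indices named above: SUVA254 is conventionally the absorbance at 254 nm normalized by dissolved organic carbon (DOC) and path length, and E2/E3 is the ratio of absorbance at 250 nm to 365 nm. The spectrum and DOC values below are illustrative, not data from the study.

```python
# Hypothetical helpers for the two UV-vis organic-matter indices; the
# absorbance and DOC values are made up for illustration.

def suva254(a254: float, doc_mg_per_l: float, path_cm: float = 1.0) -> float:
    """Specific UV absorbance at 254 nm, in L mg-C^-1 m^-1 (1 cm cell by default)."""
    return (a254 / path_cm) / doc_mg_per_l * 100.0

def e2_e3(a250: float, a365: float) -> float:
    """E2/E3 ratio; higher values suggest lower-molecular-weight organic matter."""
    return a250 / a365

spectrum = {250: 0.210, 254: 0.195, 365: 0.042}  # absorbance units, illustrative
print(suva254(spectrum[254], doc_mg_per_l=4.5))  # ~4.3 L mg-C^-1 m^-1
print(e2_e3(spectrum[250], spectrum[365]))       # ~5.0
```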
A vast amount of valuable human knowledge is recorded in documents. The rapid growth in the number of machine-readable documents for public or private access necessitates the use of automatic text classification. While a great deal of effort has been devoted to Western languages—mostly English—minimal experimentation has been done with Arabic. This paper presents, first, an up-to-date review of the work done in the field of Arabic text classification and, second, a large and diverse dataset that can be used for benchmarking Arabic text classification algorithms. The different techniques derived from the literature review are illustrated by their application to the proposed dataset. The results of various feature selections, weighting methods, and classification algorithms show, on average, the superiority of support vector machines, followed by the decision tree algorithm (C4.5) and Naïve Bayes. The best classification accuracy was 97% for the Islamic Topics dataset, and the lowest was 61% for the Arabic Poems dataset.
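One of the compared algorithms, multinomial Naïve Bayes, can be sketched in a few lines. The toy corpus, whitespace tokenizer, and Laplace smoothing below are illustrative assumptions, not the paper's benchmark setup.

```python
# Minimal multinomial Naive Bayes text classifier (illustrative sketch).
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (label, text). Returns log-priors, log-likelihoods, vocab."""
    class_words = defaultdict(list)
    for label, text in docs:
        class_words[label].extend(text.lower().split())
    vocab = {w for words in class_words.values() for w in words}
    priors = {c: math.log(sum(1 for l, _ in docs if l == c) / len(docs))
              for c in class_words}
    likelihoods = {}
    for c, words in class_words.items():
        counts = Counter(words)
        total = len(words) + len(vocab)  # Laplace (add-one) smoothing
        likelihoods[c] = {w: math.log((counts[w] + 1) / total) for w in vocab}
    return priors, likelihoods, vocab

def classify(text, priors, likelihoods, vocab):
    scores = {c: priors[c] + sum(likelihoods[c][w]
                                 for w in text.lower().split() if w in vocab)
              for c in priors}
    return max(scores, key=scores.get)

docs = [("sport", "match goal team"), ("sport", "team win match"),
        ("finance", "market stock price"), ("finance", "price market trade")]
priors, lik, vocab = train_nb(docs)
print(classify("goal team win", priors, lik, vocab))      # sport
print(classify("stock price trade", priors, lik, vocab))  # finance
```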
We describe a numerical model of an internal pellet target for studying beam dynamics in storage rings where nuclear experiments with this type of target are planned. In this model, a Monte Carlo algorithm is used to evaluate the particle coordinates and momentum deviation as functions of time and of the target parameters. Because of the statistical character of the pellet distribution in the target, analytical techniques are not applicable. The same holds for the particle distribution in the stored beam, which is influenced by various effects (such as cooling, intra-beam scattering, betatron oscillations, and space charge). Only a Monte Carlo treatment of energy straggling, combined with the pellet distribution in the target, is therefore appropriate.
Program summary
Program title: PETAG01
Catalogue identifier: ADZV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADZV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1068
No. of bytes in distributed program, including test data, etc.: 11 314
Distribution format: tar.gz
Programming language: Fortran 77, C/C++
Computer: Platform independent
Operating system: MS Windows 95/2000/XP, Linux (Unix)
RAM: 128 MB
Classification: 11.10
Nature of problem: Particle beam dynamics with use of the pellet target.
Solution method: Monte Carlo with analytical approximation.
Running time: dozens of seconds
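The Monte Carlo idea behind such a model can be sketched very simply: on each turn a stored particle crosses the target region, hits a pellet with some probability, and a hit adds a random energy loss (straggling). The hit probability and loss distribution below are illustrative stand-ins, not PETAG01's actual physics.

```python
# Hedged sketch of per-turn Monte Carlo sampling of pellet encounters.
import random

def track(n_turns, hit_prob=0.02, mean_loss=1e-4, seed=42):
    """Return the particle's accumulated relative momentum deviation after n_turns."""
    rng = random.Random(seed)
    dp_over_p = 0.0
    for _ in range(n_turns):
        if rng.random() < hit_prob:                        # pellet hit this turn
            dp_over_p -= rng.expovariate(1.0 / mean_loss)  # stochastic energy loss
    return dp_over_p

dev = track(10_000)
print(dev)  # negative on average: the particle loses energy over many turns
```

A full model would add cooling, betatron motion, and the spatial pellet distribution on top of this stochastic kernel.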
Segmenting the heart in medical images is a challenging and important task for many applications. In particular, segmenting the heart in CT images is very useful for cardiology and for oncological applications such as radiotherapy. Although the majority of methods in the literature are designed for ventricle segmentation, there is real interest in segmenting the heart as a whole in this modality. In this paper, we address this problem and propose an automatic and robust method based on anatomical knowledge about the heart, in particular its position with respect to the lungs. This knowledge is represented in a fuzzy formalism and is used both to define a region of interest and to drive the evolution of a deformable model that segments the heart inside this region. The proposed method has been applied to non-contrast CT images, and the results have been compared to manual segmentations of the heart, showing the good accuracy and high robustness of our approach.
Remote sensing of invasive species is a critical component of conservation and management efforts, but reliable methods for the detection of invaders have not been widely established. In Hawaiian forests, we recently found that invasive trees often have hyperspectral signatures distinct from those of native trees, but mapping based on spectral reflectance properties alone is confounded by issues of canopy senescence and mortality, intra- and inter-canopy gaps and shadowing, and terrain variability. We deployed a new hybrid airborne system combining the Carnegie Airborne Observatory (CAO) small-footprint light detection and ranging (LiDAR) system with the Airborne Visible and Infrared Imaging Spectrometer (AVIRIS) to map the three-dimensional spectral and structural properties of Hawaiian forests. The CAO and AVIRIS systems and data were fully integrated using in-flight and post-flight fusion techniques, facilitating an analysis of forest canopy properties to determine the presence and abundance of three highly invasive tree species in Hawaiian rainforests.
The LiDAR sub-system was used to model forest canopy height and top-of-canopy surfaces; these structural data allowed automated masking of forest gaps, intra- and inter-canopy shadows, and sub-threshold vegetation height in the AVIRIS images. The remaining sunlit canopy spectra were analyzed using spatially constrained spectral mixture analysis. The results of the combined LiDAR-spectroscopic analysis highlighted the location and fractional abundance of each invasive tree species throughout the rainforest sites. Field validation studies demonstrated error rates below 6.8% and 18.6% in the detection of invasive tree species at 7 m² and 2 m² minimum canopy cover thresholds, respectively. Our results show that full integration of imaging spectroscopy and LiDAR measurements provides enormous flexibility and analytical potential for studies of terrestrial ecosystems and the species contained within them.
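The per-pixel core of spectral mixture analysis can be sketched as follows: each sunlit-canopy spectrum is modeled as a fractional mix of endmember spectra, and the fractions are solved by least squares. The two-endmember closed form below (fraction clipped to [0, 1]) is a simplification of the spatially constrained approach, and the reflectance values are invented for illustration.

```python
# Two-endmember linear spectral unmixing (illustrative sketch).

def unmix_two(pixel, e1, e2):
    """Fraction f of endmember e1 minimizing ||pixel - (f*e1 + (1-f)*e2)||^2."""
    num = sum((p - b) * (a - b) for p, a, b in zip(pixel, e1, e2))
    den = sum((a - b) ** 2 for a, b in zip(e1, e2))
    f = num / den
    return min(max(f, 0.0), 1.0)  # enforce the physical constraint 0 <= f <= 1

invasive = [0.05, 0.30, 0.45]   # illustrative reflectance at three bands
native   = [0.04, 0.20, 0.25]
pixel    = [0.045, 0.25, 0.35]  # roughly an even mixture of the two
print(unmix_two(pixel, invasive, native))  # ~0.5
```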
We present a formal approach to study the evolution of biological networks. We use the Beta Workbench and its BlenX language to model and simulate networks in connection with evolutionary algorithms. Mutations are applied to the structure of BlenX programs, and networks are selected at each generation using a fitness function. The feasibility of the approach is illustrated with a simple example.
The effects of an educational electronic book (e-book) on 149 five- to six-year-old kindergarteners' emergent literacy levels were researched in two SES groups: low (LSES, 79 children) vs. middle (MSES, 70 children). In each SES group, children were randomly assigned to four groups. Three groups worked individually in one of three e-book activity modes—"Read story only", "Read with dictionary", or "Read and play"—during three similar activity sessions, and the fourth group served as a control that received the regular kindergarten program. Pre- and post-intervention emergent literacy measures included word meaning, word recognition, and phonological awareness. Results show, first, that word meaning of children from both middle and low SES improved following the educational e-book activity, regardless of mode. Second, LSES children's emergent literacy levels showed relatively greater improvement rates than did those of the MSES children. Third, children in the "Read with dictionary" and "Read and play" activity modes showed more improvement in their emergent literacy levels than did those in the "Read story only" mode. Implications for future research and for education are discussed.
Over the past decade, object recognition work has confounded voxel response detection with potential voxel class identification. Consequently, the claim that there are areas of the brain necessary and sufficient for object identification cannot be resolved with the existing associative methods (e.g., the general linear model) that are dominant in brain imaging. To explore this controversy, we trained full-brain (40,000 voxels) single-TR (repetition time) classifiers on data from 10 subjects in two different recognition tasks on the most controversial classes of stimuli (house and face) and obtained 97.4% median out-of-sample (unseen TRs) generalization. This performance allowed us to reliably and uniquely assay the classifier's voxel diagnosticity in all individual subjects' brains. In this two-class case, there may be specific areas diagnostic for house stimuli (e.g., LO) or for face stimuli (e.g., STS); however, in contrast to the detection results common in this literature, neither the fusiform face area nor the parahippocampal place area is shown to be uniquely diagnostic for faces or places, respectively.
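The idea of assaying per-voxel "diagnosticity" from a trained classifier can be sketched with a toy model: for a simple nearest-centroid classifier, the absolute class-centroid difference ranks how strongly each voxel separates the two classes. The tiny synthetic patterns below stand in for real fMRI TRs; this is not the authors' classifier.

```python
# Toy per-voxel diagnosticity from class centroids (illustrative sketch).

def centroids(trs, labels):
    """Mean pattern per class from rows of voxel values."""
    by_class = {}
    for tr, lab in zip(trs, labels):
        by_class.setdefault(lab, []).append(tr)
    return {lab: [sum(col) / len(col) for col in zip(*rows)]
            for lab, rows in by_class.items()}

def diagnosticity(cents):
    """Absolute centroid difference per voxel for a two-class problem."""
    a, b = cents.values()
    return [abs(x - y) for x, y in zip(a, b)]

# four "TRs" over three "voxels": voxel 0 separates the classes, voxel 2 does not
trs = [[1.0, 0.2, 0.5], [0.9, 0.1, 0.5], [0.1, 0.2, 0.5], [0.0, 0.3, 0.5]]
labels = ["face", "face", "house", "house"]
scores = diagnosticity(centroids(trs, labels))
print(scores.index(max(scores)))  # 0: the most diagnostic voxel
```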
Product development is an important but also dynamic, lengthy, and risky phase in the life of a new product. Optimising the product development phase through extensive knowledge of the procedures involved is believed to reduce risk and improve final product quality. Artificial intelligence and expert systems have been used successfully to optimise the development phase of some new products, as demonstrated in the first sections of this paper. This paper presents the first module of an expert system: a neural network architecture that can predict the reliability performance of a vehicle at later stages of its life using only information from a first inspection after the vehicle's prototype production. The paper demonstrates how a tool like a neural network can be designed and optimised for use in reliability performance prediction. It also presents an optimisation methodology that enabled the neural network to deal with the limited amount of available training data, common during new product development, and ultimately to achieve acceptable prediction performance with small error. A case example is presented to demonstrate the methodology.
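The small-sample difficulty the abstract alludes to is often handled by leave-one-out cross-validation, which squeezes an honest error estimate out of very few training examples. In the sketch below a one-feature linear model stands in for the neural network, and the inspection/failure figures are invented for illustration.

```python
# Leave-one-out error estimation for a tiny model (illustrative sketch).

def fit_line(xs, ys):
    """Ordinary least-squares line; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def loo_error(xs, ys):
    """Mean absolute leave-one-out prediction error."""
    errs = []
    for i in range(len(xs)):
        tx, ty = xs[:i] + xs[i+1:], ys[:i] + ys[i+1:]
        b, a = fit_line(tx, ty)
        errs.append(abs(ys[i] - (b * xs[i] + a)))
    return sum(errs) / len(errs)

first_inspection = [2.0, 3.0, 5.0, 6.0, 8.0]    # defects found at prototype stage
later_failures   = [4.1, 6.2, 9.8, 12.1, 16.0]  # failures later in service
print(loo_error(first_inspection, later_failures))  # small: the relation is nearly linear
```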
This paper gives a detailed analysis of the error surfaces of certain recurrent networks and explains some difficulties encountered in training recurrent networks. We show that these error surfaces contain many spurious valleys, and we analyze the mechanisms that cause the valleys to appear. We demonstrate that the principal mechanism can be understood through the analysis of the roots of random polynomials. This paper also provides suggestions for improvements in batch training procedures that can help avoid the difficulties caused by spurious valleys, thereby improving training speed and reliability.
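The random-polynomial result invoked above can be illustrated numerically: the roots of a polynomial with independent random coefficients cluster near the unit circle as the degree grows. The degree and seed below are arbitrary choices for the demonstration.

```python
# Roots of a random-coefficient polynomial cluster near the unit circle.
import numpy as np

rng = np.random.default_rng(0)
degree = 60
coeffs = rng.standard_normal(degree + 1)  # iid Gaussian coefficients
roots = np.roots(coeffs)                  # companion-matrix root finding

moduli = np.abs(roots)
near_unit = np.mean((moduli > 0.7) & (moduli < 1.3))
print(len(roots), float(np.median(moduli)), float(near_unit))
```

Most root moduli land close to 1, which is the concentration behavior the paper connects to spurious valleys in recurrent-network error surfaces.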