The simultaneous determination of several chemical properties is a classical problem in analytical chemistry. The central difficulty is finding the subset of variables that best represents the compounds. These variables are obtained from a spectrophotometer, which measures hundreds of correlated variables related to physicochemical properties that can be used to estimate the component of interest. The problem is therefore the selection of a subset of informative and uncorrelated variables that minimizes the prediction error. Classical algorithms select a separate subset of variables for each compound considered. In this work we propose the use of SPEA-II (Strength Pareto Evolutionary Algorithm II) and show that this variable selection algorithm can select a single subset to be used for multiple determinations with multiple linear regression. The case study uses wheat data obtained by NIR (near-infrared) spectroscopy, where the objective is to determine a variable subgroup carrying information about protein content (%), test weight (kg/hl), WKT (wheat kernel texture) (%), and farinograph water absorption (%). Results from traditional multivariate calibration techniques such as SPA (successive projections algorithm), PLS (partial least squares), and a mono-objective genetic algorithm are presented for comparison. For the NIR spectral analysis of protein concentration in wheat, the number of selected variables was reduced from 775 spectral variables to just 10 by the SPEA-II algorithm. The prediction error decreased from 0.2 with the classical methods to 0.09 with the proposed approach, a reduction of 37%. The model using variables selected by SPEA-II had better prediction performance than the classical algorithms and full-spectrum partial least squares.
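A minimal sketch of the kind of multi-objective wavelength selection the abstract describes: each candidate subset of spectral variables is scored on two competing objectives (MLR prediction error and subset size), and the non-dominated subsets are kept. The synthetic data, dimensions, and random candidate "generation" below are illustrative stand-ins, not the authors' SPEA-II implementation or the wheat NIR data set.

```python
# Illustrative sketch (not the authors' implementation): scoring candidate
# wavelength subsets on the two objectives a SPEA-II style search optimizes,
# MLR prediction error and subset size, then keeping the Pareto front.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for NIR data: X has one column per spectral variable,
# y is the property of interest (e.g. protein content).
n_samples, n_wavelengths = 60, 200
X = rng.normal(size=(n_samples, n_wavelengths))
y = X[:, 10] * 0.7 + X[:, 50] * 0.3 + rng.normal(scale=0.1, size=n_samples)

X_cal, X_val = X[:40], X[40:]
y_cal, y_val = y[:40], y[40:]

def objectives(subset):
    """Return (RMSEP, number of variables) for one candidate subset."""
    cols = sorted(subset)
    A = np.column_stack([np.ones(len(X_cal)), X_cal[:, cols]])
    coef, *_ = np.linalg.lstsq(A, y_cal, rcond=None)           # MLR fit
    pred = np.column_stack([np.ones(len(X_val)), X_val[:, cols]]) @ coef
    rmsep = float(np.sqrt(np.mean((pred - y_val) ** 2)))
    return rmsep, len(cols)

def dominates(a, b):
    """Pareto dominance: a is no worse in both objectives, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Random candidate subsets stand in for one generation of the search.
population = [set(rng.choice(n_wavelengths, size=k, replace=False))
              for k in rng.integers(5, 20, size=30)]
scores = [objectives(s) for s in population]
pareto = [s for s, sc in zip(population, scores)
          if not any(dominates(o, sc) for o in scores if o != sc)]
print(f"{len(pareto)} non-dominated subsets in this generation")
```

In a full SPEA-II run, the non-dominated set would seed the external archive and drive fitness assignment and mating selection for the next generation.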
Using Asimov’s “Bicentennial Man” as a springboard, a number of metaethical issues concerning the emerging field of machine ethics are discussed. Although the ultimate goal of machine ethics is to create autonomous ethical machines, this presents a number of challenges. A good way to begin the task of making ethics computable is to create a program that enables a machine to act as an ethical advisor to human beings. This project, unlike creating an autonomous ethical machine, will not require that we make a judgment about the ethical status of the machine itself, a judgment that will be particularly difficult to make. Finally, it is argued that Asimov’s “three laws of robotics” are an unsatisfactory basis for machine ethics, regardless of the status of the machine.
A new approach using input-output techniques is proposed for the analysis of urban stormwater pollution caused by urban land development. The input-output model provides projections of sectoral outputs within an urban region. By defining land as an input to production, these output projections may be translated into projections of commercial and industrial land development. Furthermore, the closed version of the input-output model is used to project residential land development as a function of projected wage income. The pollutant generation in urban stormwater is related to the quantity of each category of land development by a pollutant coefficient matrix. Thus, the model can be used to predict the impact of various economic growth scenarios on pollution loadings in runoff water. This will help planners in assessing the environmental costs of various scenarios, and in preparing for remedial actions. A numerical example is provided to illustrate the applications of the model.
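A small numerical sketch of the projection chain described above, with purely illustrative matrices: sectoral outputs from the open Leontief model, commercial and industrial land development from land-input coefficients, and runoff loads from a pollutant coefficient matrix. The closed-model treatment of residential land via wage income is omitted for brevity.

```python
# Illustrative numbers only: economic scenario -> sectoral outputs ->
# land development -> stormwater pollutant loads.
import numpy as np

A = np.array([[0.20, 0.10],              # technical coefficients (2 sectors)
              [0.30, 0.25]])
final_demand = np.array([100.0, 80.0])   # projected final-demand scenario

# Leontief solution x = (I - A)^{-1} d gives sectoral output projections.
x = np.linalg.solve(np.eye(2) - A, final_demand)

# Land required per unit of sectoral output (e.g. hectares per unit output).
land_coeff = np.array([[0.05, 0.00],     # commercial land
                       [0.00, 0.08]])    # industrial land
land_developed = land_coeff @ x

# Pollutant loads per unit of developed land (rows: TSS, total phosphorus).
pollutant_coeff = np.array([[120.0, 300.0],
                            [  0.4,   1.1]])
loads = pollutant_coeff @ land_developed

print("sectoral outputs:", x)
print("land developed:", land_developed)
print("runoff loads (TSS, total P):", loads)
```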
In this paper we study the problem of asynchronous processors traversing a list with path compression. We show that if an atomic splice operation is available, the worst-case work for p processors traversing a list of length n is Θ(np^{1/2}). The splice operation can be generalized to remove k elements from the list. For the k-splice operation the worst-case work is Θ(np^{1/(k+1)}). This research was supported by an NSF Presidential Young Investigator Award CCR-8657562, Digital Equipment Corporation, NSF CER Grant CCR-861966, and NSF/DARPA Grant CCR-8907960. A preliminary version of this paper was presented at the Fourth Annual ACM Symposium on Parallel Algorithms and Architectures.
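As a rough illustration of the operation being analyzed, the sequential sketch below walks a list and, after each step, splices the previous node's pointer past the node just visited, so the path is compressed for later traversals. The asynchronous shared-memory setting and the atomicity of the splice, on which the work bounds depend, are not modeled here; the function and variable names are hypothetical.

```python
# Sequential sketch only: a "splice" redirects a pointer past one or more
# list elements; path compression applies it during traversal so that
# subsequent traversals do less work.
def splice(next_ptr, node, k=1):
    """Redirect node's pointer past its next k elements (a k-splice)."""
    target = node
    for _ in range(k):
        target = next_ptr[target]
    next_ptr[node] = next_ptr[target]

def traverse(next_ptr, start):
    """Walk to the tail; after each step, splice the previous node's
    pointer past the node just visited (path compression)."""
    prev, node = None, start
    while next_ptr[node] != node:          # the tail points to itself
        if prev is not None:
            splice(next_ptr, prev)         # prev now skips over node
        prev, node = node, next_ptr[node]
    return node

next_ptr = {0: 1, 1: 2, 2: 3, 3: 4, 4: 4}  # chain 0 -> 1 -> 2 -> 3 -> 4
print(traverse(next_ptr, 0), next_ptr)
# -> 4 {0: 2, 1: 3, 2: 4, 3: 4, 4: 4}
```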
A novel optical interconnection is introduced for a multistage optical switching network that uses orthogonally polarized data and address information. The network is unique in that the data information is never regenerated and remains in optical form throughout (i.e., it is never converted into electrical information). This has two main consequences: (1) the bandwidth of the data is not restricted by electrical circuit considerations, and (2) the optical interconnections from one stage of the network to the next must be highly efficient. The interconnection meets several goals: high efficiency, preservation of cross polarization of data and address, low cross talk between polarizations, good manufacturability, resistance to misalignment caused by thermal expansion, and absence of significant aberrations. In addition, synchronization of the signals is maintained, as the optical path lengths for all routes through the system are equal.
The goal of holographic particle velocimetry is to infer fluid velocity patterns from images reconstructed from doubly exposed holograms of fluid volumes seeded with small particles. The advantages offered by in-line holography in this context usually make it the method of choice, but seeding densities sufficient to achieve high spatial resolution in the sampling of the velocity fields cause serious degradation, through speckle, of the signal-to-noise ratio in the reconstructed images. The in-line method also leads to a great depth of field in paraxial viewing of reconstructed images, making it essentially impossible to estimate particle depth with useful accuracy. We present here an analysis showing that these limitations can be circumvented by variably scaled correlation, or wavelet transformation. The shift variables of the wavelet transform are provided automatically by the optical correlation methodology. The variable scaling of the wavelet transform derives, in this case, directly from the need to accommodate varying particle depths. To provide such scaling, we use a special optical system incorporating prescribed variability in spacings and focal length of lenses to scan through the range of particle depths. Calculation shows, among other benefits, improvement by approximately two orders of magnitude in depth resolution. A much higher signal-to-noise ratio together with faster data extraction and processing should be attainable.
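For reference, the variably scaled correlation invoked above has the form of the standard one-dimensional continuous wavelet transform,

W_f(a, b) = \frac{1}{\sqrt{a}} \int f(x)\, \psi^{*}\!\left(\frac{x - b}{a}\right) dx,

where, following the abstract's description, the shift b is supplied automatically by the optical correlator and the scale a is swept (via the variable lens spacings and focal lengths) to accommodate the range of particle depths. This is the textbook definition, not an expression taken from the paper itself.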
A model of pulsed photothermal radiometry (PPTR) based on optical diffusion theory is presented for a turbid, two-layer, semi-infinite medium containing a surface layer whose optical absorption and scattering properties differ from those of the underlying layer. Assuming one-dimensional geometry, we develop expressions for the depth-dependent fluence distributions and radiant-energy-density profiles and for the time dependence of the PPTR signal. Experimental tests of the PPTR model in a series of layered phantoms of varying optical properties are described. The results of these tests are consistent with the model predictions.
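A commonly used one-dimensional PPTR relation (not necessarily the exact expressions derived in this paper) ties the detected signal to the subsurface temperature rise produced by the absorbed fluence:

S(t) \propto \int_0^\infty \mu_{IR}\, e^{-\mu_{IR} z}\, \Delta T(z, t)\, dz, \qquad \Delta T(z, 0) = \frac{\mu_a(z)\, \phi(z)}{\rho c},

where \mu_{IR} is the infrared absorption coefficient governing thermal emission, \mu_a(z) and \phi(z) are the layer-dependent optical absorption coefficient and fluence (here supplied by diffusion theory for the two-layer medium), \rho c is the volumetric heat capacity, and \Delta T(z, t) subsequently evolves by heat conduction.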