Similar Documents (20 results)
1.
Improving the efficiency of the carbon dioxide (CO2) capture process requires a good understanding of the intricate relationships among the parameters involved in the process. The objective of this paper is to study the relationships among the significant parameters affecting CO2 production. An enhanced understanding of these relationships supports prediction and optimization, thereby improving the efficiency of the CO2 capture process. Our modeling study used three years of operational data collected from the amine-based post-combustion CO2 capture process system at the International Test Centre (ITC) of CO2 Capture located in Regina, Saskatchewan, Canada. This paper describes the data modeling process using (1) neural network modeling combined with sensitivity analysis and (2) a neuro-fuzzy modeling technique. The results from the two modeling processes were compared from the perspectives of predictive accuracy, inclusion of parameters, and support for explication of the problem space. We conclude from the study that the neuro-fuzzy modeling technique achieved higher accuracy in predicting the CO2 production rate than the combined approach of neural network modeling and sensitivity analysis.
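A minimal sketch of the combined neural-network-plus-sensitivity-analysis approach, assuming scikit-learn; the synthetic inputs, network size and perturbation step are illustrative stand-ins, not the paper's actual ITC data or configuration.

```python
# Sketch: neural-network regression plus perturbation-based sensitivity
# analysis. Synthetic data stands in for the ITC plant measurements.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))                      # hypothetical process parameters
y = 2 * X[:, 0] + X[:, 1] ** 2 + 0.1 * rng.normal(size=500)  # stand-in CO2 rate

scaler = StandardScaler().fit(X)
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(scaler.transform(X), y)

# Perturbation-based sensitivity: mean absolute output change per input nudge.
base = net.predict(scaler.transform(X))
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] += 0.01                                # perturb one parameter at a time
    delta = net.predict(scaler.transform(Xp)) - base
    print(f"parameter {j}: sensitivity {np.abs(delta).mean():.4f}")
```

Parameters with the largest mean output change would be retained as the significant inputs in a study of this kind.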

2.
In the present study, the suitability of optical ASTER satellite data (with 9 spectral bands) for estimating the biomass of boreal forest stands on mineral soils was tested. The remote sensing data were analysed and tested together with stand-wise forest inventory data. Stand volume estimates were converted to aboveground tree biomass using biomass expansion factors, and the aboveground biomass of understory vegetation was predicted from stand age. Non-linear regression analysis and neural networks were applied to develop models predicting biomass from stand-wise ASTER reflectance. All ASTER bands appeared to be sensitive to tree biomass, in particular the green band 1. The relative estimation errors (RMSEr) of the total aboveground biomass of the forest stands were 44.7% and 41.0% using multiple regression analysis and neural networks, respectively. Although the estimation errors remained large, the predictions were relatively accurate in comparison to previous studies. Furthermore, the predictions obtained here were very close to the municipality-level mean values provided by the National Forest Inventory of Finland.
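A hedged sketch of the non-linear regression step, assuming SciPy's curve_fit; the exponential model form and all reflectance and biomass figures are invented for illustration and are not the study's fitted model.

```python
# Sketch: non-linear regression of stand biomass on one ASTER band
# reflectance, with the relative error (RMSEr) computed as in the abstract.
import numpy as np
from scipy.optimize import curve_fit

def model(reflectance, a, b):
    # Assumed form: biomass decays exponentially with green-band reflectance.
    return a * np.exp(-b * reflectance)

rng = np.random.default_rng(5)
refl = rng.uniform(0.05, 0.25, 80)                            # band-1 reflectance
biomass = 120 * np.exp(-8 * refl) * (1 + 0.2 * rng.normal(size=80))

params, _ = curve_fit(model, refl, biomass, p0=(100, 5))
pred = model(refl, *params)
rmse_r = 100 * np.sqrt(np.mean((pred - biomass) ** 2)) / biomass.mean()
print(f"fitted a={params[0]:.1f}, b={params[1]:.1f}, RMSEr={rmse_r:.1f}%")
```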

3.
This paper presents the development process of an expert decision support system for pre-filtering and analysis of data from the carbon dioxide (CO2) capture process. Chemical absorption has become one of the dominant CO2 capture technologies because of its efficiency and low cost. Since the chemical absorption process consists of dozens of components, it generates more than 100 different types of data. Monitoring this vast amount of data can be complex, so data filtering and analysis processes are desirable. Specifically, invalid data captured as the equipment is started and shut down need to be filtered out, and the filtered data need to be analyzed for different purposes. The expert decision support system for data pre-filtering and analysis not only filters out invalid data using different expert rules, but can also modify or reuse filtering settings and export the filtered data to various file formats for further analysis. During development of the expert decision support system, knowledge acquisition was emphasized. The system development process incorporated various technologies including the model-view-control (MVC) design pattern, embedded database technology, Java event-delivery techniques and the eXtensible Markup Language (XML). Sample sessions from system executions and results generated from pre-filtering the data are also discussed.
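The start-up/shut-down filtering rule described above might look like the following sketch; the field names, units and threshold are hypothetical, not taken from the ITC system.

```python
# Illustrative pre-filtering rule: drop records logged while the unit is
# starting up or shutting down, then export the valid data for analysis.
import pandas as pd

records = pd.DataFrame({
    "flue_gas_flow": [0.0, 5.2, 48.9, 50.1, 3.1],   # m3/h, made-up values
    "co2_production": [0.0, 0.4, 1.9, 2.0, 0.2],    # t/d, made-up values
})

MIN_FLOW = 10.0  # below this, the plant is assumed to be in start-up/shut-down

valid = records[records["flue_gas_flow"] >= MIN_FLOW]
valid.to_csv("filtered.csv", index=False)           # export for further analysis
print(f"kept {len(valid)} of {len(records)} records")
```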

4.
We propose a hybrid radial basis function network-data envelopment analysis (RBFN-DEA) neural network for classification problems. The procedure uses radial basis functions to map low-dimensional input data from the input space to a high-dimensional feature space where DEA can be used to learn the classification function. Using simulated datasets for a non-linearly separable binary classification problem, we illustrate how the RBFN-DEA neural network can be used to solve it. We also show how asymmetric misclassification costs can be incorporated in the hybrid RBFN-DEA model. Our preliminary experiments comparing the RBFN-DEA with feedforward and probabilistic neural networks show that the RBFN-DEA fares very well.
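A sketch of the RBF mapping stage under stated assumptions: Gaussian basis functions project the 2-D inputs into a 20-D feature space where the classes become (near-)linearly separable. A plain logistic regression stands in for the DEA stage to keep the example self-contained; it is not the authors' DEA formulation.

```python
# Sketch: Gaussian RBF feature mapping for a non-linearly separable problem.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(300, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(int)   # circular class boundary

centers = X[rng.choice(len(X), 20, replace=False)]    # RBF centres from the data
width = 0.5

def rbf_features(X):
    # Gaussian RBF activations: one high-dimensional feature per centre.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

clf = LogisticRegression(max_iter=1000).fit(rbf_features(X), y)
print("training accuracy:", clf.score(rbf_features(X), y))
```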

5.
《Sensors and Actuators B: Chemical》, 2003, 90(1-3): 132-138
An optical fibre sensor for the continuous monitoring of gastric carbon dioxide is described, based on a sensing layer whose colour depends on the CO2 concentration. The CO2-sensitive layer consists basically of a dye/quaternary ammonium ion pair dissolved in a thin layer of ethylcellulose. The sensor was thoroughly characterised in the laboratory, and its performance was compared with that of Tonocap, the instrument based on gastric tonometry, which is the present method for detecting the partial pressure of gastric carbon dioxide. Its measurement range of 0–150 hPa, its accuracy of ±2.5 hPa, and its response time of less than 1 min satisfy the physicians' requirements for clinical application. The clinical tests, carried out on volunteers and on intensive care patients, showed that the developed sensor is clearly superior to the sensor presently available on the market: thanks to its short response time, the optical fibre sensor is able to detect rapid changes in pCO2 that were previously unobservable for lack of a tool with which to measure them.

6.
7.
Many recent papers have dealt with the application of feedforward neural networks to financial data processing. This powerful neural model can implement very complex nonlinear mappings, but when outputs are not available or clustering of patterns is required, unsupervised models such as self-organising maps are more suitable. The present work shows the capabilities of self-organising feature maps for the analysis and representation of financial data and as an aid in financial decision-making. For this purpose, we analyse the Spanish banking crisis of 1977–1985 and the Spanish economic situation in 1990 and 1991 using this unsupervised model. Emphasis is placed on the analysis of the synaptic weights, which is fundamental for delimiting regions on the map, such as bankrupt or solvent regions, where similar companies are clustered. The time evolution of the companies and other important conclusions can be drawn from the resulting maps.

Characters and symbols used and their meaning:
- nx: x dimension of the neuron grid, in number of neurons
- ny: y dimension of the neuron grid, in number of neurons
- n: dimension of the input vector, number of input variables
- (i, j): indices of a neuron on the map
- k: index of the input variables
- w_ijk: synaptic weight that connects the k-th input with the (i, j) neuron on the map
- W_ij: weight vector of the (i, j) neuron
- x_k: k-th component of the input vector
- X: input vector
- α(t): learning rate
- α_o: starting learning rate
- α_f: final learning rate
- R(t): neighbourhood radius
- R_0: starting neighbourhood radius
- R_f: final neighbourhood radius
- t: iteration counter
- t_rf: number of iterations until reaching R_f
- t_f: number of iterations until reaching α_f
- h(·): lateral interaction function
- σ: standard deviation
- ∀: for every
- d(x, y): distance between the vectors x and y
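Using the notation in the list above, one Kohonen training pass might be sketched as follows; the grid size, decay schedules and the Gaussian form of h(·) are common SOM choices, assumed here rather than taken from the paper.

```python
# Sketch: self-organising map training with decaying learning rate alpha(t)
# and shrinking neighbourhood radius R(t), as in the notation above.
import numpy as np

rng = np.random.default_rng(2)
nx, ny, n = 8, 8, 4                       # grid size and input dimension
W = rng.uniform(size=(nx, ny, n))         # synaptic weights w_ijk
X = rng.uniform(size=(200, n))            # e.g. financial ratios per company

t_f = 1000
for t in range(t_f):
    alpha = 0.5 * (1 - t / t_f)           # decaying learning rate alpha(t)
    R = max(1.0, nx / 2 * (1 - t / t_f))  # shrinking neighbourhood radius R(t)
    x = X[rng.integers(len(X))]
    # Best-matching unit: the neuron minimising d(x, W_ij).
    d = np.linalg.norm(W - x, axis=2)
    bi, bj = np.unravel_index(d.argmin(), d.shape)
    # Update every neuron, weighted by a Gaussian lateral interaction h(.).
    for i in range(nx):
        for j in range(ny):
            h = np.exp(-((i - bi) ** 2 + (j - bj) ** 2) / (2 * R ** 2))
            W[i, j] += alpha * h * (x - W[i, j])
```

After training, neurons whose weight vectors resemble bankrupt companies cluster together, which is what lets the map be partitioned into bankrupt and solvent regions.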

8.
Generating prediction rules for liquefaction through data mining
Prediction of liquefaction is an important subject in geotechnical engineering. It is also a complex problem, as liquefaction depends on many different physical factors, and the relations between these factors are highly non-linear and complex. Several approaches have been proposed in the literature for the modeling and prediction of liquefaction, most of them based on classical statistical methods and neural networks. In this paper, a new approach based on classification data mining is proposed for the first time in the literature for liquefaction prediction. The proposed approach extracts accurate classification rules from neural networks via ant colony optimization. The extracted classification rules take the form of IF-THEN rules that can be easily understood by humans. The proposed algorithm is also compared with several other data mining algorithms and is shown to be very effective and accurate in predicting liquefaction.
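For illustration, an extracted rule might take the following IF-THEN form; the predicates (SPT blow count, peak ground acceleration) and thresholds are hypothetical, not the rules reported in the paper.

```python
# Illustration of the human-readable IF-THEN rule form produced by
# classification rule mining; values below are invented.
def liquefaction_rule(spt_n: float, peak_accel: float) -> str:
    # IF SPT blow count is low AND peak ground acceleration is high
    # THEN predict liquefaction, ELSE predict no liquefaction.
    if spt_n < 15 and peak_accel > 0.25:
        return "liquefaction"
    return "no liquefaction"

print(liquefaction_rule(spt_n=10, peak_accel=0.3))   # -> liquefaction
```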

9.
This research explores a specific step in the Knowledge Discovery in Databases (KDD) process: data mining. The actual data mining process deals significantly with prediction, estimation, classification, pattern recognition and the development of association rules. This analysis therefore depends heavily on the accuracy of the database and on the sample data chosen for model training and testing. Data mining is based on searching the concatenation of multiple databases that usually contain some amount of missing data along with a variable percentage of inaccurate data, pollution, outliers and noise. The issue of missing data must be addressed, since ignoring this problem can introduce bias into the models being evaluated and lead to inaccurate data mining conclusions. The objective of this research is to address the effects of the neural network's sigmoid function on KDD in the presence of imprecise data, using a three-factor ANOVA test and Tukey's Honestly Significant Difference statistic.

10.
A recent novel approach to the visualisation and analysis of datasets, one which is particularly applicable to those of high dimension, is discussed in the context of real applications. A feed-forward neural network is utilised to effect a topographic, structure-preserving, dimension-reducing transformation of the data, with an additional facility to incorporate different degrees of associated subjective information. The properties of this transformation are illustrated on synthetic and real datasets, including the 1992 UK Research Assessment Exercise for funding in higher education. The method is compared and contrasted with established techniques for feature extraction, and related to topographic mappings, the Sammon projection and the statistical field of multidimensional scaling.

11.
The fundamental issue for automatic geometric tolerance analysis is the representation model, which should, in conjunction with CAD models, accurately and completely represent the GD&T specification according to the GD&T standards. Furthermore, such a representation model should facilitate GD&T validation and tolerance analysis. Most GD&T representation models proposed so far are specific to a particular tolerance analysis method. Common tolerance analysis methods are the min/max chart, Monte Carlo simulation and multivariate regions. This paper proposes a semantic GD&T model that can be used with any of these methods. The model is a super constraint-tolerance-feature graph (SCTF-Graph). This paper demonstrates how the SCTF-Graph model can represent all the tolerance types in the standards and can contain all the information needed for tolerance analysis: nominal geometry (i.e. trimmed features in this research), constraints, tolerances, degrees of freedom (DoFs) to be controlled, assembly hierarchy, and their respective inter-relationships. The paper discusses the content of the model, how it can be automatically created from a CAD model containing GD&T information (e.g. an attributed B-Rep model), and the implementation of such a model, along with some case studies.

12.
The clinical process often involves comparing how one set of measurements relates to previous, similar data and using this information to take decisions about possible courses of action, often with insufficient data to make meaningful calculations of probabilities. Self-organising maps are useful devices for data visualisation. To illustrate how visualisation with self-organising maps might be used in the clinical process, this paper describes the investigation of an osteoporosis data set using this technique. The data set had previously been used to show that backpropagation neural networks were capable of distinguishing between patients who had suffered a fracture and those who had not, using measured bone mineral density (BMD) values, illustrating the power of these networks to model relationships in data. However, we had realised that this was somewhat of an academic exercise, given that in reality a non-fracture case might be a fracture case waiting to happen. We felt it would be more productive to examine the data itself rather than model an imposed classification. As part of this investigation, the data set was examined using self-organising maps. From the results of the investigation, we conclude that it is possible to create a map, a compressed data representation, using BMD values, which may then be partitioned into low and high fracture-risk areas. Such a map may be a useful screening mechanism for detecting people at risk of osteoporotic fracture.

13.
Improving the efficiency of the carbon dioxide (CO2) capture process requires a good understanding of the intricate relationships among the parameters involved in the process. The objective of this research is to study the nature of the relationships among the key parameters using artificial neural networks and statistical analysis. Our modeling study used three years of operational data collected from the amine-based post-combustion CO2 capture process at the International Test Centre of CO2 Capture (ITC) located in Regina, Saskatchewan, Canada. The goal of CO2 capture is to capture and remove CO2 from industrial gas streams before they are released into the atmosphere. At ITC, an amine solution is used to absorb CO2 from the industrial flue gas, and the CO2 is then separated from the amine solution. The amine solution is recycled for further CO2 capture, and the CO2 stream can be stored or used for other industrial purposes. This paper describes the data modeling process using: (1) statistical analysis and (2) neural network modeling combined with sensitivity analysis. The results from the two modeling processes were compared from the perspectives of predictive accuracy, inclusion of parameters, support for exploration and explication of the problem space, modeling uncertainty, and involvement of experts. It was observed that neural network modeling combined with sensitivity analysis achieved much higher accuracy in predicting the CO2 production rate than the statistical study.

14.
To achieve high realism and efficiency, motion capture (MoCap) has been widely used in the field of computer animation. With the development of motion capture, a large number of motion capture databases have become available, which is significant for the reuse of motion data. However, due to the high number of degrees of freedom and the high capture frequency, the dimension of motion capture data is usually very high, which leads to low efficiency in data processing. How to process such high-dimensional data and design an efficient and effective retrieval approach has therefore become a challenge that cannot be ignored. In this paper, we first lay out some problems concerning the key techniques in motion capture data processing. Then the existing approaches are analysed and summarised. Finally, some future work is proposed.

15.
A remote sensing approach permits, for the first time, the derivation of a map of the carbon dioxide concentration in a volcanic plume. Airborne imaging remote sensing overcomes the typical difficulties associated with ground measurements and permits rapid, large views of volcanic processes together with measurements of volatile components exsolving from craters. Hyperspectral images in the infrared range (1900-2100 nm), where carbon dioxide absorption lines are present, have been used. These images were acquired during an airborne campaign by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) over the Pu`u `O`o vent situated in the Kilauea East Rift zone, Hawaii. Using a radiative transfer model to simulate the measured up-welling spectral radiance and applying the newly developed mapping technique, the carbon dioxide concentration map of the Pu`u `O`o vent plume was obtained. The integrated carbon dioxide flux rate was calculated, and a mean value of 396 ± 138 t d−1 was obtained. This result is in agreement, within the measurement errors, with the ground measurements taken during the airborne campaign.

16.
The Taguchi method is an efficient off-line quality control method in which experimental design is combined with quality loss. The method, which comprises the three stages of system design, parameter design, and tolerance design, is discussed in depth in Phadke [Quality engineering using robust design (1989)]. Most industrial applications solved by the Taguchi method are single-response problems. In the real world, however, more than one quality characteristic must be considered for most industrial products, i.e. most problems customers care about are multi-response problems. As a result, the Taguchi method is not appropriate for optimizing a multi-response problem. At present it is still necessary to rely on engineering judgment to optimize multi-response problems, which increases uncertainty during the decision-making process. Moreover, when uncontrollable causes occur, only a portion of an experiment can be completed, producing censored data. Traditional approaches to the analysis of censored data are computationally complicated. To overcome these two shortcomings, this article proposes an effective procedure based on a neural network (NN) and data envelopment analysis (DEA) to optimize multi-response problems. A case study on improving the quality of a hard disk driver in Su and Tong [Total Quality Management 8 (1997) 409] is solved with the proposed procedure. The result indicates that it yields a satisfactory solution.

17.
In this paper we evaluate two alternative CCS technologies at a coal-fired power plant from an investor's point of view. The first technology uses CO2 for enhanced oil recovery (EOR) paired with storage in deep saline formations (DSF), and the second merely stores CO2 in DSF. The paper updates and improves on an earlier publication by Tzimas et al. (2005). For projects of this type there are many sources of risk, three of which stand out: the price of electricity, the price of oil and the price of carbon allowances. We develop a general stochastic model that can be adapted to other projects, such as enhanced gas recovery (EGR) or industrial plants that use CO2 for either EOR or EGR with CCS. The model is calibrated with UK data and applied to help understand the conditions that generate the incentives needed for early investments in these technologies. Additionally, we analyse the risks of these investments. Investments with EOR and secondary DSF storage can only be profitable (NPV > 0) when there is a high long-term equilibrium oil price of more than $56.38/barrel. When the investment decision can be made at any time, i.e. there is an option value, the trigger value for optimal investment is significantly higher.
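A loose sketch of this kind of stochastic valuation: oil prices follow geometric Brownian motion and the project NPV is estimated by Monte Carlo. Every figure below is invented for illustration and is not the paper's calibrated UK model.

```python
# Sketch: Monte Carlo NPV of an EOR project under a GBM oil-price process.
import numpy as np

rng = np.random.default_rng(3)
years, paths = 20, 10_000
mu, sigma, p0 = 0.01, 0.25, 60.0          # drift, volatility, initial $/bbl
barrels_per_year, capex, opex, r = 1e6, 4e8, 2e7, 0.08

# Yearly GBM price paths: p_t = p0 * exp(cumsum((mu - sigma^2/2) + sigma*z)).
z = rng.normal(size=(paths, years))
prices = p0 * np.exp(np.cumsum((mu - sigma ** 2 / 2) + sigma * z, axis=1))

cash = prices * barrels_per_year - opex    # yearly net revenue per path
disc = (1 + r) ** -np.arange(1, years + 1)
npv = (cash * disc).sum(axis=1) - capex
print(f"mean NPV: ${npv.mean() / 1e6:.0f}M, P(NPV>0) = {(npv > 0).mean():.2f}")
```

A real-options trigger price would then be found by comparing the value of investing now against the value of waiting, which is why it exceeds the simple break-even price.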

18.
It has been widely accepted by many studies that non-linearity exists in financial markets and that neural networks can be effectively used to uncover this relationship. Unfortunately, many of these studies fail to consider alternative forecasting techniques, the relevance of input variables, or the performance of the models under different trading strategies. This paper introduces an information gain technique, used in machine learning for data mining, to evaluate the predictive relationships of numerous financial and economic variables. Neural network models for level estimation and classification are then examined for their ability to provide an effective forecast of future values. A cross-validation technique is also employed to improve the generalization ability of several models. The results show that the trading strategies guided by the classification models generate higher risk-adjusted profits than the buy-and-hold strategy, as well as those guided by the level-estimation-based forecasts of the neural network and linear regression models.
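A small sketch of the information-gain screen applied to a candidate predictor, assuming a binary up/down target; the split rule and synthetic data are illustrative only.

```python
# Sketch: information gain of a candidate financial variable with respect
# to a binary next-period-direction target; high-gain variables are kept.
import numpy as np

def entropy(y):
    p = np.bincount(y) / len(y)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def information_gain(x, y, threshold):
    # Gain from splitting the binary target y on x <= threshold.
    left, right = y[x <= threshold], y[x > threshold]
    weighted = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
    return entropy(y) - weighted

rng = np.random.default_rng(4)
x = rng.normal(size=1000)                               # a candidate variable
y = (x + 0.5 * rng.normal(size=1000) > 0).astype(int)   # next-period direction
print(f"IG at median split: {information_gain(x, y, np.median(x)):.3f}")
```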

19.
We report a feasibility study of monitoring gases continuously by means of an adapted piezo-optical dosimeter system. The test gas is carbon dioxide, whose presence is detected via a pH-indicator colour change. Sensing spots consist of m-Cresol Purple supported in a buffered ethyl cellulose matrix and deposited on an indium-tin-oxide-coated piezoelectric film. The change in absorbance is measured via illumination with an amber LED whose peak emission at 592 nm corresponds to the basic form of the reagent. A sensor was developed and calibrated over the range 0–2.5% CO2, the range of interest for personal monitoring. The 90% response times are 3–4 min, but the response rates could be used to trigger an alarm within the first few seconds of a CO2 increase.

20.
A method for processing scattered data from passive optical human motion capture is proposed. Based on the global information of the scattered optical motion capture data, a data processing algorithm built on a module-based piecewise linear model is presented. The piecewise linear model summarises the variation characteristics of the different body modules, which determines the matching priority of each module's data and the fitting function within each segment; this enables effective global, hierarchical prediction and tracking of the three-dimensional motion data of each module, as well as module-based denoising of noisy data. For missing motion data, a piecewise Newton interpolation fitting algorithm is proposed to fill the gaps reasonably. After optimization, the method requires no manual intervention during processing and meets real-time requirements.
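A minimal sketch of gap-filling with Newton divided-difference interpolation, in the spirit of the piecewise Newton fitting described above; the frame numbers and marker values are made up, and the paper's module-based segmentation is not reproduced.

```python
# Sketch: fill a gap in one marker coordinate with Newton divided-difference
# interpolation over the frames nearest the gap.
import numpy as np

def newton_interp(xs, ys, x):
    xs = np.asarray(xs, dtype=float)
    coef = np.array(ys, dtype=float)
    n = len(xs)
    # Build divided-difference coefficients in place.
    for j in range(1, n):
        coef[j:] = (coef[j:] - coef[j - 1:-1]) / (xs[j:] - xs[:n - j])
    # Horner-style evaluation of the Newton form at x.
    result = coef[-1]
    for j in range(n - 2, -1, -1):
        result = result * (x - xs[j]) + coef[j]
    return result

frames = np.array([10, 11, 12, 16, 17, 18])             # frames 13-15 missing
zs = np.array([0.82, 0.85, 0.88, 0.97, 0.99, 1.00])     # marker z-coordinate
for f in (13, 14, 15):
    print(f, round(newton_interp(frames, zs, f), 3))    # interpolated values
```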
