Similar Literature
20 similar documents found (search time: 31 ms)
1.
A neural network combined with a neural classifier is used for real-time forecasting of hourly maximum ozone in an urban atmosphere in central France. The neural model is based on the MultiLayer Perceptron (MLP) structure. The inputs of the statistical network are model output statistics of the weather predictions from the French National Weather Service; these predicted meteorological parameters are readily available through an air quality network. The forecasting lead time is (t + 24) h. Particular attention is paid to a regularisation method based on a Bayesian Information Criterion-like penalty and to the determination of a confidence interval for the forecasts. We present a statistical validation comparing various statistical models and a deterministic chemistry-transport model. In this experiment, the final neural network predicts the ozone peaks fairly well in terms of global fit, with an Agreement Index of 92%, a Mean Absolute Error and Root Mean Square Error of 15 μg m−3, and a Mean Bias Error of 5 μg m−3; for reference, the European threshold for hourly ozone is 180 μg m−3. To improve the exceedance forecasting, we replace the previous model with a neural classifier using a sigmoid function in the output layer. The output of this network lies in [0,1] and can be interpreted as the probability of exceeding the threshold. This model is compared to a classical logistic regression. With the neural classifier, the Success Index of forecasting is 78%, versus 65% to 72% with the classical MLPs. During the validation phase, in the summer of 2003, six of the seven ozone peaks above the threshold were detected. Finally, the model, called NEUROZONE, is now used in real time. New data are added to the training set each year at the end of September, the network is re-trained, and new regression parameters are estimated. This addresses one of the main difficulties of the training phase, namely the low frequency of ozone peaks above the threshold in this region.
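The exceedance model described above is, in essence, an MLP whose sigmoid output is read as a probability of exceeding the 180 μg m−3 threshold. Below is a minimal sketch of that idea with synthetic data and invented predictor variables (this is not the NEUROZONE code or its real inputs):

```python
# Minimal sketch: an MLP classifier whose output in [0, 1] is read as the
# probability of exceeding the 180 ug/m3 hourly ozone threshold.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Hypothetical model-output-statistics predictors (e.g. temperature, wind
# speed, previous-day ozone peak) used to forecast exceedance at t + 24 h.
X = rng.normal(size=(1000, 3))
o3_peak = 120 + 25 * X[:, 0] - 10 * X[:, 1] + 15 * X[:, 2] + rng.normal(0, 10, 1000)
y = (o3_peak > 180).astype(int)            # 1 = exceedance of the EU threshold

clf = MLPClassifier(hidden_layer_sizes=(8,), activation="logistic",
                    max_iter=2000, random_state=0).fit(X, y)
p_exceed = clf.predict_proba(X[:5])[:, 1]  # sigmoid output, read as P(exceedance)
print(p_exceed)
```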

2.
Automatically learning the graph structure of a single Bayesian network (BN) which accurately represents the underlying multivariate probability distribution of a collection of random variables is a challenging task. But obtaining a Bayesian solution to this problem, based on computing the posterior probability of the presence of any edge, any directed path between two variables, or any other structural feature, is a much more involved problem, since it requires averaging over all possible graph structures. For the former problem, recent advances have shown that search + score approaches find much more accurate structures if the search is constrained by a previously inferred skeleton (i.e., a relaxed structure with undirected edges, which can be inferred using local search-based methods). Based on similar ideas, we propose two novel skeleton-based approaches to approximate a Bayesian solution to the BN learning problem: a new stochastic search which tries to find directed acyclic graph (DAG) structures with a non-negligible score, and a new Markov chain Monte Carlo method over the DAG space. These two approaches are based on the same idea. In a first step, both employ a previously given skeleton and build a Bayesian solution constrained by this skeleton. In a second step, using the preliminary solution, they try to obtain a new Bayesian approximation, this time in an unconstrained graph space, which is the final outcome of the methods. As shown in the experimental evaluation, this new approach strongly boosts the performance of these two standard techniques, showing that the idea of employing a skeleton to constrain the model space is also a successful strategy for performing Bayesian structure learning of BNs.
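For a sense of how the skeleton-constrained first step might look, here is a minimal sketch of Metropolis-Hastings structure MCMC restricted to a given skeleton, with a toy binary dataset, a hand-written BDeu score, and an assumed pre-inferred skeleton (all stand-ins, not the authors' algorithm, which also includes the unconstrained second phase):

```python
# Minimal sketch: structure MCMC over DAGs whose edges must come from a
# pre-inferred skeleton, scored with a hand-rolled BDeu score on binary data.
import numpy as np
from itertools import product
from scipy.special import gammaln

rng = np.random.default_rng(1)
n, d = 500, 4
data = rng.integers(0, 2, size=(n, d))                 # toy binary dataset

def bdeu_node(node, parents, ess=1.0):
    """BDeu local score of one node given its parent set (binary variables)."""
    q = 2 ** len(parents)
    a_j, a_jk = ess / q, ess / (2 * q)
    s = 0.0
    for cfg in product((0, 1), repeat=len(parents)):
        mask = np.ones(n, bool)
        for p, v in zip(parents, cfg):
            mask &= data[:, p] == v
        counts = np.bincount(data[mask, node], minlength=2)
        s += gammaln(a_j) - gammaln(a_j + counts.sum())
        s += sum(gammaln(a_jk + c) - gammaln(a_jk) for c in counts)
    return s

def score(dag):                                        # decomposable total score
    return sum(bdeu_node(v, sorted(dag[v])) for v in range(d))

def is_acyclic(dag):                                   # Kahn's algorithm
    indeg = {v: len(dag[v]) for v in dag}
    ready, seen = [v for v in dag if indeg[v] == 0], 0
    while ready:
        v = ready.pop(); seen += 1
        for c in dag:
            if v in dag[c]:
                indeg[c] -= 1
                if indeg[c] == 0:
                    ready.append(c)
    return seen == d

skeleton = [(0, 1), (1, 2), (2, 3), (0, 2)]            # assumed pre-inferred
dag = {v: set() for v in range(d)}                     # node -> parent set
cur, steps = score(dag), 2000
freq = dict.fromkeys(skeleton, 0)
for _ in range(steps):
    u, v = skeleton[rng.integers(len(skeleton))]
    states = [(0, 0), (1, 0), (0, 1)]                  # absent, u->v, v->u
    now = (int(u in dag[v]), int(v in dag[u]))
    uv, vu = [s for s in states if s != now][rng.integers(2)]  # symmetric move
    prop = {k: set(p) for k, p in dag.items()}
    prop[v].discard(u); prop[u].discard(v)
    if uv: prop[v].add(u)
    if vu: prop[u].add(v)
    if is_acyclic(prop):
        new = score(prop)
        if np.log(rng.random()) < new - cur:           # Metropolis acceptance
            dag, cur = prop, new
    for a, b in skeleton:
        freq[(a, b)] += (a in dag[b]) or (b in dag[a])
print({e: f / steps for e, f in freq.items()})         # posterior edge probs
```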

3.
Automated quality control is a key aspect of industrial maintenance. In manufacturing processes, it is often done by monitoring relevant system parameters to detect deviations from normal behavior. Previous approaches define "normalcy" as statistical distributions for a given system parameter and detect deviations from normal by hypothesis testing. This paper develops an approach to manufacturing quality control using a newly introduced method: Bayesian Posteriors Updated Sequentially and Hierarchically (BPUSH). This approach outperforms previous methods, achieving reliable detection of faulty parts with low computational cost and low false alarm rates (∼0.1%). Finally, this paper shows that sample-size requirements for BPUSH fall well below typical sizes for comparable quality control methods, achieving True Positive Rates (TPR) > 99% using as few as n = 25 samples.
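As a flavor of sequential Bayesian updating in quality control, here is a minimal conjugate Beta-Binomial stand-in, not the BPUSH algorithm itself; the tolerance, alarm level, and true defect rate below are invented:

```python
# Minimal sketch: sequentially update a Beta posterior over a part-defect rate
# and flag a fault when the posterior probability of exceeding a tolerance
# becomes high.
import numpy as np
from scipy import stats

tol, alarm_level = 0.02, 0.99          # assumed tolerance and alarm threshold
a, b = 1.0, 1.0                        # uniform Beta(1, 1) prior
rng = np.random.default_rng(0)
for i, defective in enumerate(rng.random(25) < 0.10, start=1):  # true rate 10%
    a, b = a + defective, b + (not defective)      # conjugate update per sample
    p_out_of_spec = 1 - stats.beta.cdf(tol, a, b)  # P(rate > tol | data so far)
    if p_out_of_spec > alarm_level:
        print(f"fault flagged after {i} samples (P = {p_out_of_spec:.3f})")
        break
print(f"final posterior mean defect rate: {a / (a + b):.3f}")
```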

4.
5.
The development of a thermal switch based on arrays of liquid–metal micro-droplets is presented. Prototype thermal switches are assembled from a silicon substrate on which an array of 1600 30-μm liquid–metal micro-droplets is deposited. The liquid–metal micro-droplet array makes and breaks contact with a second bare silicon substrate. A gap between the two silicon substrates is filled with either air at 760 Torr, air at 0.5 Torr, or xenon at 760 Torr. Heat transfer and thermal resistance across the thermal switches are measured for "on" (make contact) and "off" (break contact) conditions using guard-heated calorimetry. The figure of merit for a thermal switch, the ratio of "off"-state to "on"-state thermal resistance, Roff/Ron, is 129 ± 43 for a xenon-filled thermal switch that opens 100 μm and 60 ± 17 for a 0.5 Torr air-filled thermal switch that opens 25 μm. These thermal resistance ratios are shown to be markedly higher than values of Roff/Ron for a thermal switch based on contact between polished silicon surfaces. Transient temperature measurements for the liquid–metal micro-droplet switches indicate thermal switching times of less than 100 ms. Switch lifetimes are found to exceed one million cycles.

6.
Applied Ergonomics, 2011, 42(1): 138–145
Introduction: Subjective workload measures are usually administered in a visual–manual format, either electronically or by paper and pencil. However, vocal responses to spoken queries may sometimes be preferable, for example when experimental manipulations require continuous manual responding or when participants have certain sensory/motor impairments. In the present study, we evaluated the acceptability of hands-free administration of two subjective workload questionnaires – the NASA Task Load Index (NASA-TLX) and the Multiple Resources Questionnaire (MRQ) – in a surgical training environment where manual responding is often constrained.
Method: Sixty-four undergraduates performed fifteen 90-s trials of laparoscopic training tasks (five replications of three tasks – cannulation, ring transfer, and rope manipulation). Half of the participants provided workload ratings using a traditional paper-and-pencil version of the NASA-TLX and MRQ; the remainder used a vocal (hands-free) version of the questionnaires. A follow-up experiment extended the evaluation of the hands-free version to actual medical students in a Minimally Invasive Surgery (MIS) training facility.
Results: The NASA-TLX was scored in two ways – (1) the traditional procedure, using participant-specific weights to combine its six subscales, and (2) a simplified procedure – the NASA Raw Task Load Index (NASA-RTLX) – using the unweighted mean of the subscale scores. Comparison of the scores obtained from the hands-free and written administration conditions yielded coefficients of equivalence of r = 0.85 (NASA-TLX) and r = 0.81 (NASA-RTLX). Equivalence estimates for the individual subscales ranged from r = 0.78 ("mental demand") to r = 0.31 ("effort"). Both administration formats and scoring methods were equally sensitive to task and repetition effects. For the MRQ, the coefficient of equivalence between the hands-free and written versions was r = 0.96 when tested on undergraduates. However, the sensitivity of the hands-free MRQ to task demands (partial η² = 0.138) was substantially less than that of the written version (partial η² = 0.252). This potential shortcoming of the hands-free MRQ did not seem to generalize to medical students, who showed robust task effects when using the hands-free MRQ (partial η² = 0.396). A detailed analysis of the MRQ subscales also revealed differences that may be attributable to a "spillover" effect, in which participants' judgments about the demands of completing the questionnaires contaminated their judgments about the primary surgical training tasks.
Conclusion: Vocal versions of the NASA-TLX are acceptable alternatives to standard written formats when researchers wish to obtain global workload estimates. However, care should be taken when interpreting the individual subscales if the object is to make comparisons between studies or conditions that use different administration modalities. For the MRQ, the vocal version was less sensitive to experimental manipulations than its written counterpart; however, when medical students rather than undergraduates used the vocal version, the instrument's sensitivity increased well beyond that obtained with any other combination of administration modality and instrument in this study. Thus, the vocal version of the MRQ may be an acceptable workload assessment technique for selected populations, and it may even be a suitable substitute for the NASA-TLX.
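For illustration, a minimal sketch of the two scoring procedures compared above; the ratings and pairwise-comparison weights are invented example values:

```python
# Minimal sketch: traditional weighted NASA-TLX (weights from 15 pairwise
# comparisons between the six subscales) versus the raw NASA-RTLX (unweighted
# mean of the six subscale ratings).
ratings = {"mental": 70, "physical": 30, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 25}  # 0-100 scales
wins = {"mental": 5, "physical": 1, "temporal": 3,
        "performance": 2, "effort": 4, "frustration": 0}  # wins out of 15 pairs

# Traditional TLX: weight each subscale by its pairwise-comparison wins.
tlx = sum(ratings[s] * wins[s] for s in ratings) / 15
# NASA-RTLX: simple unweighted mean of the six subscale ratings.
rtlx = sum(ratings.values()) / len(ratings)
print(f"weighted TLX = {tlx:.1f}, RTLX = {rtlx:.1f}")
```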

7.
In this paper we propose a novel method for brain SPECT image feature extraction based on the empirical mode decomposition (EMD). The proposed method, applied to assist the diagnosis of Alzheimer's disease (AD), selects the most discriminant voxels for support vector machine (SVM) classification from the transformed EMD feature space. In particular, combinations of frequency components of the EMD transformation are found to retain regional differences in functional activity that are characteristic of AD. In general, the EMD is a fully data-driven, unsupervised, additive signal decomposition and does not need any a priori defined basis system. Several experiments were carried out on a balanced SPECT database collected from the "Virgen de las Nieves" Hospital in Granada (Spain), containing 96 recordings, yielding up to 100% maximum accuracy and 93.52 ± 4.92% on average, with an acceptable bias in the cross-validation (CV) estimate of the true error, in separating AD patients from normal controls. In this way we approach the "gold standard" labeling, outperforming recently proposed CAD systems.
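A minimal sketch of the EMD-features-plus-SVM pipeline, assuming the PyEMD package (pip name EMD-signal) and using synthetic 1-D signals in place of SPECT voxel data:

```python
# Minimal sketch: decompose signals into intrinsic mode functions (IMFs),
# build simple energy features from them, and classify with a linear SVM.
import numpy as np
from PyEMD import EMD                 # assumed dependency: EMD-signal package
from sklearn.svm import SVC

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)

def features(signal, n_imfs=3):
    """Energy of the first few IMFs as a simple EMD feature vector."""
    imfs = EMD().emd(signal)
    e = [np.sum(imf ** 2) for imf in imfs[:n_imfs]]
    return np.pad(e, (0, n_imfs - len(e)))      # pad if fewer IMFs are found

# Two synthetic classes differing in a mid-frequency component (only mimicking
# "regional differences in functional activity").
X, y = [], []
for label in (0, 1):
    for _ in range(30):
        s = np.sin(2 * np.pi * 5 * t) + label * 0.8 * np.sin(2 * np.pi * 20 * t)
        X.append(features(s + 0.2 * rng.normal(size=t.size)))
        y.append(label)

clf = SVC(kernel="linear").fit(X, y)
print("training accuracy:", clf.score(X, y))
```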

8.
Many problems arise when characterizing a type 1 diabetic patient, such as model mismatch, noisy inputs, measurement errors, and large variability in the glucose profiles. In this work we introduce a new identification method based on interval analysis, where variability and model imprecision are represented as parametric uncertainty in an interval model. We propose minimizing a composite cost index comprising (1) the width of the glucose envelope predicted by the interval model and (2) a Hausdorff-distance-based prediction error with respect to the envelope. The method is evaluated with clinical data consisting of insulin and blood glucose reference measurements from 12 patients, with four different lunchtime postprandial periods each. In a "leave-one-day-out" cross-validation study, model prediction capabilities for validation days were encouraging (medians: relative error = 5.45%, samples predicted = 57%, prediction width = 79.1 mg/dL). Using the days with maximum patient variability as identification days resulted in improved prediction capabilities for the identified model (medians: relative error = 0.03%, samples predicted = 96.8%, prediction width = 101.3 mg/dL). The feasibility of interval model identification in the context of type 1 diabetes was demonstrated.
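A minimal sketch of the composite cost under simplifying assumptions: the envelope is given as per-sample lower/upper bounds, and a one-sided worst-case miss distance stands in for the Hausdorff-based error term (all signals below are synthetic):

```python
# Minimal sketch: composite cost = envelope width + penalty on measurements
# falling outside the predicted glucose envelope.
import numpy as np

def composite_cost(lower, upper, measured, w=1.0):
    width = np.mean(upper - lower)                     # term (1): envelope width
    outside = np.maximum(lower - measured, 0) + np.maximum(measured - upper, 0)
    pred_err = outside.max()                           # term (2): worst-case miss
    return width + w * pred_err, np.mean(outside == 0) # cost, fraction predicted

t = np.arange(0, 240, 5)                               # postprandial minutes
lower = 100 + 60 * np.exp(-((t - 60) / 50) ** 2) - 15  # fake lower bound, mg/dL
upper = lower + 40                                     # fake upper bound
measured = (100 + 60 * np.exp(-((t - 65) / 55) ** 2)
            + np.random.default_rng(0).normal(0, 5, t.size))
cost, covered = composite_cost(lower, upper, measured)
print(f"cost = {cost:.1f}, samples predicted = {100 * covered:.0f}%")
```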

9.
This study consists of two parts. (i) Experimental analysis: shot peening is a method to improve the fatigue resistance of metal parts by creating regions of residual stress. In this study, the residual stresses induced in C-1020 steel specimens by various shot peening intensities are investigated using the electrochemical layer removal method. The best result is obtained at a peening intensity of 0.26 mmA, where the residual stress in the shot-peened material is −276 MPa, while the maximum residual stress obtained is −363 MPa at a peening intensity of 0.43 mmA. (ii) Mathematical modelling analysis: an artificial neural network (ANN) is proposed to determine the residual stresses for various shot peening intensities, using the results of the experimental analysis. A back-propagation learning algorithm with two variants and a logistic sigmoid transfer function were used in the network. The limited experimental measurements served as training and test data. The best-fitting training data set was obtained with four neurons in the hidden layer, which made it possible to predict residual stress with accuracy at least as good as the experimental error over the whole experimental range. After training, the R² values were 0.996112 and 0.99896 for specimens annealed before peening and shot-peened only, respectively; for the test data, these values were 0.995858 and 0.999143. As the modelling results show, the calculated residual stresses are within acceptable uncertainties.
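A minimal sketch of the kind of network described – one hidden layer of four logistic-sigmoid neurons regressing residual stress on peening intensity – trained here on synthetic data, not the paper's measurements:

```python
# Minimal sketch: small back-propagation network mapping peening intensity to
# residual stress, with a four-neuron logistic hidden layer.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

strength = np.linspace(0.1, 0.5, 20).reshape(-1, 1)     # Almen intensity, mmA
stress = -150 - 500 * strength[:, 0] + np.random.default_rng(0).normal(0, 8, 20)

net = MLPRegressor(hidden_layer_sizes=(4,), activation="logistic",
                   solver="lbfgs", max_iter=5000, random_state=0)
net.fit(strength, stress)
print("R^2 =", r2_score(stress, net.predict(strength)))
```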

10.
Intensive care is one of the most important components of the modern medical system, and healthcare professionals need to use intensive care resources effectively. Mortality prediction models help physicians decide which patients require intensive care most and which do not. The Simplified Acute Physiology Score II (SAPS II) is one of the most popular mortality scoring systems currently available. This study retrospectively collected data on 496 patients admitted to intensive care units from 2000 to 2001; the average patient age was 59.96 ± 1.83 years and 23.8% of patients died before discharge. We used these data as training data and constructed an exponential Bayesian mortality prediction model by combining a Bayesian statistical model (BSM) with a genetic algorithm (GA). The optimal weights and parameters were determined with the GA. Furthermore, we prospectively collected data on 142 patients for testing the new model; the average patient age in this group was 57.80 ± 3.33 years and 21.8% of patients died before discharge. The mortality prediction power of the new model was better than that of SAPS II (p < 0.001). The new model combining BSM and GA can handle both binary and continuous data. The predicted mortality is high if the patient's Glasgow Coma Scale score is less than 5.
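A minimal sketch of fitting mortality-model weights with a GA; here a plain logistic score on synthetic data stands in for the paper's exponential Bayesian model:

```python
# Minimal sketch: genetic algorithm searching for the weights of a simple
# logistic mortality score (selection + Gaussian mutation, with elitism).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))                 # e.g. age, GCS, physiology score
true_w = np.array([1.0, -2.0, 0.5])           # hidden "true" weights
y = (1 / (1 + np.exp(-X @ true_w)) > rng.random(300)).astype(float)

def fitness(w):                               # log-likelihood of the model
    p = 1 / (1 + np.exp(-X @ w))
    return np.sum(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

pop = rng.normal(size=(40, 3))                # initial population of weights
for _ in range(200):
    f = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(f)[-20:]]        # keep the fitter half
    children = parents[rng.integers(20, size=20)] + rng.normal(0, 0.1, (20, 3))
    pop = np.vstack([parents, children])      # elitism + mutated offspring
best = pop[np.argmax([fitness(w) for w in pop])]
print("GA-estimated weights:", best.round(2))
```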

11.
To reduce the response time of resistive oxygen sensors based on porous cerium oxide thick films, it is important to identify the factors controlling the response. The pressure modulation method (PMM) was used to find the rate-limiting step of the sensor response. This method measures the amplitude of the sensor output, H(f), for a sine-wave modulation of the oxygen partial pressure at constant frequency f. In PMM, a "break" response time – the minimum period within which the sensor still responds accurately – can be measured. Three points were examined: (1) simulated PMM calculations were carried out using a model of the porous thick film in which spherical particles are connected in a three-dimensional network; (2) the sensor response speed was measured experimentally using PMM; and (3) the diffusion coefficient and surface reaction coefficient were estimated by comparing experiment and calculation. The plot of log f versus log H(f) in the high-f region was found to have a slope of approximately −0.5 for both porous thick films and non-porous thin films when the rate-limiting step was diffusion. Calculations showed that the response time of a porous thick film is 1/20 that of a non-porous thin film when the grain diameter of the porous thick film equals the thickness of the non-porous thin film. At 973 K, the "break" response time (tb) of the resistive oxygen sensor was measured to be 109 ms. It was concluded that the response of the resistive oxygen sensor prepared in this study was strongly diffusion-controlled at 923–1023 K, since the experimental slope of log f versus log H(f) in the high-f region was approximately −0.5. At 923–1023 K, the diffusion coefficient of oxygen vacancies in porous ceria was expressed as DV (m² s−1) = 5.78 × 10−4 exp(−1.94 eV/kT). At 1023 K, the surface reaction coefficient K was found to exceed 10−4 m/s.
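The reported Arrhenius expression can be evaluated directly; a short worked computation over the studied temperature range:

```python
# Worked evaluation of the reported Arrhenius expression for the
# oxygen-vacancy diffusion coefficient, D_V = 5.78e-4 * exp(-1.94 eV / kT).
import math

k_eV = 8.617333262e-5              # Boltzmann constant, eV/K

def D_V(T):                        # m^2/s, from the paper's fitted expression
    return 5.78e-4 * math.exp(-1.94 / (k_eV * T))

for T in (923, 973, 1023):
    print(f"T = {T} K: D_V = {D_V(T):.2e} m^2/s")
```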

12.
13.
Excessive implant–bone relative micromotion is detrimental to both the primary and the long-term stability of a hip stem in cementless total hip arthroplasty (THA). The shape and geometry of the implant are known to influence the resulting post-operative micromotion. Finite element (FE)-based design evaluations are labor-intensive and computationally expensive, especially when a large number of designs must be evaluated for an optimal outcome. This study presents a predictive mathematical model based on a back-propagation neural network (BPNN) that relates femoral stem design parameters to post-operative implant–bone micromotion, without recourse to tedious nonlinear FE analysis. The characterization of the design parameters was based on our earlier study on shape optimization of femoral implants. The BPNN predicts implant–bone relative micromotion much faster than FE analysis. Using the BPNN-predicted output as the objective function, a genetic algorithm (GA)-based search was performed to minimize post-operative micromotion under simulated physiological loading conditions. The micromotion predicted by the neural network correlated significantly with the FE-calculated results (correlation coefficient R² = 0.80 for training; R² = 0.82 for testing). The optimal stems, evolved from a GA search over 12,500 designs, offered improved primary stability compared with the initial TriLock (DePuy) design. Our predicted results favour lateral-flared designs with rectangular proximal transverse sections and greater stem sizes.
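A minimal sketch of the surrogate-assisted search: a neural network learns a design-parameters-to-micromotion map from training data (synthetic here, not the paper's FE results), and a simple GA then minimizes the network's prediction:

```python
# Minimal sketch: NN surrogate trained on fake "FE" data, then a GA searches
# the design space using the surrogate as the objective function.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
designs = rng.uniform(0, 1, size=(200, 3))        # normalized stem parameters
micromotion = (50 + 80 * (designs[:, 0] - 0.7) ** 2
               + 60 * (designs[:, 1] - 0.4) ** 2 + 20 * designs[:, 2]
               + rng.normal(0, 2, 200))           # fake "FE" micromotion, um

surrogate = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                         random_state=0).fit(designs, micromotion)

pop = rng.uniform(0, 1, size=(50, 3))
for _ in range(100):                              # GA: select, mutate, clip
    f = surrogate.predict(pop)
    parents = pop[np.argsort(f)[:25]]             # keep lowest micromotion
    children = parents + rng.normal(0, 0.05, parents.shape)
    pop = np.clip(np.vstack([parents, children]), 0, 1)
best = pop[np.argmin(surrogate.predict(pop))]
pred = float(surrogate.predict(best[None])[0])
print("best design:", best.round(3), "predicted micromotion (um):", round(pred, 1))
```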

14.
Information and Computation, 2007, 205(11): 1575–1607
We propose a new approximation technique for hybrid automata. Given any hybrid automaton H, we call Approx(H, k) the polynomial hybrid automaton obtained by approximating each formula ϕ in H with the formula ϕk obtained by replacing the functions in ϕ with their Taylor polynomials of degree k. We prove that Approx(H, k) is an over-approximation of H. We study conditions ensuring that, for any ϵ > 0, there exists a k0 such that, for all k > k0, the "distance" between any vector satisfying ϕk and at least one vector satisfying ϕ is less than ϵ. We also study conditions ensuring that, for any ϵ > 0, there exists a k0 such that, for all k > k0, the "distance" between any configuration reached by Approx(H, k) in n steps and at least one configuration reached by H in n steps is less than ϵ.
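As a concrete illustration of the approximation (our example, not one from the paper): replacing sin x in a formula with its degree-3 Taylor polynomial gives, on a bounded domain, an explicit distance bound that shrinks as k grows:

$$\sin x \;\approx\; T_3(x) = x - \frac{x^3}{6}, \qquad \bigl|\sin x - T_3(x)\bigr| \le \frac{|x|^5}{5!}$$

So for |x| ≤ 1, every point satisfying the polynomial formula lies within 1/120 ≈ 0.0083 of one satisfying the original; choosing a larger k pushes this bound below any prescribed ϵ > 0.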

15.
Passengers' perception of an airport's level of service (LOS) may have a significant impact on promoting or discouraging future tourism and business activities. In this study we examine this problem but, unlike traditional statistical analyses, apply a newer method, the dominance-based rough set approach (DRSA), to an airport service survey. A set of "if … then …" decision rules is used as the preference model. Passengers indicate their perception of airport LOS by rating a set of criteria/attributes. The proposed method provides practical information that should help airport planners, designers, operators, and managers develop LOS improvement strategies. The model was implemented using survey data from a large sample of customers of an international airport in Taiwan.
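For illustration, a DRSA rule induced from such a survey has the form (the attributes here are hypothetical, not taken from the paper): "if (waiting time at check-in is at most 'moderate') and (terminal cleanliness is at least 'good'), then (overall LOS is at least 'satisfactory')". That is, conditions are expressed as dominance relations on ordered rating scales rather than as exact attribute values.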

16.
It is demonstrated that using an ensemble of neural networks for routine land cover classification of multispectral satellite data can lead to a significant improvement in classification accuracy. Specifically, the AdaBoost.M1 algorithm is applied to a sequence of three-layer feed-forward neural networks. To overcome the drawback of long training times for each network in the ensemble, the networks are trained with an efficient Kalman filter algorithm. On the basis of statistical hypothesis tests, classification performance on multispectral imagery is compared with that of maximum likelihood and support vector machine classifiers. Good generalization accuracies are obtained with computation times on the order of 1 h or less. The algorithms involved are described in detail, and a software implementation in the ENVI/IDL image analysis environment is provided.
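A minimal sketch of AdaBoost.M1 over small feed-forward networks on synthetic "multispectral" data; weighted resampling stands in because sklearn's MLPClassifier accepts no sample weights, and the paper's Kalman-filter training and ENVI/IDL implementation are not reproduced:

```python
# Minimal sketch: AdaBoost.M1 with MLP base learners via weighted resampling.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                        # 4 "spectral bands"
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.5).astype(int)

w = np.full(len(X), 1 / len(X))
ensemble, alphas = [], []
for m in range(5):                                   # 5 boosting rounds
    idx = rng.choice(len(X), size=len(X), p=w)       # weighted resample
    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000,
                        random_state=m).fit(X[idx], y[idx])
    pred = net.predict(X)
    err = np.sum(w * (pred != y))
    if err >= 0.5:                                   # AdaBoost.M1 stopping rule
        break
    alpha = np.log((1 - err) / max(err, 1e-10))
    w *= np.exp(alpha * (pred != y))                 # up-weight the mistakes
    w /= w.sum()
    ensemble.append(net); alphas.append(alpha)

# Weighted vote of the ensemble members (labels mapped to +/-1).
votes = sum(a * np.where(net.predict(X) == 1, 1, -1)
            for a, net in zip(alphas, ensemble))
print("ensemble training accuracy:", np.mean((votes > 0) == (y == 1)))
```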

17.
Using a 2 × 3 mixed between-/within-subjects experiment (N = 102), we tested how the presence of online comments affects self-other differences and perceptions of media bias, as well as factors predicting subjects' likelihood of commenting on an online news story. We found that (a) the presence of comments lowers self-other differences and consequently attenuates the third-person effect, and (b) perceptions of media bias significantly predict likelihood of commenting. Additionally, we found that subjects were more likely to comment on stories they found biased against their position, as a form of corrective action, and more likely to share and like stories they found biased in favor of their position, as a form of promotional action.

18.
It is widely recognized that effective ranking methods for relational data (e.g., tuples) enable users to overcome the limitations of the traditional Boolean retrieval model and the difficulty of writing structured queries. To determine the rank of a tuple, term-frequency-based methods such as tf × idf (term frequency × inverse document frequency) schemes have commonly been adopted in the literature, simply treating a tuple as a single document. However, in many cases we have observed that tf × idf schemes may not produce effective rankings or specific orderings for relational data with categorical attributes, which are pervasive today. To support fundamental aspects of relational data, we apply notions from correlation analysis to estimate the strength of relationships between queries and data. This paper proposes a probabilistic ranking model that exploits statistical relationships present in relational data with categorical attributes. Given a set of query terms, information on attribute values correlated with the query terms is used to estimate the relevance of a tuple to the query. To quantify this information, we compute the strength of the dependency between correlated attribute values on a Bayesian network. Moreover, we avoid the prohibitive cost of computing insignificant ranking features by adopting a limited node-independence assumption. Our probabilistic ranking model is domain-independent and leverages only data statistics, without prior knowledge such as user query logs. Experimental results show that our approach improves ranking effectiveness on real-world datasets and has reasonable query processing efficiency compared with related work.
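A loose illustrative sketch (not the paper's model) of the core idea: rank tuples with categorical attributes by how strongly their values co-occur with the query terms, using add-one-smoothed probability estimates from the relation itself:

```python
# Minimal sketch: PMI-style scoring of attribute values against query terms.
import math

rows = [                                        # toy relation: (make, type, color)
    ("toyota", "suv", "red"), ("toyota", "sedan", "red"),
    ("ford", "suv", "blue"), ("ford", "truck", "blue"),
    ("toyota", "suv", "blue"), ("bmw", "sedan", "black"),
]
query = {"toyota"}

def relevance(row):
    score = 0.0
    with_q = [r for r in rows if query & set(r)]    # rows containing a query term
    for v in row:
        if v in query:                              # exact query match
            score += 1.0
            continue
        # P(value | query present) vs. its marginal, both add-one smoothed.
        p_cond = (sum(v in r for r in with_q) + 1) / (len(with_q) + 2)
        p_marg = (sum(v in r for r in rows) + 1) / (len(rows) + 2)
        score += math.log(p_cond / p_marg)          # reward query-correlated values
    return score

for row in sorted(rows, key=relevance, reverse=True):
    print(f"{relevance(row):6.2f}  {row}")
```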

19.
In offline settings, authentic behavior has frequently been linked to increased well-being. Social network sites (SNSs) provide a new venue for authenticity, yet the effects of online authenticity are largely unknown. The present study investigated the reciprocal effects of authenticity on SNSs and the psychological well-being of SNS users in a two-wave longitudinal study (N = 374). The results demonstrate that online authenticity had a positive longitudinal effect on three indicators of subjective well-being. The data further illustrate that this beneficial effect of SNS use is not equally accessible to all users: participants with low levels of well-being were less likely to feel authentic on SNSs and to benefit from authenticity. We propose that the results can be explained in light of a "positivity bias in SNS communication" that favors positive forms of authenticity over negative ones.

20.
The implicit Colebrook–White equation has been widely used to estimate the friction factor for turbulent fluid flow in rough pipes. In this paper, a state-of-the-art review of the currently available explicit alternatives to the Colebrook–White equation is presented. An extensive comparison test was established on a 20 × 500 grid for a wide range of relative roughness (ε/D) and Reynolds number (R) values (1 × 10⁻⁶ ≤ ε/D ≤ 5 × 10⁻²; 4 × 10³ ≤ R ≤ 10⁸), covering a large portion of the turbulent flow zone in the Moody diagram. Based on a comprehensive error analysis, the (ε/D, R) pairs at which the maximum absolute and maximum relative errors occur are identified. The best of these approximations provided friction factor estimates characterized by a mean absolute error of 5 × 10⁻⁴, a maximum absolute error of 4 × 10⁻³, a mean relative error of 1.3%, and a maximum relative error of 5.8% over the entire range of ε/D and R values. For practical purposes, the complete results for the maximum and mean relative errors versus the 20 sets of ε/D values are also given in two comparative figures. The error analysis of these approximations makes it possible to identify the most accurate formula among all the previous explicit models for estimating the turbulent-flow friction factor. Comparative analysis of the mean relative error profiles revealed that the ranking of the six best-fitting equations examined was in good agreement with the best-model selection criteria reported in the recent literature, for all performed simulations.
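To make the implicit/explicit distinction concrete, a minimal sketch comparing the iteratively solved Colebrook–White equation with one well-known explicit alternative, the Swamee–Jain formula (our illustration at a single (ε/D, R) point; the paper reviews many more approximations):

```python
# Minimal sketch: implicit Colebrook-White (fixed-point iteration on
# x = 1/sqrt(f)) versus the explicit Swamee-Jain approximation.
import math

def colebrook(rel_rough, Re, tol=1e-12):
    """Friction factor from Colebrook-White via fixed-point iteration."""
    x = 8.0                                  # x = 1/sqrt(f), initial guess
    while True:
        x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / Re)
        if abs(x_new - x) < tol:
            return 1.0 / x_new ** 2
        x = x_new

def swamee_jain(rel_rough, Re):
    """Explicit approximation to Colebrook-White."""
    return 0.25 / math.log10(rel_rough / 3.7 + 5.74 / Re ** 0.9) ** 2

eps_D, Re = 1e-4, 1e6
f_cw, f_sj = colebrook(eps_D, Re), swamee_jain(eps_D, Re)
print(f"Colebrook f = {f_cw:.6f}, Swamee-Jain f = {f_sj:.6f}, "
      f"relative error = {abs(f_sj - f_cw) / f_cw:.2%}")
```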
