Similar Documents
20 similar documents found (search time: 765 ms)
1.
A review of the fracture energy and toughness data for dense ceramics at 22 °C shows maxima commonly occurring as a function of grain size. Such maxima are most pronounced for non-cubic materials, where they are often associated with microcracking and R-curve effects, especially in oxides, but they often also occur at too fine a grain size to be associated with microcracking. The maxima are usually much more limited, but frequently definitive, for cubic materials. In a few cases only a decrease with increasing grain size at larger grain sizes, or no dependence on grain size, is found, but the extent to which these cases reflect a lack of sufficient data is uncertain. In porous ceramics, fracture toughness and especially fracture energy commonly show less porosity dependence than strength and Young's modulus. In some cases little or no decrease, or possibly a temporary increase, in fracture energy or toughness is seen with increasing porosity at low or intermediate porosity levels, in contrast to the continuous decreases for strength and Young's modulus. It is suggested that such (widely neglected) variations reflect bridging in porous bodies. The above maxima as a function of grain size and the reduced decreases with increased porosity are less pronounced for fracture toughness than for fracture energy, since the former reflects the effects of the latter and of Young's modulus, which usually has no dependence on grain size but a substantial dependence on porosity. In general, tests with cracks closer to the natural flaw size give results more consistent with strength behaviour. Implications of these findings are discussed.

2.
This article presents a cumulative sum (CUSUM) monitoring approach for count-data time series. A seasonal integer-valued generalized autoregressive conditional heteroscedasticity (INGARCH(1,1)) time series model with Poisson deviates is used to develop a likelihood ratio test formulation to detect changes in the process accounting for temporal correlations and seasonality. Simulation studies show that the proposed CUSUM monitoring approach can provide significantly improved performance in applications where serial correlation or seasonality is prevalent. A case study with real traffic crash counts is presented to illustrate the application of the proposed methodology for roadway safety improvement.
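The INGARCH-based likelihood-ratio CUSUM in the abstract is beyond a short sketch, but its core recursion is the standard Poisson CUSUM, which accumulates the log-likelihood-ratio score for a shift from rate λ0 to λ1 and signals when the statistic crosses a threshold. A minimal stdlib sketch (the rates, threshold, and data below are illustrative, not the article's seasonal formulation):

```python
import math

def poisson_cusum(counts, lam0, lam1, h):
    """One-sided Poisson CUSUM: accumulate the log-likelihood-ratio
    increment for a shift from rate lam0 to lam1; signal when the
    statistic exceeds threshold h. Returns (statistics, first_alarm_index)."""
    k = math.log(lam1 / lam0)      # score earned per observed count
    d = lam1 - lam0                # drift correction subtracted each step
    c, stats, alarm = 0.0, [], None
    for i, x in enumerate(counts):
        c = max(0.0, c + x * k - d)
        stats.append(c)
        if alarm is None and c > h:
            alarm = i
    return stats, alarm

# In-control counts near lam0 = 4, then a sustained shift toward lam1 = 8.
data = [4, 3, 5, 4, 4, 3, 9, 8, 10, 9, 11]
stats, alarm = poisson_cusum(data, lam0=4.0, lam1=8.0, h=3.0)
```

The statistic stays near zero through the in-control stretch and climbs once the elevated counts begin.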

3.
Many countries have developed and expanded greenery at their tourist sites to attract international visitors, and this planning now contributes significantly to their economies. The next task is to accommodate tourists with sufficient arrangements and a green, clean environment; this is only possible if the number of upcoming tourist arrivals is accurately predicted. Accurate prediction is not easy, however, as empirical evidence shows that tourist-arrival data often contain linear, nonlinear, and seasonal patterns. A traditional model such as the seasonal autoregressive fractionally integrated moving average (SARFIMA) handles seasonal trends and long-memory seasonality, whereas the artificial neural network (ANN) model deals better with nonlinear time series. To obtain better forecasts, this study combines the merits of the SARFIMA and ANN models and proposes a hybrid SARFIMA-ANN model. We then use the proposed model to predict tourist arrivals in New Zealand, Australia, and London. Empirical results show that the proposed hybrid model outperforms the traditional SARFIMA and ANN models in predicting tourist arrivals. Moreover, these results can be generalized to predict tourist arrivals in any country or region with a complicated data pattern.
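Such hybrids typically follow a two-stage recipe: fit the linear/seasonal model, train the nonlinear model on its residuals, and add the two forecasts. A schematic sketch with toy stand-ins (a seasonal-naive rule for the SARFIMA part and a residual smoother for the ANN part, both purely illustrative):

```python
def hybrid_forecast(series, season, linear_fit, nonlinear_fit):
    """Two-stage hybrid in the spirit of SARFIMA-ANN combinations:
    1) a linear/seasonal model produces fitted values,
    2) a nonlinear model is fitted to the linear model's residuals,
    3) the final forecast is the sum of both components.
    `linear_fit` and `nonlinear_fit` are stand-ins for the paper's
    SARFIMA and ANN; each maps a series to one-step fitted values."""
    linear_hat = linear_fit(series, season)
    residuals = [y - f for y, f in zip(series, linear_hat)]
    resid_hat = nonlinear_fit(residuals)
    return [l + r for l, r in zip(linear_hat, resid_hat)]

# Toy stand-ins: seasonal-naive "linear" part, moving-average "nonlinear" part.
def seasonal_naive(series, season):
    return [series[t - season] if t >= season else series[t]
            for t in range(len(series))]

def residual_smoother(res):
    return [sum(res[max(0, t - 2):t + 1]) / len(res[max(0, t - 2):t + 1])
            for t in range(len(res))]
```

Swapping in real SARFIMA and ANN fitters preserves the same additive structure.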

4.
In manufacturing applications, we often encounter process transitions due to a changeover in the production or perhaps an unknown perturbation. The main process improvement goal is to shorten the transition time by monitoring the process in order to quickly identify the start and end of the transition period and by actively adjusting the process during the transition. To address these issues, we propose a transition monitoring and adjustment methodology. A polymer process is used to illustrate this methodology. Using simulation, we characterize the impact of the transition adjustment on the effectiveness of monitoring. We show that the adaptive monitoring procedure is robust to small transition adjustments, thus supporting a complementary application of process monitoring and process adjustment to improve process transitions. Copyright © 2005 John Wiley & Sons, Ltd.

5.
Peihua Qiu & Lu You, Technometrics, 2020, 62(2): 236-248
Abstract

In practice, we often need to sequentially monitor the performance of individual subjects or processes, so that interventions can be made in a timely manner to avoid unpleasant consequences (e.g., strokes or airplane crashes) once the longitudinal patterns of their performance variables deviate significantly from the regular patterns of well-functioning subjects or processes. Some statistical methods are available to handle this dynamic screening (DS) problem. Because the performance of DS methods is related to their signal times, the conventional false positive rate (FPR) and false negative rate (FNR) cannot effectively measure their performance. So far, there is no metric in the literature that properly measures the performance of DS methods. In this article, we aim to fill this gap by proposing a new performance evaluation approach, called the process monitoring receiver operating characteristic curve, which properly combines the signal times with (FPR, FNR). Numerical examples and theoretical justifications show that this approach provides an effective tool for measuring the performance of DS methods. Supplementary materials for this article are available online.

6.
A good process understanding is the foundation for process optimization, process monitoring, end-point detection, and estimation of the end-product quality. Performing good process measurements and the construction of process models will contribute to a better process understanding. To improve the process knowledge it is common to build process models. These models are often based on first principles such as kinetic rates or mass balances. These types of models are also known as hard or white models. White models are characterized by being generally applicable but often having only a reasonable fit to real process data. Other commonly used types of models are empirical or black-box models such as regression and neural nets. Black-box models are characterized by having a good data fit but they lack a chemically meaningful model interpretation. Alternative models are grey models, which are combinations of white models and black-box models. The aim of a grey model is to combine the advantages of both black-box models and white models. In a qualitative case study of monitoring industrial batches using near-infrared (NIR) spectroscopy, it is shown that grey models are a good tool for detecting batch-to-batch variations and an excellent tool for process diagnosis compared to common spectroscopic monitoring tools.

7.
Seasonal changes in the environment are known to be important drivers of population dynamics, giving rise to sustained population cycles. However, it is often difficult to measure the strength and shape of seasonal forces affecting populations. In recent years, statistical time-series methods have been applied to the incidence records of childhood infectious diseases in an attempt to estimate seasonal variation in transmission rates, as driven by the pattern of school terms. In turn, school-term forcing was used to show how susceptible influx rates affect the interepidemic period. In this paper, we document the response of measles dynamics to distinct shifts in the parameter regime using previously unexplored records of measles mortality from the early decades of the twentieth century. We describe temporal patterns of measles epidemics using spectral analysis techniques, and point out a marked decrease in birth rates over time. Changes in host demography alone do not, however, suffice to explain epidemiological transitions. By fitting the time-series susceptible–infected–recovered model to measles mortality data, we obtain estimates of seasonal transmission in different eras, and find that seasonality increased over time. This analysis supports theoretical work linking complex population dynamics and the balance between stochastic and deterministic forces as determined by the strength of seasonality.

8.
Processes that arise naturally, for example, from manufacturing or the environment, often exhibit complicated autocorrelation structures. When monitoring such a process for changes in variance, accounting for that structure is critical. While charts for monitoring the variance of processes of independent observations and some specific autocorrelated processes have been proposed in the past, the chart presented in this article can handle a general stationary process. The performance of the proposed chart was examined through simulations for the first‐order autoregressive and first‐order autoregressive‐moving average processes and demonstrated with examples. Copyright © 2014 John Wiley & Sons, Ltd.

9.
In this paper, we describe the theory underlying an empirical Bayesian approach to monitoring two or more process characteristics simultaneously. If the data is continuous and multivariate in nature, often the multivariate normal distribution can be used to model the process. Then, using Bayesian theory, we develop techniques to implement empirical Bayes process monitoring of the multivariable process. Lastly, an example is given to illustrate the use of our techniques. Copyright © 2001 John Wiley & Sons, Ltd.

10.
In this paper, control chart pattern recognition using artificial neural networks is presented. An important motivation of this research is the growing interest in intelligent manufacturing systems, specifically in the area of Statistical Process Control (SPC). Online automated process analysis is an important area of research since it allows the interfacing of process control with Computer Integrated Manufacturing (CIM) techniques. Two back-propagation artificial neural networks are used to model traditional Shewhart SPC charts and identify out-of-control situations as specified by the Western Electric Statistical Quality Control Handbook, including instability patterns, trends, cycles, mixtures and systematic variation. Using back-propagation, patterns are presented to the network, and training results in a suitable model for the process. The implication of this research is that out-of-control situations can be detected automatically and corrected within a closed-loop environment. This research is the first step in an automated process monitoring and control system based on control chart methods. Results indicate that the back-propagation neural networks are very accurate in identifying control chart patterns.

11.
Transmission of dengue fever depends on a complex interplay of human, climate and mosquito dynamics, which often change in time and space. It is well known that its disease dynamics are highly influenced by multiple factors including population susceptibility to infection as well as by microclimates: small-area climatic conditions which create environments favourable for the breeding and survival of mosquitoes. Here, we present a novel machine learning dengue forecasting approach, which, dynamically in time and space, identifies local patterns in weather and population susceptibility to make epidemic predictions at the city level in Brazil, months ahead of the occurrence of disease outbreaks. Weather-based predictions are improved when information on population susceptibility is incorporated, indicating that immunity is an important predictor neglected by most dengue forecast models. Given the generalizability of our methodology to any location or input data, it may prove valuable for public health decision-making aimed at mitigating the effects of seasonal dengue outbreaks in locations globally.

12.
In many service and manufacturing industries, process monitoring involves multivariate data, instead of univariate data. In these situations, multivariate charts are employed for process monitoring. Very often when the mean vector shifts to an out-of-control situation, the exact shift size is unknown; hence, multivariate charts for monitoring a range of the mean shift sizes in the mean vector are adopted. In this paper, directionally sensitive weighted adaptive multivariate CUSUM charts are developed for monitoring a range of the mean shift sizes. Directionally sensitive charts are useful in situations where the aim lies in monitoring either an increasing or a decreasing shift in the mean vector of the quality characteristics of interest. Monte Carlo simulation is used to compute the run length characteristics in comparing the sensitivities of the proposed and existing multivariate CUSUM charts. In general, the directionally sensitive and weighted adaptive features enhance the sensitivities of the proposed multivariate CUSUM charts in comparison with the existing multivariate CUSUM charts without the adaptive feature or those that are directionally invariant. It is also found that the variable sampling interval feature enhances the sensitivities of the proposed and existing charts as compared to their fixed sampling interval counterparts. The implementation of the proposed charts in detecting upward and downward shifts in the in-control process mean vector is demonstrated using two different datasets.

13.
Functional data characterize the quality or reliability performance of many manufacturing processes. As can be seen in the literature, such data are informative in process monitoring and control for nanomachining, for ultra-thin semiconductor fabrication, and for antenna, steel-stamping, or chemical manufacturing processes. Many functional data in manufacturing applications show complicated transient patterns such as peaks representing important process characteristics. Wavelet transforms are popular in the computing and engineering fields for handling these types of complicated functional data. This article develops a wavelet-based statistical process control (SPC) procedure for detecting ‘out-of-control’ events that signal process abnormalities. Simulation-based evaluations of average run length indicate that our new procedure performs better than extensions from well-known methods in the literature. More importantly, unlike recent SPC research on linear profile data for monitoring global changes of data patterns, our methods focus on local changes in data segments. In contrast to most of the SPC procedures developed for detecting a known type of process change, our idea of updating the selected parameters adaptively can handle many types of process changes whether known or unknown. Finally, due to the data-reduction efficiency of wavelet thresholding, our procedure can deal effectively with large data sets.
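The key mechanism here, wavelet coefficients concentrating local transients so that thresholding isolates them, can be illustrated with a single-level Haar transform. A simplified sketch (threshold and profiles are illustrative; the article's procedure additionally adapts its parameters over time):

```python
import math

def haar_details(profile):
    """Single-level Haar wavelet detail coefficients of an even-length profile."""
    return [(profile[2 * i] - profile[2 * i + 1]) / math.sqrt(2)
            for i in range(len(profile) // 2)]

def wavelet_spc_statistic(profile, threshold):
    """Hard-threshold the detail coefficients and return the energy of the
    survivors: a local transient (peak) concentrates in a few large
    coefficients, so the statistic reacts to local rather than global change."""
    details = haar_details(profile)
    kept = [d for d in details if abs(d) > threshold]
    return sum(d * d for d in kept)

flat = [1.0] * 8                      # in-control profile: no local structure
peaked = [1, 1, 1, 5, 1, 1, 1, 1.0]   # a local peak in one segment
```

A flat profile yields zero energy, while the peak survives thresholding and drives the statistic up.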

14.
Traditional Duncan‐type models for cost‐efficient process monitoring often inflate type I error probability. Nevertheless, controlling the probability of type I error or false alarms is one of the key issues in sequential monitoring of specific process characteristics. To this end, researchers often recommend economic‐statistical designs. Such designs assign an upper bound on type I error probability to avoid excessive false alarms while achieving cost optimality. In the context of process monitoring, there is a plethora of research on parametric approaches of controlling type I error probability along with the cost optimization. In the nonparametric setup, most of the existing works on process monitoring address one of the two issues but not both simultaneously. In this article, we present two distribution‐free cost‐efficient Shewhart‐type schemes for sequentially monitoring process location with restricted false alarm probability, based, respectively, on the sign and Wilcoxon rank‐sum statistics. We consider the one‐sided shift in location parameter in an unknown continuous univariate process. However, one can easily extend our proposed schemes to monitor two‐sided process shifts. We evaluate and compare the actual performance of the two monitoring schemes employing extensive Monte Carlo simulation. We investigate the effects of the size of the reference sample and the false alarm constraint. Finally, we provide two illustrative examples, each based on a realistic situation in the industry.
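The Wilcoxon-based scheme reduces to charting the rank-sum of each incoming sample against a reference sample and signalling when it crosses a limit chosen from the null distribution. A stdlib sketch (the control limit below is illustrative, not a calibrated false-alarm bound):

```python
def rank_sum(reference, test):
    """Wilcoxon rank-sum statistic of the test sample against a reference
    sample: pool both, rank them (average ranks for ties), and sum the
    ranks occupied by the test sample."""
    pooled = sorted((v, src) for src in (0, 1)
                    for v in (reference if src == 0 else test))
    ranks, i = {}, 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        avg = (i + j + 1) / 2          # average of the tied ranks i+1..j
        for k in range(i, j):
            ranks[k] = avg
        i = j
    return sum(ranks[k] for k, (_, src) in enumerate(pooled) if src == 1)

def shewhart_rank_chart(reference, samples, ucl):
    """Distribution-free Shewhart-type scheme (a sketch): signal an upward
    location shift whenever a sample's rank-sum exceeds `ucl`; in practice
    `ucl` is chosen from the null distribution to bound the false-alarm rate."""
    return [rank_sum(reference, s) > ucl for s in samples]
```

With reference sample [1, 2, 3, 4], a shifted sample like [5, 6] takes the top ranks and signals, while a sample within the reference range does not.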

15.
Control charts are recognized as one of the most important tools for statistical process control (SPC), used for monitoring any abnormal deviations in the state of manufacturing processes. However, the effectiveness of control charts is strictly dependent on statistical assumptions that in real applications are frequently violated. In contrast, neural networks (NNs) have excellent noise tolerance in real time, requiring no hypothesis on the statistical distribution of monitored processes. This feature makes NNs promising tools for quality control. In this paper, a self-organizing map (SOM)-based monitoring approach is proposed for enhancing the monitoring of processes. It is capable of providing a comprehensive and quantitative assessment value for the current process state, achieved by minimum quantization error (MQE) calculation. Based on MQE values over time series, a novel MQE chart is developed for monitoring process changes. The aim of this research is to analyse the performance of the MQE chart under the assumption that predictable abnormal patterns are not available. To this aim, the performance of the MQE chart in manufacturing processes (including non-correlated, auto-correlated and multivariate processes) is evaluated. The results indicate that the MQE chart may be a promising tool for quality control.
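The MQE statistic itself is simple: the distance from a new observation to its best-matching unit in the SOM codebook trained on in-control data. A sketch with a hand-made codebook standing in for a trained map (all numbers illustrative; a real chart would set its limit from in-control MQE values, e.g. an empirical percentile):

```python
import math

def mqe(codebook, x):
    """Minimum quantization error: Euclidean distance from observation x to
    its best-matching unit in the (already trained) SOM codebook. A small
    MQE means x resembles the in-control training data; a large MQE flags
    novelty, i.e. a potential process change."""
    return min(math.dist(w, x) for w in codebook)

# Toy codebook standing in for a SOM trained on in-control data.
codebook = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
in_control = mqe(codebook, (0.9, 0.1))     # close to a unit -> small MQE
out_of_control = mqe(codebook, (4.0, 4.0)) # far from every unit -> large MQE
```

Charting `mqe` over time yields the MQE chart described in the abstract, without assuming any distribution for the process data.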

16.
As an alternative to the conventional R&D innovation, business models are becoming an important locus of lucrative innovation. Due to the rise of the internet economy, business model innovation today often involves technological innovation, and this can be evidenced by business method (BM) patents. Of several mechanisms that stimulate business model innovation, the role of BM patents is probably most noteworthy. To understand how BM patents play their roles in business model innovation, we need to observe the long-term knowledge flow process. Therefore, we aim to identify dynamic patterns of knowledge flows driven by BM patents using a hidden Markov model (HMM) and patent citation data as an input. An HMM is a popular statistical tool for modelling a wide range of time series data. Since it does not have any general theoretical limit in regard to statistical pattern classification, an HMM is capable of characterizing various temporal patterns. A case study is conducted with the BM patents in 16 USPTO subclasses related to secure transactions. After patterns of the individual subclasses are generated, they are grouped into four major patterns through clustering analysis and their characteristics are closely examined. Our analysis reveals that the BM patents for secure transaction in general play increasingly important roles in advancement of business models, facilitating the transfer of knowledge, and thus can provide valuable insights in formulating more effective strategies or policies for business model innovation.

17.
The exponentially weighted moving average (EWMA) control schemes have been proven to be very effective at monitoring random shifts or disturbances in a given process. However, EWMA is somewhat insensitive to shifts at the process startup. Consequently, the fast initial response (FIR) feature, or headstart, has often been used to increase the sensitivity of EWMA at the process startup. Although the FIR feature significantly increases the sensitivity of the EWMA at startup, its effect diminishes after a few observations, thereby making FIR-based schemes less sensitive than the classical EWMA post-startup. In this paper, we propose the dynamic generalized fast initial response for EWMA control schemes for monitoring processes with startup and post-startup problems. The proposed scheme is highly sensitive at startup and has a sensitivity equal to that of the classical EWMA post-startup. Average run length-based performance comparisons of the proposed chart and its counterparts are presented. Real-life examples are offered to demonstrate the applications of the proposed scheme.
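One well-known way to give an EWMA a decaying headstart is a FIR adjustment that narrows the control limits at startup and lets them relax to their asymptotic width. A sketch in that spirit (the paper's dynamic generalized FIR may differ; the adjustment form and all parameter values here are illustrative):

```python
import math

def ewma_fir(data, lam, target, sigma, L, f=0.5, a=0.3):
    """EWMA chart with a FIR-style limit adjustment: the control-limit
    half-width is multiplied by 1 - (1 - f)**(1 + a*(t - 1)), which equals
    f at t = 1 and decays toward 1, boosting startup sensitivity without
    changing post-startup behaviour. Returns (z, lcl, ucl, signal) per point."""
    width = L * sigma * math.sqrt(lam / (2 - lam))   # asymptotic half-width
    z, out = target, []
    for t, x in enumerate(data, start=1):
        z = lam * x + (1 - lam) * z
        adj = 1 - (1 - f) ** (1 + a * (t - 1))
        lcl, ucl = target - adj * width, target + adj * width
        out.append((z, lcl, ucl, not lcl <= z <= ucl))
    return out
```

An early shift that the full-width limits would miss can be caught by the narrowed startup limits, which is exactly the FIR rationale.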

18.
Owing to usage, environment and aging, the condition of a system deteriorates over time. Regular maintenance is often conducted to restore its condition and to prevent failures from occurring. In this kind of situation, the process is considered to be stable, and thus statistical process control charts can be used to monitor the process. The monitoring can help in making a decision on whether further maintenance is worthwhile or whether the system has deteriorated to a state where regular maintenance is no longer effective. When modeling a deteriorating system, lifetime distributions with increasing failure rate are more appropriate. However, for a regularly maintained system, the failure time distribution can be approximated by the exponential distribution with an average failure rate that depends on the maintenance interval. In this paper, we adopt a modification for a time‐between‐events control chart, i.e. the exponential chart, for monitoring the failure process of a maintained Weibull distributed system. We study the effect of changes in the scale parameter of the Weibull distribution, while the shape parameter remains at the same level, on the sensitivity of the exponential chart. This paper illustrates an approach of integrating maintenance decisions with statistical process monitoring methods. Copyright © 2008 John Wiley & Sons, Ltd.
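The exponential time-between-events chart rests on probability limits for T ~ Exp(1/MTBF0): a failure gap shorter than the lower limit signals deterioration. A minimal sketch (the in-control MTBF, alarm rate, and failure times are illustrative):

```python
import math

def exponential_t_chart(times, mtbf0, alpha=0.0027):
    """Time-between-events (exponential) chart: with in-control mean time
    between failures mtbf0, the probability limits solve
    P(T < LCL) = alpha/2 and P(T > UCL) = alpha/2 for T ~ Exp(1/mtbf0),
    i.e. LCL = -mtbf0*ln(1 - alpha/2) and UCL = -mtbf0*ln(alpha/2).
    A failure time below LCL signals deterioration (shorter gaps)."""
    lcl = -mtbf0 * math.log(1 - alpha / 2)   # lower probability limit
    ucl = -mtbf0 * math.log(alpha / 2)       # upper probability limit
    return lcl, ucl, [t < lcl for t in times]
```

Because the exponential distribution is heavily right-skewed, probability limits rather than symmetric 3-sigma limits are the natural choice here.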

19.
Unnatural patterns exhibited on process mean and variance control charts can be associated separately with different assignable causes. Quick and accurate knowledge of the type of control chart patterns (CCPs), whether due to process mean or variance, can greatly facilitate identification of assignable causes. Over the past few decades, however, process mean and variance CCPs have seldom been studied simultaneously in the statistical process control literature. This study proposes a hybrid learning‐based model for simultaneous monitoring of process mean and variance CCPs. In this model, a self‐organizing map neural network‐based quantization error control chart is responsible for detecting the out‐of‐control signals, a discrete particle swarm optimization‐based selective ensemble of back‐propagation networks is responsible for classifying the detected out‐of‐control signals into categories of mean and/or variance abnormality, and two discrete particle swarm optimization‐based selective ensembles of learning vector quantization networks are responsible for further identifying the detected mean and variance out‐of‐control signals as one of the specific CCP types, respectively. Extensive simulations indicate that the proposed hybrid learning‐based model outperforms other existing approaches in detecting mean and variance changes, while also being capable of CCP recognition. In addition, a case study is conducted to demonstrate how the proposed hybrid learning‐based model can function as an effective tool for monitoring mean and variance simultaneously. Copyright © 2013 John Wiley & Sons, Ltd.

20.
Controlling and reducing process variability is an essential aspect of maintaining product or service quality. Even though most practitioners believe that increasing process variability is often a more severe concern than a shift in location, few studies have paid attention to the cost-efficient monitoring of process variability. Some of the existing studies addressed the dispersion aspect, assuming that the quality characteristic is Gaussian. Non-normal and complex distributions are not uncommon in modern production processes, time-to-event processes, or processes involving service quality. Unfortunately, we find no literature on economically designed nonparametric (distribution-free) schemes for monitoring process variability. This article introduces two Shewhart-type cost-optimized nonparametric schemes for monitoring the variability of any unknown but continuous process to fill the research gap. The proposed monitoring schemes are based on two popular two-sample rank statistics for differences in scale parameters, known as the Ansari–Bradley statistic and the Mood statistic. We assess their actual performance for a set of process scenarios and illustrate the design along with the implementation steps. We discuss a practical problem related to product quality management. It is expected that the proposed schemes will be beneficial in various industrial operations.
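Of the two rank statistics named, the Mood statistic is the easier to sketch: sum the squared deviations of the test sample's pooled ranks from the mid-rank, so observations pushed into either tail (greater dispersion) inflate it. A stdlib sketch that ignores ties for brevity (samples and any chart limit would be set from the null distribution in practice):

```python
def mood_statistic(reference, test):
    """Mood two-sample scale statistic: rank the pooled data and sum the
    squared deviations of the test sample's ranks from the mid-rank
    (N + 1)/2. Large values indicate greater dispersion in the test
    sample relative to the reference sample. Ties are ignored here."""
    pooled = sorted((v, src) for src in (0, 1)
                    for v in (reference if src == 0 else test))
    mid = (len(pooled) + 1) / 2
    return sum((rank - mid) ** 2
               for rank, (_, src) in enumerate(pooled, start=1) if src == 1)
```

A dispersed test sample such as [1, 10] against reference [4, 5, 6, 7] occupies both extreme ranks and scores high, while a concentrated sample scores near zero; a Shewhart-type scheme simply charts this statistic against a cost-optimized limit.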


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号