Similar Articles
 20 similar articles found (search time: 31 ms)
1.
A Rough-Fuzzy Neural Classifier   Cited by: 2 (self-citations: 0, external: 2)
A new rough-set-encoded fuzzy neural classifier is introduced. Based on concepts from rough set theory, methods for knowledge encoding, attribute reduction, and classification-system simplification are discussed. Fuzzy membership functions map crisp input information to fuzzy variable information, addressing ill-defined data in classification and improving the nonlinear mapping capability of the system. A fuzzy inference method that incorporates importance factors of the system parameters, the network structure of the rough-fuzzy neural classifier, and a supervised least-mean-square-error training algorithm are proposed. The resulting rough-set-encoded fuzzy neural classifier offers a low-dimensional network structure, a simple learning algorithm, short training time, and rich nonlinear behavior.
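As a rough illustration of the fuzzification step described above, the Python sketch below maps a crisp input feature to membership degrees in three fuzzy sets; the Gaussian membership shape and the set centers are assumptions, since the abstract does not specify them.

```python
import numpy as np

def gaussian_membership(x, centers, sigma):
    """Map a crisp feature value to membership degrees in several fuzzy sets."""
    return np.exp(-((x - centers) ** 2) / (2.0 * sigma ** 2))

# Three fuzzy sets ("low", "medium", "high") per feature -- illustrative choice.
centers = np.array([0.0, 0.5, 1.0])
x = 0.62                      # a crisp, min-max normalized feature value
mu = gaussian_membership(x, centers, sigma=0.2)
print(dict(zip(["low", "medium", "high"], mu.round(3))))
```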

2.
An application of Kohonen's self-organizing map (SOM), learning-vector quantization (LVQ) algorithms, and the commonly used backpropagation neural network (BPNN) to predicting petrophysical properties from well-log data is presented. A modular artificial neural network (ANN), a complex network made up of a number of subnetworks, is introduced. In this approach, the SOM algorithm is applied first to classify the well-log data into a predefined number of classes. This gives an indication of the lithology in the well. The classes obtained from the SOM are then appended back to the training input logs for the training of a supervised LVQ. After training, the LVQ can be used to classify any unknown input logs. A set of BPNNs corresponding to the different classes is then trained. Once trained, the networks are used as the classification and prediction model for subsequent input data. Results from example studies using the proposed method show it to be fast and accurate compared with a single BPNN.
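The LVQ stage can be sketched compactly. The following minimal NumPy implementation of the classical LVQ1 update rule is a plausible stand-in (the abstract does not specify which LVQ variant or parameters were used): it moves the winning prototype toward same-class samples and away from different-class samples.

```python
import numpy as np

def train_lvq1(X, y, n_proto_per_class=2, lr=0.1, epochs=30, seed=0):
    """Minimal LVQ1: move the winning prototype toward (same class)
    or away from (different class) each training sample."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    protos, labels = [], []
    for c in classes:  # initialize prototypes from random samples of each class
        idx = rng.choice(np.where(y == c)[0], n_proto_per_class, replace=False)
        protos.append(X[idx]); labels.extend([c] * n_proto_per_class)
    W, Wy = np.vstack(protos).astype(float), np.array(labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(((W - X[i]) ** 2).sum(axis=1))   # winning prototype
            sign = 1.0 if Wy[j] == y[i] else -1.0
            W[j] += sign * lr * (X[i] - W[j])
        lr *= 0.9                                          # decay learning rate
    return W, Wy

def predict_lvq(W, Wy, X):
    return Wy[np.argmin(((W[None] - X[:, None]) ** 2).sum(-1), axis=1)]

# Toy usage on two Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
W, Wy = train_lvq1(X, y)
print("train acc:", (predict_lvq(W, Wy, X) == y).mean())
```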

3.
COVID-19 continues to spread rapidly around the world. It has significantly affected public health, the world economy, and people's lives, so there is a need to speed up diagnosis and precautions for COVID-19 patients. With the explosion of this pandemic, automated diagnosis tools based on medical images are needed to assist specialists. This paper presents a hybrid Convolutional Neural Network (CNN)-based classification and segmentation approach for COVID-19 detection from Computed Tomography (CT) images. The proposed approach classifies and segments COVID-19, pneumonia, and normal CT images. The classification stage is applied first to detect and classify the input medical CT images; the segmentation stage is then performed to distinguish between pneumonia and COVID-19 CT images. The classification stage is implemented with a simple and efficient CNN model comprising four convolutional (Conv) layers, four batch normalization layers, and four Rectified Linear Units (ReLUs). The Conv layers use 64, 32, 16, and 8 filters, respectively. A 2 × 2 window and a stride of 2 are employed in the four max-pooling layers. A soft-max activation function and a Fully-Connected (FC) layer perform the detection. For the segmentation process, the Simplified Pulse-Coupled Neural Network (SPCNN) is utilized in the proposed hybrid approach; the segmentation is based on salient object detection to localize the COVID-19 or pneumonia region accurately. To summarize the contributions of the paper: the classification process with a CNN model can serve as the first stage of a highly effective automated diagnosis system. Once the images are accepted by the system, further processing through segmentation can isolate the regions of interest, which can then be assessed either automatically or by experts. This strategy saves specialists considerable time and effort given the worldwide explosion of the COVID-19 pandemic. The proposed classification approach is evaluated in scenarios using 80%, 70%, or 60% of the data for training and 20%, 30%, or 40% for testing, respectively, achieving classification accuracies of 100%, 99.45%, and 98.55%. These results demonstrate the efficacy of the proposed approach for assisting specialists in automated medical diagnosis services.
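A minimal PyTorch sketch of the described classifier follows. The filter counts (64/32/16/8), batch normalization, ReLU, 2 × 2 max-pooling with stride 2, and the FC + soft-max head come from the abstract; the 3 × 3 kernels, grayscale 128 × 128 input, and all training details are assumptions. (For training with cross-entropy loss one would normally return logits rather than soft-max outputs.)

```python
import torch
import torch.nn as nn

class SimpleCovidCNN(nn.Module):
    """Four Conv-BN-ReLU-MaxPool stages with 64/32/16/8 filters, then an
    FC + soft-max head for 3 classes (COVID-19 / pneumonia / normal)."""
    def __init__(self, in_ch=1, n_classes=3, img_size=128):
        super().__init__()
        chans, blocks = [in_ch, 64, 32, 16, 8], []
        for cin, cout in zip(chans[:-1], chans[1:]):
            blocks += [nn.Conv2d(cin, cout, kernel_size=3, padding=1),
                       nn.BatchNorm2d(cout),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(kernel_size=2, stride=2)]
        self.features = nn.Sequential(*blocks)
        feat = img_size // 16                 # four 2x2 pools halve H and W each time
        self.head = nn.Linear(8 * feat * feat, n_classes)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return torch.softmax(self.head(x), dim=1)

model = SimpleCovidCNN()
print(model(torch.randn(2, 1, 128, 128)).shape)   # torch.Size([2, 3])
```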

4.
A novel method is proposed for classifying single outdoor color images as sunny or cloudy using a random forest. First, a sky-frequency histogram feature and a shadow-energy feature are defined and their computation is given, and a transmittance feature is introduced into weather classification; these three features are combined with existing features to form a candidate weather feature set. Second, a Fisher-Random Forest feature-importance measure is defined to select the weather features. Finally, the selected features are fed as vectors into a random forest classifier to classify outdoor images as sunny or cloudy. Experimental results show that, compared with other methods, this method achieves higher accuracy and better generality.
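A hedged scikit-learn sketch of the select-then-classify pipeline follows; it substitutes the random forest's built-in impurity importance for the paper's Fisher-Random Forest measure, and the feature matrix and labels are synthetic stand-ins for real weather features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical candidate feature matrix: rows are images, columns are weather
# features (sky histogram bins, shadow energy, transmittance, ...).
rng = np.random.default_rng(0)
X = rng.random((500, 12))
y = (X[:, 0] + 0.3 * X[:, 3] + 0.1 * rng.random(500) > 0.7).astype(int)  # 1 = sunny

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Keep the most important features, then retrain on the reduced set.
keep = np.argsort(forest.feature_importances_)[::-1][:5]
selected = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, keep], y)
print("kept feature indices:", keep, " train acc:", selected.score(X[:, keep], y))
```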

5.
李付伟  孙昊  王利强  钱怡  张新昌 《包装工程》2016,37(13):201-206
Objective: In production and inventory management at packaging enterprises, price differences among packaging materials are small, so the ABC inventory classification method does not manage them well, while classification by the Analytic Hierarchy Process (AHP) is insufficiently objective and its matrix consistency checks are cumbersome. Methods: The fuzzy analytic hierarchy process (FAHP) was used to classify packaging materials, and the results were compared with those of the methods currently in use. Results: FAHP classifies packaging-material inventory more objectively and systematically. Conclusion: FAHP is more scientific and effective than ABC classification and AHP.
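A small NumPy sketch of one common FAHP variant (Buckley's fuzzy geometric mean with triangular fuzzy numbers and centroid defuzzification) is shown below; the comparison matrix, criteria, and material scores are illustrative, and the paper's exact FAHP formulation may differ.

```python
import numpy as np

# Triangular fuzzy pairwise-comparison matrix (l, m, u) for three criteria
# (e.g. annual usage value, supply risk, criticality) -- illustrative numbers.
F = np.array([
    [[1, 1, 1],       [2, 3, 4],     [4, 5, 6]],
    [[1/4, 1/3, 1/2], [1, 1, 1],     [1, 2, 3]],
    [[1/6, 1/5, 1/4], [1/3, 1/2, 1], [1, 1, 1]],
])

# Buckley's method: fuzzy geometric mean of each row, normalize, defuzzify.
gm = np.prod(F, axis=1) ** (1.0 / F.shape[0])   # row-wise fuzzy geometric means
weights_fuzzy = gm / gm.sum(axis=0)[::-1]       # (l,m,u) divided by (sum u, sum m, sum l)
weights = weights_fuzzy.mean(axis=1)            # centroid defuzzification
weights /= weights.sum()
print("criteria weights:", weights.round(3))

# Score each material on the criteria and split into A/B/C classes by score.
scores = np.array([[0.9, 0.4, 0.8], [0.3, 0.7, 0.2], [0.5, 0.5, 0.5]]) @ weights
ranks = np.argsort(scores)[::-1]
print("A:", ranks[0], " B:", ranks[1], " C:", ranks[2])
```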

6.
Without a stable, unified classification scheme for financial instruments, regulators and market participants lack the ability to communicate about and understand financial instruments effectively. Establishing a unified classification standard for financial instruments is therefore a core requirement for the healthy development of the modern global financial industry, and an important reason for China to develop such a standard. This paper surveys the current state of international classification standards for financial instruments, analyzes the importance and urgency of developing Chinese classification standards for securities and related financial instruments, describes the basic principles and methods for developing such standards with emphasis on the classification principles and examples, and concludes with an analysis and outlook for the standard to be promulgated.

7.
Emission reduction policies have quite diverse economic effects on different industries, so scientific estimation of these effects has important practical implications for industrial development. In this paper, a multi-objective programming approach integrated with an input–output analysis model is used to evaluate the impact of emission reduction policy on the cost of reducing gas emissions and undertaking industrial adjustment in the Chinese vehicle industry. The empirical results show that gas emission control has a positive influence on vehicle industry production value, but this influence is lower than the average macroeconomic cost of CO2 emission in China. The policy impact on the vehicle industry is less severe than on other high-emission industries, and at the same time the enforcement of reduction policy is an opportunity for new energy vehicle development.

8.
With growing market demands, the classification of highly reliable products becomes more and more significant. Degradation data provide information about degradation states and can be used to classify products into various classes according to their reliability attributes. In this paper, a temporal probabilistic approach, named the segmental continuous hidden Markov model (SCHMM), is proposed to tackle the problem of degradation modeling and classification for mixed populations. Separate SCHMMs are built for each class of the mixed populations; the SCHMMs directly depict the correspondence between actual degradation and the hidden states. A novel method, a self-training algorithm, is proposed for preprocessing the original data from the mixed populations. Furthermore, the unknown parameters of the SCHMMs are estimated by maximum likelihood from the complete degradation data. The root mean square error of the estimated degradation values against the actual physical degradation values, together with the Akaike and Bayesian information criteria, is used to evaluate fitting accuracy and to select model topologies and discretization methods. Maximum-posterior-probability-based classification criteria are then developed. Degradation tests are designed for data collection. To obtain the optimal classification policies, a cost function consisting of the degradation test cost and the misclassification cost is constructed. A numerical example illustrates the proposed method and demonstrates its advantages over other classification methods.
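The maximum-posterior classification idea can be sketched with ordinary continuous HMMs. The example below uses the hmmlearn package's GaussianHMM (an assumed dependency) as a stand-in for the paper's segmental variant; the toy degradation paths, state count, and uniform priors are all assumptions.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM   # assumed dependency: pip install hmmlearn

def fit_class_hmms(sequences_by_class, n_states=4):
    """Fit one continuous HMM per reliability class on its degradation paths."""
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.concatenate(seqs)                   # stack (T_i, 1) degradation paths
        lengths = [len(s) for s in seqs]
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
        models[label] = m.fit(X, lengths)
    return models

def classify(models, seq, priors=None):
    """Assign a new degradation path to the class with maximum posterior."""
    priors = priors or {k: 1.0 / len(models) for k in models}
    post = {k: m.score(seq) + np.log(priors[k]) for k, m in models.items()}
    return max(post, key=post.get)

# Toy data: class "weak" degrades faster than class "strong".
rng = np.random.default_rng(1)
data = {
    "weak":   [np.cumsum(rng.normal(1.0, 0.2, 50))[:, None] for _ in range(10)],
    "strong": [np.cumsum(rng.normal(0.4, 0.2, 50))[:, None] for _ in range(10)],
}
models = fit_class_hmms(data)
print(classify(models, np.cumsum(rng.normal(0.9, 0.2, 50))[:, None]))  # likely "weak"
```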

9.
Classification of imbalanced data, in which one class is heavily outnumbered by the others, is a well-explored issue in the data mining and machine learning community. Imbalanced distributions occur naturally in real-world datasets and must be handled carefully to extract useful insights. On imbalanced data sets, traditional classifiers sacrifice performance and produce misclassifications. This paper suggests a fuzzy weighted nearest-neighbor approach to deal with this issue. We adapt the 'existing algorithm modification' solution to learn from imbalanced datasets, classifying data without manipulating its natural distribution, unlike other popular data-balancing methods. K-nearest neighbor (KNN) is a non-parametric classification method widely used in machine learning. Fuzzy classification with the nearest neighbor makes explicit the degree to which an instance belongs to each class, and optimal weights combined with an improved nearest-neighbor concept help classify imbalanced data correctly. The proposed hybrid approach accounts for the imbalanced nature of the data and reduces the inaccuracies that arise when original and traditional classifiers are applied. Results show that it outperforms existing fuzzy nearest-neighbor and weighted-neighbor strategies for imbalanced learning.
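A compact NumPy sketch of a fuzzy, class-weighted KNN in the spirit described follows; it combines Keller-style fuzzy distance memberships with inverse class-frequency weights, and does not reproduce the paper's exact weighting scheme.

```python
import numpy as np

def fuzzy_weighted_knn(X_train, y_train, x, k=5, m=2.0):
    """Keller-style fuzzy KNN with inverse class-frequency weights:
    one way to bias the vote toward the minority class."""
    classes, counts = np.unique(y_train, return_counts=True)
    class_w = {c: len(y_train) / cnt for c, cnt in zip(classes, counts)}

    d = np.sqrt(((X_train - x) ** 2).sum(axis=1))
    nn = np.argsort(d)[:k]
    inv = 1.0 / np.maximum(d[nn], 1e-12) ** (2.0 / (m - 1.0))  # fuzzy distance weights

    memberships = {}
    for c in classes:
        memberships[c] = (inv * (y_train[nn] == c) * class_w[c]).sum() / inv.sum()
    return max(memberships, key=memberships.get), memberships

# Toy imbalanced data: 95 majority points vs 5 minority points.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (95, 2)), rng.normal(1.5, 0.5, (5, 2))])
y = np.array([0] * 95 + [1] * 5)
print(fuzzy_weighted_knn(X, y, np.array([1.4, 1.4])))
```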

10.
This paper develops a novel computational framework to compute the Sobol indices that quantify the relative contributions of various uncertainty sources towards the system response prediction uncertainty. In the presence of both aleatory and epistemic uncertainty, two challenges are addressed for the model-based computation of the Sobol indices: due to data uncertainty, the input distributions are not precisely known; and due to model uncertainty, the model output is uncertain even for a fixed realization of the input. An auxiliary variable method based on the probability integral transform is introduced to distinguish and represent each uncertainty source explicitly, whether aleatory or epistemic. The auxiliary variables facilitate building a deterministic relationship between the uncertainty sources and the output, which is needed in the Sobol indices computation. The proposed framework is developed for two types of model inputs: random variable input and time series input. A Bayesian autoregressive moving average (ARMA) approach is chosen to model the time series input because it can represent both natural variability and the epistemic uncertainty due to limited data. A novel controlled-seed computational technique based on pseudo-random number generation is proposed to efficiently represent the natural variability in the time series input. This controlled-seed method significantly accelerates the Sobol indices computation under time series input and makes it computationally affordable.
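For contrast with the paper's auxiliary-variable framework, a plain Sobol-index computation can be sketched with the SALib package (an assumed dependency) on the standard Ishigami test function; this shows what the indices quantify, not the paper's own method.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol   # assumed dependency: pip install SALib

# Quantify how much each input contributes to the output variance.
problem = {
    "num_vars": 3,
    "names": ["x1", "x2", "x3"],
    "bounds": [[-np.pi, np.pi]] * 3,
}

X = saltelli.sample(problem, 1024)   # quasi-random input samples
Y = np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2 \
    + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0])          # Ishigami test function
Si = sobol.analyze(problem, Y)
print("first-order:", np.round(Si["S1"], 3))
print("total-order:", np.round(Si["ST"], 3))
```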

11.
Statistical estimates from simulation involve uncertainty caused by limited data on the input random variables. Allocating resources to obtain more experimental data on the input variables, and thus better characterize their probability distributions, can reduce the variance of statistical estimates. The proposed methodology determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses the multivariate t-distribution and the Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data, and handles both independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable on the output function, and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
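The sampling step for the population mean and covariance can be sketched directly with SciPy, which provides both the multivariate t and Wishart distributions; the data, hyperparameters, and the Wishart scaling below are illustrative assumptions, and the allocation optimization itself is omitted.

```python
import numpy as np
from scipy import stats

# Sketch: given n observed samples of correlated inputs, draw plausible
# population means (multivariate t) and covariance matrices (Wishart).
rng = np.random.default_rng(3)
data = rng.multivariate_normal([10.0, 5.0], [[4.0, 1.2], [1.2, 2.0]], size=25)
n, d = data.shape
xbar, S = data.mean(axis=0), np.cov(data, rowvar=False)

# Draws of the unknown mean vector (spread shrinks as n grows).
mean_draws = stats.multivariate_t(loc=xbar, shape=S / n, df=n - 1).rvs(1000, random_state=0)

# Draws of the unknown covariance matrix (mean of draws is approximately S).
cov_draws = stats.wishart(df=n - 1, scale=S / (n - 1)).rvs(1000, random_state=0)

print("spread of mean estimate:", mean_draws.std(axis=0).round(3))
print("mean of sampled covariances:\n", cov_draws.mean(axis=0).round(2))
```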

12.
This article analyzes the main content of the EU Classification, Labelling and Packaging (CLP) Regulation published on 16 December 2008. It briefly introduces the purpose, scope, and implementation timeline of the regulation; describes in detail the classification methods, label elements, and special packaging requirements for the substances and compounds it covers; and briefly reviews the current state of chemicals management in the United States, Japan, the EU, and China. On this basis, it takes a forward-looking view of the CLP Regulation's potential impact on the packaging industry, namely higher packaging requirements for Chinese export products and indirectly increased costs for imported products, and offers constructive suggestions for how China's packaging industry can respond to the CLP Regulation.

13.
An ideal printed circuit board (PCB) defect inspection system can both detect defects and classify PCB defect types. Existing defect inspection technologies can identify defects but fail to classify all PCB defect types. This research thus proposes an algorithmic scheme that can detect and categorize all 14 known PCB defect types. In the proposed scheme, fuzzy c-means clustering is used for image segmentation via image subtraction prior to defect detection. Arithmetic and logic operations, the circle Hough transform (CHT), morphological reconstruction (MR), and connected component labeling (CCL) are used in defect classification. The scheme achieves 100% defect detection and 99.05% defect classification accuracies. The novelty of this research lies in the concurrent use of the CHT, MR, and CCL algorithms to accurately detect and classify all 14 known PCB defect types and to determine defect characteristics such as the location, area, and nature of each defect. This information helps electronics manufacturers find the root causes of PCB defects and adjust the manufacturing process appropriately. Moreover, the scheme can be integrated into machine vision systems to streamline the manufacturing process, improve PCB quality, and lower production cost.
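A hedged OpenCV sketch of the detection side follows: reference subtraction and simple thresholding stand in for the paper's fuzzy c-means segmentation, and connected component labeling plus the circle Hough transform recover defect location, area, and round features; all parameter values are illustrative.

```python
import cv2
import numpy as np

def inspect_pcb(test_img, reference_img):
    """Subtract a defect-free reference, binarize the difference, then
    label and measure defect regions. (Thresholding stands in here for
    the paper's fuzzy c-means segmentation.)"""
    diff = cv2.absdiff(test_img, reference_img)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)

    # Connected component labeling gives each defect's location and area.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask, connectivity=8)
    defects = [{"centroid": tuple(centroids[i]), "area": int(stats[i, cv2.CC_STAT_AREA])}
               for i in range(1, n)]                      # label 0 is the background

    # The circle Hough transform can flag round features (e.g. hole defects).
    circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=100, param2=15, minRadius=3, maxRadius=30)
    return defects, circles

ref = np.zeros((200, 200), np.uint8)
test = ref.copy()
cv2.circle(test, (60, 80), 8, 255, -1)                    # synthetic "spur" defect
print(inspect_pcb(test, ref)[0])
```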

14.
Various models that may be used for quantitative assessment of hardware, software, and human reliability are compared in this paper. Important comparison criteria are the system life-cycle phase in which the model is intended to be used; the failure category and reliability means considered in the model; the model's purpose; and model characteristics such as the construction approach, model output, and model input. The main objective is to present limitations in the use of current models for reliability assessment of computer-based safety shutdown systems in the process industry and to provide recommendations for further model development. Attention is given mainly to presenting the overall concept of the various models from a user's point of view rather than the technical details of specific models. A new failure classification scheme is proposed which shows how hardware and software failures may be modelled in a common framework.

15.
In this paper, we consider the material flow network design problem, in which the locations of the input and output points of departments and the flow paths are determined concurrently on a given block layout. The objective is to minimize the sum of transportation cost, flow-path construction cost, and a penalty cost for non-smooth material flows, i.e., flows with turns. A mixed integer programming model is given for the problem and a three-phase heuristic algorithm is developed to solve it. In the suggested algorithm, we generate an initial flow network by determining the locations of input/output points and flow paths sequentially in the first and second phases, respectively, and then improve it by iteratively changing the locations of input/output points and flow paths in the third phase. To evaluate the performance of the suggested algorithm, a series of computational experiments is performed on well-known problem instances as well as randomly generated test problems. The results show that the suggested algorithm gives good solutions in a short computation time.
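A toy PuLP model (an assumed dependency) of the first-phase idea is sketched below: choose one I/O point per department from candidate spots to minimize flow-weighted rectilinear distance. The departments, candidate locations, and flows are invented, and the paper's full model also covers flow-path construction and turn penalties.

```python
import pulp   # assumed dependency: pip install pulp

# Candidate I/O locations per department and inter-department unit-load flows.
candidates = {"A": [(0, 0), (0, 4)], "B": [(6, 0), (6, 4)], "C": [(3, 8)]}
flows = {("A", "B"): 10, ("B", "C"): 4, ("A", "C"): 2}

dist = lambda p, q: abs(p[0] - q[0]) + abs(p[1] - q[1])   # rectilinear distance

prob = pulp.LpProblem("io_point_location", pulp.LpMinimize)
pick = {(d, i): pulp.LpVariable(f"pick_{d}_{i}", cat="Binary")
        for d, pts in candidates.items() for i in range(len(pts))}

# Linearize the product pick[d,i]*pick[e,j] with auxiliary pair variables.
pair = {}
for (d, e), f in flows.items():
    for i, p in enumerate(candidates[d]):
        for j, q in enumerate(candidates[e]):
            v = pulp.LpVariable(f"pair_{d}{i}_{e}{j}", cat="Binary")
            prob += v >= pick[d, i] + pick[e, j] - 1
            pair[d, i, e, j] = (v, f * dist(p, q))

prob += pulp.lpSum(v * c for v, c in pair.values())       # flow-weighted transport cost
for d, pts in candidates.items():                         # exactly one I/O point each
    prob += pulp.lpSum(pick[d, i] for i in range(len(pts))) == 1

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({d: candidates[d][i] for (d, i), v in pick.items() if v.value() == 1})
```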

16.
Josef Kallrath. OR Spectrum, 2002, 24(3): 315-341
We describe and solve a real-world problem in the chemical industry which combines operational planning with strategic aspects. In our simultaneous strategic and operational planning (SSDOP) approach we develop a model based on mixed-integer linear programming (MILP) optimization and apply it to a real-world problem; the approach seems applicable in many other situations, provided that people in the production planning, process development, strategic, and financial planning departments cooperate. The problem is related to the supply chain management of a multi-site production network in which production units are subject to purchase, opening, or shut-down decisions, leading to an MILP model based on a time-indexed formulation. Besides the framework of the SSDOP approach and consistent net present value calculations, the model includes two special and original features: a detailed nonlinear price structure for the raw-material purchase model, and a detailed treatment of transport times with respect to the time discretization scheme involving a probability concept. In a net-profit-maximizing scenario the client reports cost savings of several million US$. The strategic features of the model are analyzed in a consistent framework based on the operational planning model, and vice versa: the demand-driven operational planning part links consistently to, and influences, the strategic part. Since the results (strategic decisions or designs) have consequences for many years and depend on demand forecasts, raw-material availability, and expected costs or sales prices, respectively, a careful sensitivity analysis is necessary to show how stable the decisions are with respect to these input data.

17.
Many approaches have been tried for the classification of arrhythmia. Due to the dynamic nature of electrocardiogram (ECG) signals, it is challenging to use traditional handcrafted techniques, making a machine learning (ML) implementation attractive. Competent monitoring of cardiac arrhythmia patients can save lives, and cardiac arrhythmia prediction and classification have improved significantly over the last few years. Arrhythmias are a group of conditions in which the electrical activity of the heart is abnormal, either faster or slower than normal; they are among the most frequent causes of death for both men and women worldwide every year. This paper presents a deep learning (DL) technique for the classification of arrhythmias. The proposed technique uses the University of California, Irvine (UCI) repository, which provides a high-dimensional cardiac arrhythmia dataset of 279 attributes. Our goal was to classify cardiac arrhythmia patients into 16 classes depending on the characteristics of the electrocardiography dataset. The DL approach in the form of long short-term memory (LSTM) is an efficient technique for dealing with the reduced accuracy caused by vanishing and exploding gradients in traditional DL frameworks for big-data analysis. The goal of this research was to categorize cardiac arrhythmia patients by developing an efficient intelligent system using the LSTM DL algorithm. The approach combines a classification algorithm with noise-removal techniques: we utilized principal component analysis (PCA) for noise removal and LSTM for classification. This hybrid arrhythmia classification approach performs better than previous approaches, attaining a highest classification accuracy of 93.5% and outperforming earlier approaches to cardiac arrhythmia classification.
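A minimal Keras sketch of the PCA-then-LSTM pipeline follows. The random stand-in data (452 × 279 with 16 classes, matching the UCI arrhythmia dataset's shape), the choice of 40 components, and the trick of feeding the components to the LSTM as a pseudo-sequence are all assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from tensorflow import keras
from tensorflow.keras import layers

rng = np.random.default_rng(4)
X, y = rng.random((452, 279)), rng.integers(0, 16, 452)  # stand-in for the UCI data

Z = PCA(n_components=40).fit_transform(X)                # denoise / reduce dimension
Z = Z[..., None]                                         # (samples, 40 steps, 1 feature)

model = keras.Sequential([
    keras.Input(shape=(40, 1)),
    layers.LSTM(64),
    layers.Dense(16, activation="softmax"),              # 16 arrhythmia classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(Z, y, epochs=2, batch_size=32, verbose=0)
print(model.predict(Z[:3]).argmax(axis=1))
```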

18.
This paper presents a new approach to classifying six anomaly types of control chart patterns (CCPs): systematic, cyclic, upward shift, downward shift, upward trend, and downward trend. Current CCP recognition methods use either unprocessed raw data or complex transformed features (via principal component analysis or the discrete wavelet transform) as the input representation for the classifier. The objective of using selected features is not only dimension reduction of the input representation but also data compression. Using raw data is often computationally inefficient, while deriving transformed features is very tedious in most cases. Owing to its computational advantage, using a few appropriate CCP features to achieve good classification accuracy is therefore more promising for real process implementation. In this study, using three features of CCPs shows quite competitive performance in terms of classification accuracy and computational load. More importantly, the method presented here can potentially be generalized to medical, financial, and other applications of temporal data.
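A sketch of the feature-based approach follows; since the abstract does not name its three features, the slope, lag-1 autocorrelation, and half-window mean jump below are illustrative stand-ins, demonstrated on synthetic windows from two of the six pattern classes.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def ccp_features(window):
    """Three cheap shape features for a control-chart window (illustrative)."""
    t = np.arange(len(window))
    slope = np.polyfit(t, window, 1)[0]                  # trend direction/strength
    lag1 = np.corrcoef(window[:-1], window[1:])[0, 1]    # cyclic/systematic signature
    half = len(window) // 2
    jump = abs(window[half:].mean() - window[:half].mean())  # shift magnitude
    return [slope, lag1, jump]

# Generate toy windows for two of the six CCP classes as a demonstration.
rng = np.random.default_rng(5)
up_trend = [rng.normal(0, 1, 60) + 0.08 * np.arange(60) for _ in range(100)]
up_shift = [rng.normal(0, 1, 60) + 2.0 * (np.arange(60) >= 30) for _ in range(100)]
X = np.array([ccp_features(w) for w in up_trend + up_shift])
y = np.array([0] * 100 + [1] * 100)
clf = RandomForestClassifier(random_state=0).fit(X, y)
print("train acc:", clf.score(X, y))
```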

19.
In robust design, it is common to estimate, from experimental data, empirical models that relate an output response variable to controllable input variables and uncontrollable noise variables. However, when determining the optimal input settings that minimise output variability, parameter uncertainties in the noise factors and response models are typically neglected. This article presents an interval robust design approach that takes parameter uncertainties into account through the confidence regions for these unknown parameters. To avoid an overly conservative design, the worst and best cases of mean squared error are both adopted in the optimisation approach: the midpoint and radius of the resulting interval measure the location and dispersion performances, respectively, and a data-driven method provides the relative weights of the two performances in the optimisation. A simulation example and a case study using automobile manufacturing data from a dimensional tolerance design process demonstrate the effectiveness of the proposed approach, which, by considering both uncertainties, is shown to perform better than other approaches.
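The worst/best-case interval idea can be sketched in NumPy: evaluate the MSE objective at the corners of a confidence box for the unknown parameters (a simplification that assumes the extremes occur at corners), then trade off the interval's midpoint against its radius; the response model, target, and all numbers are illustrative.

```python
import numpy as np
from itertools import product

def mse(x, beta, target=10.0, noise_var=0.5):
    """MSE for response y = b0 + b1*x + b2*x*z with noise z ~ (0, noise_var):
    squared bias from the target plus transmitted noise variance."""
    bias2 = (beta[0] + beta[1] * x - target) ** 2
    var = (beta[2] * x) ** 2 * noise_var
    return bias2 + var

# Confidence box for the unknown parameters (b0, b1, b2); corners enumerate it.
beta_lo, beta_hi = np.array([6.0, 1.0, 0.5]), np.array([7.0, 2.0, 1.5])
corners = [np.where(bits, beta_hi, beta_lo) for bits in product([0, 1], repeat=3)]

def interval_score(x, w=0.5):
    vals = [mse(x, b) for b in corners]
    worst, best = max(vals), min(vals)
    mid, radius = (worst + best) / 2, (worst - best) / 2
    return (1 - w) * mid + w * radius                    # location vs dispersion

grid = np.linspace(0.0, 3.0, 301)
x_opt = grid[np.argmin([interval_score(x) for x in grid])]
print("robust setting x* =", round(float(x_opt), 2))
```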

20.
Research on Classification Models and Methods for Industrial Generic Technologies   Cited by: 3 (self-citations: 0, external: 3)
Building on an in-depth study of the definition and characteristics of industrial generic technologies, this paper classifies them along three dimensions: shareability, importance, and public-good character. A three-dimensional structural model and a matrix model are used to describe industrial generic technologies intuitively, a composite scoring method for classifying them is given, and the technologies are divided into intervals accordingly, providing a reference for government policies supporting innovation in industrial generic technologies.
