Similar Documents
20 similar documents found (search time: 15 ms)
1.
With the rapid growth of computerized research reporting, automatically extracting a generalized set of keywords from electronic records has become a matter of major concern. In this paper, a term-extraction method based on the text mining tools WordStat and CompareSuite Pro is devised to classify a large number of Expert System (ES) articles. The ES categories covered include rule-based, knowledge-based, case-based, intelligent-expert, fuzzy frame-based, and object-oriented approaches. Depending on the position and frequency of the most repeated words, terms, and buzzwords in the article title and body, keywords are selected for each ES strategy. A keyword-driven induction query framework is then built, and the extraction process follows this keyword policy: candidate keywords are ranked by sentence weight, and the resulting rule set is used in data mining to classify articles by ES type. The rule set is first tuned on one subset of articles and then validated on the remaining articles. The results show a very high classification accuracy. Artificial intelligence offers useful computational assets that can be integrated with information science research to provide a more practical and faster way to analyze information. Finally, the estimates could be further improved by refining the keyword policies in future work.

2.
The paper describes a general-purpose board-level fuzzy inference engine intended primarily for experimental and educational applications. The components are all standard TTL integrated circuits (7400 series) and CMOS RAMs (CY7C series). The engine processes 16 rules in parallel with two antecedents and one consequent per rule. The design may easily be scaled to accommodate more or fewer rules. Static RAMs are used to store membership functions of both antecedent and consequent variables. “Min-max” composition is used for inferencing, and for defuzzification, the mean of maxima strategy is used. Simulation on VALID CAE software predicts that the engine is capable of performing up to 1.56 million fuzzy logic inferences per second.
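For readers who want to experiment in software, the following is a minimal sketch of the inference scheme described above, assuming 16 made-up rules over a 256-point discrete universe; it mirrors the min-max composition and mean-of-maxima defuzzification of the abstract but is, of course, not the TTL/RAM board itself:

import numpy as np

N = 256                              # resolution of the discretized universe, analogous to a RAM lookup table
rng = np.random.default_rng(0)

def tri(center, width, n=N):
    # triangular membership function sampled on 0..n-1
    x = np.arange(n)
    return np.clip(1.0 - np.abs(x - center) / width, 0.0, 1.0)

# 16 hypothetical rules, each with two antecedent MFs and one consequent MF
rules = [(tri(rng.integers(0, N), 40), tri(rng.integers(0, N), 40), tri(rng.integers(0, N), 40))
         for _ in range(16)]

def infer(a, b):
    agg = np.zeros(N)
    for mf_a, mf_b, mf_c in rules:
        w = min(mf_a[a], mf_b[b])                    # "min": firing strength of the rule
        agg = np.maximum(agg, np.minimum(w, mf_c))   # "max": aggregate the clipped consequents
    maxima = np.flatnonzero(agg == agg.max())
    return maxima.mean()                             # mean-of-maxima defuzzification

print(infer(100, 180))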

3.
In this study, we propose a novel solution for collecting smart meter data by merging Vehicular Ad-Hoc Networks (VANET) and smart grid communication technologies. In the proposed mechanism, Vehicular Ad-Hoc Networks are utilized to collect data from smart meters, eliminating the need for manual meter reading. To the best of our knowledge, this is the first study proposing the utilization of public transportation vehicles for collecting data from smart meters. This work also proposes, for the first time, the use of the IEEE 802.11p protocol in smart grid applications. In our scheme, data flows first from smart meters to a bus through infrastructure-to-vehicle (I2V) communication and then from the bus to a bus stop through vehicle-to-infrastructure (V2I) communication. The performance of the proposed mechanism is investigated in detail in terms of end-to-end delay and delivery ratio using Network Simulator-2 with different routing protocols.

4.
Power generation facilities cannot avoid performance degradation caused by severe operating conditions such as high temperature and high pressure, as well as by the aging of facilities. Since such degradation can inflict economic losses on power generation plants, a systematic method is required to accurately diagnose the condition of the facilities. This paper introduces a fuzzy inference system that applies fuzzy theory to diagnose performance degradation in feedwater heaters among power generation facilities. Feedwater heaters are selected as the object of analysis because they play an important role in the performance degradation of power plants and have recently been reported to fail. In addition, feedwater heaters offer many data types that can be used in fuzzy inference because of fewer measurement limitations compared to other facilities. The fuzzy inference system consists of fuzzy sets and rules with linguistic variables based on expert knowledge, experience, and simulation results, so as to efficiently handle the various uncertainties of the target facility, and we propose a method for establishing a more elaborate system. According to the experimental results, inference can account for uncertainties by quantifying the target condition based on fuzzy theory. Based on this study, the implementation of a fuzzy inference system for diagnosing feedwater heater performance degradation is expected to contribute to the efficient management of power generation plants.
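As an illustration only, the short sketch below encodes two expert-style rules of the kind described above in a Mamdani-style system with centroid defuzzification; the input variables (terminal temperature difference and drain cooler approach, normalized to 0..1), the membership functions, and the rules are hypothetical assumptions, not the authors' actual system:

import numpy as np

x = np.linspace(0.0, 1.0, 101)      # universe for the "degradation" output (0 = healthy, 1 = severe)

def trapmf(v, a, b, c, d):
    # trapezoidal membership value of a scalar v
    if v <= a or v >= d:
        return 0.0
    if b <= v <= c:
        return 1.0
    return (v - a) / (b - a) if v < b else (d - v) / (d - c)

# Hypothetical linguistic terms for the two (normalized) inputs
ttd_high = lambda v: trapmf(v, 0.5, 0.7, 1.0, 1.01)   # terminal temperature difference is high
dca_high = lambda v: trapmf(v, 0.5, 0.7, 1.0, 1.01)   # drain cooler approach is high
ttd_low  = lambda v: trapmf(v, -0.01, 0.0, 0.3, 0.5)
dca_low  = lambda v: trapmf(v, -0.01, 0.0, 0.3, 0.5)

out_severe = np.clip((x - 0.5) / 0.5, 0, 1)           # output fuzzy sets
out_normal = np.clip((0.5 - x) / 0.5, 0, 1)

def diagnose(ttd, dca):
    # Rule 1: IF TTD is high AND DCA is high THEN degradation is severe
    w1 = min(ttd_high(ttd), dca_high(dca))
    # Rule 2: IF TTD is low AND DCA is low THEN degradation is normal
    w2 = min(ttd_low(ttd), dca_low(dca))
    agg = np.maximum(np.minimum(w1, out_severe), np.minimum(w2, out_normal))
    s = agg.sum()
    return (agg * x).sum() / s if s > 0 else 0.5      # centroid defuzzification, mid-scale if no rule fires

print(diagnose(0.8, 0.75))   # expect a value well above 0.5 (toward severe)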

5.
A summarization technique creates a concise version of a large amount of data (big data), which reduces the computational cost of analysis and decision-making. There are interesting data patterns, such as rare anomalies, which are more infrequent in nature than other data instances. For example, in a smart healthcare environment, the proportion of infrequent patterns in the underlying cyber physical system (CPS) is very low. Existing summarization techniques overlook the issue of representing such interesting infrequent patterns in a summary. In this paper, a novel clustering-based technique is proposed that uses an information-theoretic measure to identify infrequent patterns for inclusion in a summary. Experiments conducted on seven benchmark CPS datasets show substantially better results than existing techniques in terms of including infrequent patterns in summaries.
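A minimal sketch of the general idea, under assumptions: k-means stands in for the clustering step, a surprisal score (-log2 of cluster frequency) stands in for the paper's information-theoretic measure, and the data are synthetic rather than the benchmark CPS datasets:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Synthetic data: a large "normal" population plus a tiny anomalous group
normal = rng.normal(0.0, 1.0, size=(1000, 3))
rare   = rng.normal(6.0, 0.3, size=(10, 3))
X = np.vstack([normal, rare])

km = KMeans(n_clusters=8, n_init=10, random_state=0).fit(X)
labels, counts = np.unique(km.labels_, return_counts=True)
p = counts / counts.sum()
surprisal = -np.log2(p)            # information content: rare clusters score high

# Budget the summary so high-surprisal (infrequent) clusters always contribute,
# instead of being drowned out by proportional sampling.
summary_size = 50
share = np.maximum(1, np.round(summary_size * surprisal / surprisal.sum()).astype(int))

summary = []
for lab, k in zip(labels, share):
    members = np.flatnonzero(km.labels_ == lab)
    summary.extend(rng.choice(members, size=min(k, members.size), replace=False))

print(len(summary), "summary points; cluster sizes:", dict(zip(labels.tolist(), counts.tolist())))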

6.
This study presents an intelligent model based on fuzzy systems for making a quantitative formulation between seismic attributes and petrophysical data. The proposed methodology comprises two major steps. First, the petrophysical data, including water saturation (Sw) and porosity, are predicted from seismic attributes using various fuzzy inference systems (FISs), including Sugeno (SFIS), Mamdani (MFIS) and Larsen (LFIS). Second, a committee fuzzy inference system (CFIS) is constructed using a hybrid genetic algorithm-pattern search (GA-PS) technique. The inputs of the CFIS model are the outputs and averages of the FIS petrophysical predictions. The methodology is illustrated using 3D seismic and petrophysical data from 11 wells of an Iranian offshore oil field in the Persian Gulf. The performance of the CFIS model is compared with a probabilistic neural network (PNN). The results show that the CFIS method performs better than the neural network, the best individual fuzzy model, and a simple averaging method.
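A toy sketch of the committee idea follows: three base-model predictions are combined with weights chosen to minimize squared error, with an ordinary least-squares fit standing in for the paper's GA-PS hybrid and synthetic numbers standing in for real well data:

import numpy as np

rng = np.random.default_rng(2)
true_porosity = rng.uniform(0.05, 0.30, size=200)      # hypothetical target values

# Outputs of three imaginary FIS models (Sugeno, Mamdani, Larsen), each noisy or biased differently
sfis = true_porosity + rng.normal(0.00, 0.02, 200)
mfis = true_porosity * 1.1 + rng.normal(0.00, 0.03, 200)
lfis = true_porosity + 0.02 + rng.normal(0.00, 0.025, 200)

A = np.column_stack([sfis, mfis, lfis, np.ones_like(sfis)])    # include a bias term
w, *_ = np.linalg.lstsq(A, true_porosity, rcond=None)          # committee weights
cfis = A @ w

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print("best single model RMSE:", min(rmse(m, true_porosity) for m in (sfis, mfis, lfis)))
print("simple average RMSE:   ", rmse((sfis + mfis + lfis) / 3, true_porosity))
print("committee RMSE:        ", rmse(cfis, true_porosity))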

7.
This paper gives an overview of two middleware systems that have been developed over the last 6 years to address the challenges involved in developing parallel and distributed implementations of data mining algorithms. FREERIDE (FRamework for Rapid Implementation of Data mining Engines) focuses on data mining in a cluster environment. FREERIDE is based on the observation that parallel versions of several well-known data mining techniques share a relatively similar structure, and can be parallelized by dividing the data instances (or records or transactions) among the nodes. The computation on each node involves reading the data instances in an arbitrary order, processing each data instance, and performing a local reduction. The reduction involves only commutative and associative operations, which means the result is independent of the order in which the data instances are processed. After the local reduction on each node, a global reduction is performed. This similarity in the structure can be exploited by the middleware system to execute the data mining tasks efficiently in parallel, starting from a relatively high-level specification of the technique.
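A minimal sketch of that local-reduction / global-reduction structure, assuming a toy per-record operation (FREERIDE itself is a C++ middleware; this only mirrors the programming pattern):

from multiprocessing import Pool
from collections import Counter
import random

def local_reduction(records):
    # Each "node" scans its records in arbitrary order and accumulates a
    # commutative, associative partial result (here: per-item counts).
    acc = Counter()
    for rec in records:
        acc[rec % 10] += 1          # stand-in for "process each data instance"
    return acc

def global_reduction(partials):
    total = Counter()
    for p in partials:
        total.update(p)             # merge order does not matter
    return total

if __name__ == "__main__":
    random.seed(0)
    data = [random.randrange(1000) for _ in range(100_000)]
    n_nodes = 4
    chunks = [data[i::n_nodes] for i in range(n_nodes)]   # divide data instances among nodes
    with Pool(n_nodes) as pool:
        partials = pool.map(local_reduction, chunks)
    print(global_reduction(partials).most_common(3))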

8.
Unemployment rate prediction has become critically significant because it can help governments make decisions and design policies. In previous studies, traditional univariate time series models and econometric methods for unemployment rate prediction have attracted much attention from governments, organizations, research institutes, and scholars. Recently, novel methods using search engine query data were proposed to forecast the unemployment rate. In this paper, a data mining framework using search engine query data for unemployment rate prediction is presented. Under the framework, a set of data mining tools including neural networks (NNs) and support vector regressions (SVRs) is developed to forecast the unemployment trend. In the proposed method, search engine query data related to employment activities is first extracted. Second, a feature selection model is applied to reduce the dimension of the query data. Third, various NNs and SVRs are employed to model the relationship between unemployment rate data and query data, and a genetic algorithm is used to optimize the parameters and refine the features simultaneously. Fourth, an appropriate data mining method is selected as the predictor by cross-validation. Finally, the selected predictor with the best feature subset and proper parameters is used to forecast the unemployment trend. The empirical results show that the proposed framework clearly outperforms traditional forecasting approaches, and that support vector regression with a radial basis function (RBF) kernel is dominant for unemployment rate prediction. These findings imply that the data mining framework is efficient for unemployment rate prediction and can strengthen the government's capacity for quick response and service.
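A small sketch of the final modeling step under assumptions: synthetic query-volume features stand in for the real search-engine data, and a grid-searched SVR with an RBF kernel is selected by cross-validation in place of the full NN/SVR/GA pipeline:

import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
n = 120                                              # e.g. ten years of monthly observations
queries = rng.normal(size=(n, 5))                    # hypothetical query-volume features
unemployment = 5 + queries @ np.array([0.8, -0.4, 0.3, 0.0, 0.0]) + rng.normal(0, 0.2, n)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = GridSearchCV(
    model,
    param_grid={"svr__C": [1, 10, 100], "svr__gamma": ["scale", 0.1, 0.01]},
    cv=TimeSeriesSplit(n_splits=5),                  # respect temporal order when validating
    scoring="neg_mean_absolute_error",
)
grid.fit(queries[:-12], unemployment[:-12])          # hold out the last year
pred = grid.predict(queries[-12:])
print(grid.best_params_, "MAE:", np.abs(pred - unemployment[-12:]).mean())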

9.
10.
This paper presents a new approach for power quality time series data mining using an S-transform based fuzzy expert system (FES). Initially, the power signal time series disturbance data are pre-processed through an advanced signal processing tool, the S-transform, and various statistical features are extracted, which are used as inputs to the fuzzy expert system for power quality event detection. The proposed expert system uses a data mining approach for assigning a certainty factor to each classification rule, thereby providing robustness to the rule in the presence of noise. Further, to provide a very high degree of accuracy in pattern classification, both the Gaussian and trapezoidal membership functions of the concerned fuzzy sets are optimized using a fuzzy logic based adaptive particle swarm optimization (PSO) technique. The proposed hybrid PSO-fuzzy expert system (PSOFES) provides accurate classification rates even under noisy conditions compared to the existing techniques, which shows the efficacy and robustness of the proposed algorithm for power quality time series data mining.
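A compact sketch of the membership-function tuning step only: a plain PSO (without the paper's fuzzy-adaptive variant) adjusts the mean and spread of two Gaussian membership functions so that a one-rule classifier separates two synthetic feature clouds; all data and parameters here are illustrative assumptions:

import numpy as np

rng = np.random.default_rng(4)
# Synthetic 1-D feature (e.g. an S-transform energy statistic) for two event classes
x0 = rng.normal(0.3, 0.08, 200)      # class 0
x1 = rng.normal(0.7, 0.08, 200)      # class 1
x = np.concatenate([x0, x1]); y = np.concatenate([np.zeros(200), np.ones(200)])

def gauss(v, mu, sigma):
    return np.exp(-0.5 * ((v - mu) / sigma) ** 2)

def error(params):
    mu0, s0, mu1, s1 = params
    # classify by whichever Gaussian membership is larger
    pred = (gauss(x, mu1, abs(s1) + 1e-6) > gauss(x, mu0, abs(s0) + 1e-6)).astype(float)
    return np.mean(pred != y)

# Plain particle swarm optimization over the four membership parameters
n_particles, dims, iters = 20, 4, 60
pos = rng.uniform(0, 1, (n_particles, dims)); vel = np.zeros_like(pos)
pbest = pos.copy(); pbest_err = np.array([error(p) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.uniform(size=(2, n_particles, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    errs = np.array([error(p) for p in pos])
    improved = errs < pbest_err
    pbest[improved] = pos[improved]; pbest_err[improved] = errs[improved]
    gbest = pbest[pbest_err.argmin()].copy()

print("tuned parameters:", np.round(gbest, 3), "error rate:", pbest_err.min())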

11.
Online tuning of fuzzy inference systems using dynamic fuzzy Q-learning
This paper presents a dynamic fuzzy Q-learning (DFQL) method that is capable of tuning fuzzy inference systems (FIS) online. A novel online self-organizing learning algorithm is developed so that structure and parameter identification are accomplished automatically and simultaneously, based only on Q-learning. Self-organizing fuzzy inference is introduced to calculate actions and Q-functions so as to deal with continuous-valued states and actions. Fuzzy rules provide a natural means of incorporating the bias components needed for rapid reinforcement learning. Experimental results and comparative studies with fuzzy Q-learning (FQL) and continuous-action Q-learning on the wall-following task of mobile robots demonstrate that the proposed DFQL method is superior.
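A stripped-down sketch of the fuzzy Q-learning machinery underlying such methods, with a fixed rule structure rather than the paper's online self-organizing one; the one-dimensional state, action set, and reward are toy assumptions:

import numpy as np

rng = np.random.default_rng(5)
ACTIONS = np.array([-1.0, 0.0, 1.0])         # discrete action set shared by every rule
N_RULES = 5
centers = np.linspace(0.0, 1.0, N_RULES)     # fixed Gaussian rule centers over a 1-D state
q = np.zeros((N_RULES, ACTIONS.size))        # q-value of each action within each rule

def firing(s):
    w = np.exp(-0.5 * ((s - centers) / 0.15) ** 2)
    return w / w.sum()

def act(s, eps=0.1):
    w = firing(s)
    # each rule votes for an action (epsilon-greedy); global action = firing-weighted mix
    choice = np.where(rng.random(N_RULES) < eps,
                      rng.integers(0, ACTIONS.size, N_RULES),
                      q.argmax(axis=1))
    return w, choice, float(w @ ACTIONS[choice])

alpha, gamma, s = 0.1, 0.9, 0.2
for step in range(2000):
    w, choice, a = act(s)
    s_next = np.clip(s + 0.1 * a + rng.normal(0, 0.02), 0.0, 1.0)
    reward = -abs(s_next - 0.8)                          # toy task: drive the state toward 0.8
    target = reward + gamma * float(firing(s_next) @ q.max(axis=1))
    td = target - float(w @ q[np.arange(N_RULES), choice])
    q[np.arange(N_RULES), choice] += alpha * td * w      # credit shared by firing strength
    s = s_next

print("greedy action at s=0.2:", ACTIONS[q.argmax(axis=1)] @ firing(0.2))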

12.
路艳丽  雷英杰  王坚 《计算机应用》2007,27(11):2814-2816
Intuitionistic fuzzy reasoning overcomes the limitations of ordinary fuzzy reasoning in describing uncertain information and in the credibility of reasoning results. After introducing the intuitionistic extension of ordinary fuzzy reasoning, this paper first analyzes the mutual conversion between the two types of reasoning algorithms, showing that ordinary fuzzy reasoning is a special case of intuitionistic fuzzy reasoning: the two are interchangeable when the intuitionistic index is 0. Second, the reducibility of the two types of algorithms is compared; the analysis shows that Zadeh-type, Mamdani-type and Larsen-type intuitionistic fuzzy reasoning algorithms have the same reducibility as their corresponding ordinary fuzzy reasoning algorithms. Finally, examples are used to study the advantages of intuitionistic fuzzy reasoning in the precision and credibility of reasoning results, which make it more suitable than ordinary fuzzy reasoning for intelligent control and decision-making.
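As a small illustration of the special-case relationship described above (the numbers are arbitrary): an intuitionistic fuzzy element carries a membership degree mu, a non-membership degree nu, and an intuitionistic (hesitancy) index pi = 1 - mu - nu; when pi = 0 it collapses to an ordinary fuzzy membership.

from dataclasses import dataclass

@dataclass
class IFSElement:
    mu: float   # membership degree
    nu: float   # non-membership degree

    @property
    def pi(self) -> float:
        # intuitionistic (hesitancy) index
        return 1.0 - self.mu - self.nu

    def is_ordinary_fuzzy(self) -> bool:
        # ordinary fuzzy sets are the special case with zero hesitancy (nu = 1 - mu)
        return abs(self.pi) < 1e-12

a = IFSElement(mu=0.6, nu=0.3)   # genuinely intuitionistic: pi ≈ 0.1
b = IFSElement(mu=0.6, nu=0.4)   # reduces to an ordinary fuzzy membership of 0.6
print(a.pi, a.is_ordinary_fuzzy())
print(b.pi, b.is_ordinary_fuzzy())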

13.
Design and implementation of an OLAP data mining engine algorithm
OLAP-based data mining is a new direction in the development of data mining. To unify OLAP (online analytical processing) and DM (data mining) so that mining can be performed at different levels of a database or data warehouse, an architecture for an OLAP data mining system is proposed. By studying the characteristics of data mining methods and OLAP operations, as well as the construction and materialization of data cubes, traditional DM algorithms are improved, and an algorithm better suited to an OLAP data mining engine is designed and implemented.

14.
Application of fuzzy time series mining to fuzzy modeling of complex systems
To address the problem of fuzzy modeling in complex industrial processes, a time-series-based fuzzy quantitative data mining method is proposed, and its application to the structure identification of fuzzy logic inference models for complex systems is discussed. The method is built on a database of historical system data and provides a good solution to the problem of establishing an initial model when fuzzy modeling multi-input multi-output (MIMO) nonlinear complex industrial processes. Finally, the application of the method to the modeling of an alumina clinker sintering rotary kiln is discussed, where good on-site operation results were achieved.

15.
Image subband coding using fuzzy inference and adaptive quantization
Wavelet image decomposition generates a hierarchical data structure to represent an image. Recently, a new class of image compression algorithms has been developed that exploits dependencies between the hierarchical wavelet coefficients using zerotrees. This paper deals with a fuzzy inference filter for image entropy coding that chooses significant coefficients and zerotree roots in the higher-frequency wavelet subbands. Moreover, an adaptive quantization is proposed to improve the coding performance. Evaluated on standard test images, the proposed approaches are comparable or superior to most state-of-the-art coders. Based on the fuzzy energy judgment, the proposed approaches also achieve excellent performance in combined image compression and watermarking applications.

16.
A framework for the development of a decision support system (DSS) that exhibits uncommonly transparent rule-based inference logic is introduced. A DSS is constructed by marrying a statistically based fuzzy inference system (FIS) with a user interface, allowing drill-down exploration of the underlying statistical support, providing transparent access to both the rule-based inference as well as the underlying statistical basis for the rules. The FIS is constructed through a "pattern discovery" based analysis of training data. Such an analysis yields a rule base characterized by simple explanations for any rule or data division in the extracted knowledge base. The reliability of a fuzzy inference is well predicted by a confidence measure that determines the probability of a correct suggestion by examination of values produced within the inference calculation. The combination of these components provides a means of constructing decision support systems that exhibit a degree of transparency beyond that commonly observed in supervised-learning-based methods. A prototype DSS is analyzed in terms of its workflow and usability, outlining the insight derived through use of the framework. This is demonstrated by considering a simple synthetic data example and a more interesting real-world example application with the goal of characterizing patients with respect to risk of heart disease. Specific input data samples and corresponding output suggestions created by the system are presented and discussed. The means by which the suggestions made by the system may be used in a larger decision context is evaluated.

17.
Data grids are middleware systems that offer secure shared storage of massive scientific datasets over wide area networks. The main challenge in their design is to provide reliable storage, search, and transfer of numerous or large files over geographically dispersed heterogeneous platforms. The Storage Resource Broker (SRB) is an example of a system that provides these services and that has been deployed in multiple high-performance scientific projects during the past few years. In this paper, we take a detailed look at several of its functional features and examine its scalability using synthetic and trace-based workloads. Unlike traditional file systems, SRB uses a commodity database to manage both system- and user-defined metadata. We quantitatively evaluate this decision and draw insightful conclusions about its implications to the system architecture and performance characteristics. We find that the bulk transfer facilities of SRB demonstrate good scalability properties, and we identify the bottleneck resources across different data search and transfer tasks. We examine the sensitivity to several configuration parameters and provide details about how different internal operations contribute to the overall performance.

18.
Elicitation of classification rules by fuzzy data mining
Data mining techniques can be used to find potentially useful patterns in data and to ease the knowledge acquisition bottleneck in building prototype rule-based systems. Based on the partition methods of the simple-fuzzy-partition-based method (SFPBM) proposed by Hu et al. (Comput. Ind. Eng. 43(4) (2002) 735), the aim of this paper is to propose a new fuzzy data mining technique consisting of two phases to find fuzzy if–then rules for classification problems: one to find frequent fuzzy grids by using a pre-specified simple fuzzy partition method to divide each quantitative attribute, and the other to generate fuzzy classification rules from the frequent fuzzy grids. To improve the classification performance of the proposed method, the adaptive rules proposed by Nozaki et al. (IEEE Trans. Fuzzy Syst. 4(3) (1996) 238) are incorporated to adjust the confidence of each classification rule. With respect to classification generalization ability, simulation results on the iris data demonstrate that the proposed method can effectively derive fuzzy classification rules from training samples.
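A minimal sketch of the two-phase idea on the iris data, under assumptions: a fixed three-set triangular partition per attribute stands in for the SFPBM partition, rule confidence is computed from fuzzy support, and the adaptive confidence adjustment of Nozaki et al. is omitted:

import numpy as np
from itertools import product
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
X = X[:, 2:4]                      # petal length and width only, to keep the grid small
N_SETS = 3                         # "low", "medium", "high" per attribute

centers = [np.linspace(X[:, j].min(), X[:, j].max(), N_SETS) for j in range(X.shape[1])]
widths  = [(X[:, j].max() - X[:, j].min()) / (N_SETS - 1) for j in range(X.shape[1])]

def mu(j, s, v):
    # triangular membership of value v in fuzzy set s of attribute j
    return np.clip(1.0 - np.abs(v - centers[j][s]) / widths[j], 0.0, 1.0)

# Phase 1 + 2: scan every fuzzy grid, keep "frequent" ones, attach majority class and confidence
rules = []
for grid in product(range(N_SETS), repeat=X.shape[1]):
    strength = np.prod([mu(j, s, X[:, j]) for j, s in enumerate(grid)], axis=0)
    support = strength.sum()
    if support < 1.0:              # crude frequency threshold on fuzzy support
        continue
    per_class = np.array([strength[y == c].sum() for c in range(3)])
    rules.append((grid, int(per_class.argmax()), per_class.max() / support))

def classify(x):
    # winner-takes-all: the rule with the highest (compatibility * confidence) decides
    scores = [(np.prod([mu(j, s, x[j]) for j, s in enumerate(grid)]) * conf, cls)
              for grid, cls, conf in rules]
    return max(scores)[1]

pred = np.array([classify(x) for x in X])
print(len(rules), "rules, training accuracy:", round(float((pred == y).mean()), 3))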

19.
Distributed data mining on grids: services, tools, and applications
Data mining algorithms are widely used today for the analysis of large corporate and scientific datasets stored in databases and data archives. Industry, science, and commerce fields often need to analyze very large datasets maintained over geographically distributed sites by using the computational power of distributed and parallel systems. The grid can play a significant role in providing an effective computational support for distributed knowledge discovery applications. For the development of data mining applications on grids we designed a system called Knowledge Grid. This paper describes the Knowledge Grid framework and presents the toolset provided by the Knowledge Grid for implementing distributed knowledge discovery. The paper discusses how to design and implement data mining applications by using the Knowledge Grid tools starting from searching grid resources, composing software and data components, and executing the resulting data mining process on a grid. Some performance results are also discussed.

20.
Tailored energy efficiency campaigns that make use of household-specific information can trigger substantial energy savings in the residential sector. The information required for such campaigns, however, is often missing. We show that utility companies can extract that information from smart meter data using machine learning. We derive 133 features from smart meter and weather data and use the Random Forest classifier, which allows us to recognize 19 household classes related to 11 household characteristics (e.g., electric heating, size of dwelling) with an accuracy of up to 95% (69% on average). The results indicate that even datasets with an hourly or daily resolution are sufficient to impute key household characteristics with decent accuracy and that data from different seasons of the year does not considerably influence the classification performance. Furthermore, we demonstrate that a small training data set consisting of only 200 households already achieves good performance. Our work may serve as a benchmark for upcoming, similar research on smart meter data and provide guidance for practitioners estimating the effort of implementing such analytics solutions.
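A minimal sketch of the classification step, with synthetic data standing in for the 133 engineered features and a made-up binary characteristic standing in for the real household classes (the feature construction from raw meter readings is omitted):

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)
n_households, n_features = 200, 133               # the paper reports good results from ~200 households
X = rng.normal(size=(n_households, n_features))   # stand-ins for consumption statistics, ratios, peaks, ...
# Hypothetical binary characteristic, e.g. "electric heating present", weakly encoded in a few features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n_households) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
# Feature importances indicate which engineered features drive a given characteristic
print("top features:", np.argsort(clf.feature_importances_)[::-1][:5])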
