Similar documents
20 similar documents found (search time: 140 ms)
1.
With the help of relative entropy theory, norm theory, and bootstrap methodology, a new hypothesis-testing method is proposed to verify reliability under a three-parameter Weibull distribution. Based on the relative difference information between the experimental and theoretical reliability value vectors, six criteria of minimum weighted relative-entropy norm are established to extract the optimal information vector of the Weibull parameters in product-lifetime reliability experiments. The rejection region used in the hypothesis test is derived from the area of the intersection of the estimated truth-value function and its confidence-interval function for the three-parameter Weibull distribution. Case studies on simulated lifetimes, helicopter component failures, and ceramic material failures indicate that the proposed method reflects the practical situation of the reliability experiment.
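The bootstrap component of such a procedure can be sketched in a few lines. The following toy example is not the authors' six-criterion method; the statistic (sample mean lifetime), sample size, confidence level, and Weibull parameters are illustrative assumptions. It builds a percentile-bootstrap confidence interval from simulated Weibull lifetimes:

```python
import numpy as np

def bootstrap_ci(samples, stat=np.mean, n_boot=2000, alpha=0.10, seed=0):
    """Percentile-bootstrap confidence interval for a lifetime statistic."""
    rng = np.random.default_rng(seed)
    n = len(samples)
    stats = np.array([stat(rng.choice(samples, size=n, replace=True))
                      for _ in range(n_boot)])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

# Simulated lifetimes from a Weibull(shape=2, scale=100) population.
rng = np.random.default_rng(42)
lifetimes = 100.0 * rng.weibull(2.0, size=50)
lo, hi = bootstrap_ci(lifetimes)
```

The interval `[lo, hi]` brackets the resampled statistic; a full reliability test would additionally intersect such confidence bands with the fitted Weibull truth-value function, as the abstract describes.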

2.
This paper formulates the problem of increasing the authenticity of information transfer and presents control methods and algorithms based on statistical and structural information redundancy. The controlled information is assumed to be submitted as text-element images containing redundancy caused by statistical relations and the non-uniform probability distribution of the transmitted data. Statistical redundancy makes it possible to develop adaptive authenticity-control rules that account for the non-stationarity of image data during information transfer. The structural redundancy inherent to the image container in a data-transfer package is used to develop new authenticity-control rules based on pattern-recognition mechanisms. The proposed techniques are applied to estimate authenticity within the structure of data-transfer packages. A comparative analysis of the developed methods and algorithms shows improved efficiency in terms of the probability of undetected errors, labor input, and implementation cost.

3.
This paper briefly introduces the principle of Principal Component Analysis (PCA). Then, based on information-entropy theory and the inherent properties of the eigenvalues of the data's correlation matrix, the second information function, the second information entropy, and the geometry entropy under PCA are first proposed, by which the information features of PCA are quantified. In addition, two new concepts, Information Rate (IR) and Accumulated Information Rate (AIR), are proposed to describe the degree of information-feature extraction achieved by PCA. Finally, simulated practical applications show that the proposed method is efficient and satisfactory. It provides a new research approach to information-feature extraction for pattern recognition, machine learning, data mining, and related fields.
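A minimal sketch of the eigenvalue-based rates described above, assuming IR is each correlation-matrix eigenvalue's share of the eigenvalue total and AIR its cumulative sum (the function name and test data are invented for illustration):

```python
import numpy as np

def information_rates(X):
    """Information Rate (IR) and Accumulated IR (AIR) of PCA, computed
    from the eigenvalues of the correlation matrix of the data columns."""
    R = np.corrcoef(X, rowvar=False)        # correlation matrix of features
    eigvals = np.linalg.eigvalsh(R)[::-1]   # eigenvalues, descending
    ir = eigvals / eigvals.sum()            # each component's information share
    air = np.cumsum(ir)                     # accumulated information rate
    return ir, air

# Four strongly correlated columns: the first component should carry
# almost all of the information.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 1))
X = np.hstack([base + 0.1 * rng.normal(size=(200, 1)) for _ in range(4)])
ir, air = information_rates(X)
```

With highly redundant columns, `ir[0]` is close to 1 and `air` reaches 1 at the last component, which is how AIR indicates how many components suffice.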

4.
Social networking service (SNS) applications are changing the way information spreads in online communities. As real social relationships are projected into SNS applications, word of mouth has become an important factor in the information spreading processes of those applications. By assuming each user needs a cost to accept some specific information, this paper studies the initial "seed user" selection strategy to maximize information spreading in a social network with a cost budget. The main contributions of this paper are: 1) proposing a graphic SEIR model (gSEIR) by extending the epidemic compartmental model to simulate the dynamic information spreading process between individuals in the social network; 2) proposing a formal definition for the influence maximization problem with limit cost (IMLC) in social networks, and proving that this problem can be transformed to the weighted set-cover problem (WSCP) and thus is NP-Complete; 3) providing four different greedy algorithms to solve the IMLC problem; 4) proposing a heuristic algorithm based on the method of Lagrange multipliers (HILR) for the same problem; 5) presenting two sets of experiments to test the proposed models and algorithms. In the first part, we verify that gSEIR can generate similar macro-behavior to an SIR model for the information spreading process in an online community by combining the micro-behaviors of all the users in that community, and that gSEIR can also simulate the dynamic change process of the statuses of all the individuals in the corresponding social networks during the information spreading process. In the second part, by applying the simulation result from gSEIR as the prediction of information spreading in the given social network, we test the effectiveness and efficiency of all provided algorithms for the influence maximization problem with cost limit. The results show that the heuristic algorithm HILR performs best on the IMLC problem.
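The set-cover view of seed selection suggests a simple cost-benefit greedy rule: repeatedly pick the affordable user with the best marginal-coverage-per-cost ratio. The toy sketch below is not the paper's gSEIR simulation or HILR algorithm; the coverage sets, costs, and budget are made up, and each user's influence is simplified to a fixed set of users it can activate:

```python
def greedy_seed_selection(coverage, cost, budget):
    """Cost-aware greedy for budgeted influence maximization, treating each
    user's influence as the set of users it can activate (set-cover view)."""
    seeds, covered, spent = [], set(), 0.0
    candidates = set(coverage)
    while True:
        best, best_ratio = None, 0.0
        for u in candidates:
            if spent + cost[u] > budget:
                continue                    # cannot afford this seed
            gain = len(coverage[u] - covered)
            ratio = gain / cost[u]          # marginal coverage per unit cost
            if ratio > best_ratio:
                best, best_ratio = u, ratio
        if best is None:                    # nothing affordable adds coverage
            return seeds, covered
        seeds.append(best)
        covered |= coverage[best]
        spent += cost[best]
        candidates.remove(best)

coverage = {"a": {"a", "b", "c"}, "b": {"b", "d"},
            "c": {"c", "e", "f", "g"}, "d": {"d"}}
cost = {"a": 2.0, "b": 1.0, "c": 3.0, "d": 1.0}
seeds, covered = greedy_seed_selection(coverage, cost, budget=5.0)
```

On this toy instance the greedy rule first takes the cheap high-ratio user "b", then "c", and stops when the remaining budget cannot buy new coverage.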

5.
The polygonal approximation problem is a primary problem in computer graphics, pattern recognition, CAD/CAM, etc. In R^2, the cone intersection method (CIM) is one of the most efficient algorithms for approximating polygonal curves. Using the CIM, Eu and Toussaint, by imposing an additional constraint and changing the given error criterion, solved the three-dimensional weighted minimum-number polygonal approximation problem with the parallel-strip error criterion (PS-WMN) under the L2 norm. In this paper, without any additional constraint or change of error criterion, a CIM solution to the same problem with the line-segment error criterion (LS-WMN) is presented, which is encountered more frequently than the PS-WMN. Its time complexity is O(n^3), and its space complexity is O(n^2). An approximation algorithm is also presented, which takes O(n^2) time and O(n) space. Results on several examples illustrate the efficiency of these algorithms.

6.
Predicting the response variables of the target dataset is one of the main problems in machine learning. Predictive models are desired to perform satisfactorily in a broad range of target domains. However, that may not be plausible if there is a mismatch between the source and target domain distributions. The goal of domain adaptation algorithms is to solve this issue and deploy a model across different target domains. We propose a method based on kernel distribution embedding and Hilbert-Schmidt independence criterion (HSIC) to address this problem. The proposed method embeds both source and target data into a new feature space with two properties: 1) the distributions of the source and the target datasets are as close as possible in the new feature space, and 2) the important structural information of the data is preserved. The embedded data can be in lower dimensional space while preserving the aforementioned properties and therefore the method can be considered as a dimensionality reduction method as well. Our proposed method has a closed-form solution and the experimental results show that it works well in practice.
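The HSIC used here as a dependence measure has a standard biased empirical estimator, tr(KHLH)/(n-1)^2 with centered Gram matrices. A small sketch with RBF kernels; the bandwidth and test data are illustrative assumptions, not the paper's setup:

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    """Gaussian (RBF) Gram matrix of the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    """Biased empirical HSIC: tr(K H L H) / (n - 1)^2."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    K, L = rbf_gram(X, sigma), rbf_gram(Y, sigma)
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 1))
dep = hsic(x, x + 0.1 * rng.normal(size=(100, 1)))   # dependent pair
indep = hsic(x, rng.normal(size=(100, 1)))           # independent pair
```

A dependent pair yields a clearly larger HSIC value than an independent one, which is why maximizing HSIC against labels (or minimizing it against domain indicators) steers the learned embedding.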

7.
An optimal adaptive H-infinity tracking control design via wavelet network
In this paper, an optimal adaptive H-infinity tracking control design method via wavelet network for a class of uncertain nonlinear systems with external disturbances is proposed to achieve H-infinity tracking performance. First, an alternate tracking error and a performance index with respect to the tracking error and the control effort are introduced in order to obtain better performance, especially in reducing the cost of the control effort in the case of small attenuation levels. Next, H-infinity tracking performance, which attenuates the influence of both wavelet network approximation error and external disturbances on the modified tracking error, is formulated. Our results indicate that a small attenuation level does not lead to a large control signal. The proposed method ensures an optimal trade-off between the amplitude of control signals and the performance of tracking errors. An example is given to illustrate the design efficiency.

8.
This paper focuses on the online distributed optimization problem based on multi-agent systems. In this problem, each agent can only access its own cost function and a convex set, and can only exchange local state information with its current neighbors through a time-varying digraph. In addition, the agents do not have access to the information about the current cost functions until decisions are made. Different from most existing works on online distributed optimization, here we consider the case where the cost functions are strongly pseudoconvex and real gradients of the cost functions are not available. To handle this problem, a random gradient-free online distributed algorithm involving a multi-point gradient estimator is proposed. Of particular interest is that under the proposed algorithm, each agent only uses the estimated gradient information instead of the real gradient information to make decisions. Dynamic regret is employed to measure the performance of the proposed algorithm. We prove that if the cumulative deviation of the minimizer sequence grows within a certain rate, then the expectation of the dynamic regret increases sublinearly. Finally, a simulation example is given to corroborate the validity of our results.
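A randomized multi-point gradient estimator of the kind referred to above can be sketched as follows: average directional finite differences along random unit directions, using only function evaluations. The scaling, direction count, and test function below are illustrative choices, not the paper's exact estimator:

```python
import numpy as np

def estimate_gradient(f, x, delta=1e-4, n_dirs=2000, rng=None):
    """Gradient-free estimate from function values only: average directional
    finite differences along random unit directions. Since E[u u^T] = I/d,
    scaling by the dimension d removes the bias as delta -> 0."""
    if rng is None:
        rng = np.random.default_rng(0)
    d = len(x)
    g = np.zeros(d)
    for _ in range(n_dirs):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)              # random unit direction
        g += (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u
    return g * d / n_dirs

f = lambda z: float(np.sum(z ** 2))         # smooth convex test objective
x = np.array([1.0, -2.0, 0.5])
g_est = estimate_gradient(f, x)
g_true = 2 * x                              # exact gradient, for comparison
```

The estimate converges to the true gradient as the number of sampled directions grows, which is what lets each agent make descent-like updates without ever evaluating a real gradient.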

9.
In this study, a two-hop wireless sensor network with multiple relay nodes is considered where the amplify-and-forward (AF) scheme is employed. Two algorithms are presented to jointly consider interference suppression and power allocation (PA) based on the minimization of the symbol error rate (SER) criterion. A stochastic gradient (SG) algorithm is developed on the basis of the minimum-SER (MSER) criterion to jointly update the parameter vectors that allocate the power levels among the relay sensors subject to a total power constraint and the linear receiver. In addition, a conjugate gradient (CG) algorithm is developed on the basis of the SER criterion. A centralized algorithm is designed at the fusion center. Destination nodes transmit the quantized information of the PA vector to the relay nodes through a limited-feedback channel. The complexity and convergence analysis of the proposed algorithms is carried out. Simulation results show that the two proposed adaptive algorithms significantly outperform previously reported algorithms.

10.
Rule selection has long been a challenging problem in developing rule-based knowledge learning systems. Many methods have been proposed to evaluate the eligibility of a single rule against some criterion. However, a knowledge learning system usually contains a set of rules. These rules are not independent but interactive; they tend to affect each other and form a rule system. In such cases it is no longer reasonable to isolate each rule from the others for evaluation: a rule that is best according to some criterion is not always best for the whole system. Furthermore, the real-world data from which people want to build their learning systems are often ill-defined and inconsistent, so the completeness and consistency criteria for rule selection are no longer essential. This paper proposes some ideas for solving the rule-selection problem in a systematic way. These ideas have been applied in the design of a Chinese business-card layout analysis system and achieved good results on a training set of 425 images. The implementation of the system and the results are presented in this paper.

11.
Generalized information potential criterion for adaptive system training
We have previously proposed the quadratic Renyi's error entropy as an alternative cost function for supervised adaptive system training. An entropy criterion instructs the minimization of the average information content of the error signal rather than merely trying to minimize its energy. In this paper, we propose a generalization of the error entropy criterion that enables the use of any order of Renyi's entropy and any suitable kernel function in density estimation. It is shown that the proposed entropy estimator preserves the global minimum of actual entropy. The equivalence between global optimization by convolution smoothing and the convolution by the kernel in Parzen windowing is also discussed. Simulation results are presented for time-series prediction and classification where experimental demonstration of all the theoretical concepts is presented.
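The quadratic Renyi entropy estimator with Gaussian Parzen windows has a compact closed form: the negative log of the pairwise kernel sum (the "information potential"), evaluated with doubled kernel variance because the kernels convolve. A sketch under an assumed bandwidth and illustrative test data:

```python
import numpy as np

def renyi2_entropy(x, sigma=0.5):
    """Parzen-window estimate of quadratic Renyi entropy for 1-D samples:
    H2 = -log( (1/N^2) * sum_ij G(x_i - x_j; 2*sigma^2) )."""
    x = np.asarray(x, dtype=float)
    d2 = (x[:, None] - x[None, :]) ** 2
    s2 = 2 * sigma ** 2                     # convolved kernel variance
    gauss = np.exp(-d2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
    return float(-np.log(gauss.mean()))     # mean over all N^2 pairs

rng = np.random.default_rng(0)
tight = renyi2_entropy(rng.normal(0.0, 0.5, 500))   # concentrated errors
wide = renyi2_entropy(rng.normal(0.0, 2.0, 500))    # spread-out errors
```

A more spread-out error distribution gives a larger estimated entropy, which is exactly the quantity an entropy-based training criterion drives down.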

12.
郭振华, 岳红, 王宏. 《计算机仿真》, 2005, 22(11): 91-94
Principal component analysis (PCA) and principal-component neural networks based on the minimum mean-square error are effective multivariate statistical techniques for dimensionality reduction; the principal components they extract carry the maximum variance of the system. An approximate model of a non-Gaussian stochastic system should carry the system's maximum information entropy, but capturing the maximum variance does not necessarily capture the maximum information entropy. This paper proposes a nonlinear principal-component neural network model that takes the minimum residual entropy as a general criterion, and gives an approximate entropy computation based on Parzen-window density estimation together with a network learning algorithm. It then shows, from an information-theoretic viewpoint, that for Gaussian stochastic systems the principal-component networks learned under the minimum-residual-entropy and minimum-MSE criteria are consistent. Finally, simulations verify the effectiveness of the method, and the results are compared with those of minimum-MSE PCA and principal-component neural networks.

13.
Building on an analysis of the limitations of conventional self-tuning PID control in air-conditioning systems, this paper points out that using the error and its rate of change as the criteria for adjusting PID parameters is the root cause of conventional self-tuning PID control failing to achieve satisfactory results. A waveform-analysis method is therefore introduced to compute the performance indices of the control process, namely overshoot and settling time, and using these performance indices as the criteria for PID parameter adjustment is proposed to overcome that limitation. The relationship between the performance indices and the PID controller parameters is analyzed, fuzzy control rules for PID tuning based on the indices are given, a mathematical model of a fuzzy PID control system for an air-handling unit is built on those rules, and simulations are carried out. The simulation results show that this waveform-analysis fuzzy self-tuning PID control handles plants with large pure time delays, such as air-conditioning systems, well, achieves good performance indices, and thus yields higher economic benefit.
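The two waveform-analysis indices, overshoot and settling time, can be computed directly from a recorded step response. A sketch on a synthetic under-damped second-order response; the plant parameters, 2% settling band, and sampling grid are illustrative assumptions, not the paper's air-conditioning model:

```python
import numpy as np

def step_performance(t, y, setpoint, band=0.02):
    """Overshoot and settling time of a step response: the two indices
    the waveform-analysis method feeds back into PID tuning."""
    overshoot = max(0.0, (float(np.max(y)) - setpoint) / setpoint)
    outside = np.flatnonzero(np.abs(y - setpoint) > band * setpoint)
    if outside.size == 0:
        return overshoot, float(t[0])       # never leaves the band
    idx = int(outside[-1]) + 1              # first index after last excursion
    t_settle = float(t[idx]) if idx < len(t) else float("inf")
    return overshoot, t_settle

# Analytic unit-step response of a second-order plant (zeta=0.4, wn=2).
t = np.linspace(0.0, 10.0, 2001)
zeta, wn = 0.4, 2.0
wd = wn * np.sqrt(1 - zeta ** 2)
y = 1 - np.exp(-zeta * wn * t) / np.sqrt(1 - zeta ** 2) \
        * np.sin(wd * t + np.arccos(zeta))
overshoot, t_settle = step_performance(t, y, setpoint=1.0)
```

For zeta = 0.4 the theoretical overshoot is about 25%, and the response settles into the 2% band after roughly 4 to 5 seconds; these are the numbers a fuzzy rule base would map to PID parameter corrections.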

14.
Recent publications have proposed various information-theoretic learning (ITL) criteria based on Renyi's quadratic entropy with nonparametric kernel-based density estimation as alternative performance metrics for both supervised and unsupervised adaptive system training. These metrics, based on entropy and mutual information, take into account higher order statistics unlike the mean-square error (MSE) criterion. The drawback of these information-based metrics is the increased computational complexity, which underscores the importance of efficient training algorithms. In this paper, we examine familiar advanced-parameter search algorithms and propose modifications to allow training of systems with these ITL criteria. The well known algorithms tailored here for ITL include various improved gradient-descent methods, conjugate gradient approaches, and the Levenberg-Marquardt (LM) algorithm. Sample problems and metrics are presented to illustrate the computational efficiency attained by employing the proposed algorithms.

15.
Selection of generative models in classification
This paper is concerned with the selection of a generative model for supervised classification. Classical criteria for model selection assess the fit of a model rather than its ability to produce a low classification error rate. A new criterion, the Bayesian entropy criterion (BEC), is proposed. This criterion takes into account the decisional purpose of a model by minimizing the integrated classification entropy. It provides an interesting alternative to the cross-validated error rate, which is computationally expensive. The asymptotic behavior of the BEC criterion is presented. Numerical experiments on both simulated and real data sets show that BEC performs better than the BIC criterion at selecting a model that minimizes the classification error rate, and provides performance analogous to the cross-validated error rate.

16.
Uncertainty is introduced at the moment of multispectral data acquisition in remote sensing, and it grows and propagates through the processing, transmission, and classification stages, affecting the quality of the extracted information. Classification performance is usually evaluated with criteria such as accuracy and reliability, but these criteria cannot show the exact quality and certainty of the classification results. Unlike correctness, no dedicated criterion has been proposed for evaluating the certainty and uncertainty of classification results; criteria such as RMSE that are used for this purpose are sensitive to error variations rather than uncertainty variations. This study proposes entropy as a dedicated criterion for visualizing and evaluating the uncertainty of the results. The paper follows the uncertainty problem through the multispectral data classification process; in addition to entropy, several other uncertainty criteria are introduced and applied in order to evaluate classification performance.
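Per-sample Shannon entropy of the class posteriors gives exactly such an uncertainty measure: it is maximal for a uniform posterior and near zero for a confident one, independently of which class wins. A minimal sketch (the example posteriors are invented for illustration):

```python
import numpy as np

def prediction_entropy(probs):
    """Shannon entropy (bits) of class-posterior vectors along the last
    axis; higher values flag more uncertain classifications."""
    p = np.clip(probs, 1e-12, 1.0)          # guard log of zero
    return -np.sum(p * np.log2(p), axis=-1)

confident = np.array([0.97, 0.01, 0.01, 0.01])   # near-certain pixel
uncertain = np.array([0.25, 0.25, 0.25, 0.25])   # maximally ambiguous pixel
h_conf = float(prediction_entropy(confident))
h_unc = float(prediction_entropy(uncertain))
```

Applied per pixel over a classified scene, this entropy yields an uncertainty map that RMSE-style criteria cannot provide, since two pixels with the same winning class can carry very different entropies.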

17.
Feature extraction using information-theoretic learning
A classification system typically consists of both a feature extractor (preprocessor) and a classifier. These two components can be trained either independently or simultaneously. The former option has an implementation advantage since the extractor need only be trained once for use with any classifier, whereas the latter has an advantage since it can be used to minimize classification error directly. Certain criteria, such as minimum classification error, are better suited for simultaneous training, whereas other criteria, such as mutual information, are amenable for training the feature extractor either independently or simultaneously. Herein, an information-theoretic criterion is introduced and is evaluated for training the extractor independently of the classifier. The proposed method uses nonparametric estimation of Renyi's entropy to train the extractor by maximizing an approximation of the mutual information between the class labels and the output of the feature extractor. The evaluations show that the proposed method, even though it uses independent training, performs at least as well as three feature extraction methods that train the extractor and classifier simultaneously.

18.
曾安, 郑齐弥. 《计算机科学》, 2016, 43(8): 249-253
Conventional training of deep belief networks (DBNs) uses the reconstruction error as the evaluation index of the RBM; it reflects, to some extent, the network's likelihood on the training samples, but it is not reliable. The maximal information coefficient (MIC) measures the correlation between two attributes and retains attributes with high relevance; it is also robust and not easily affected by outliers, so it can serve as a network evaluation index. A deep belief network method based on MIC is therefore proposed: on one hand, MIC is used to reduce the dimensionality of the data as a preprocessing step, improving the fit between the data and the network and lowering the classification error; on the other hand, MIC replaces the unreliable reconstruction error as the network's evaluation standard. Classification experiments on the handwritten-digit datasets MNIST and USPS with both the conventional method and the MIC-based deep belief network method show that the MIC-based method effectively improves the recognition rate.

19.
Robust diffusion adaptive estimation algorithms based on the maximum correntropy criterion (MCC), including adapt-then-combine MCC and combine-then-adapt MCC, are developed to deal with distributed estimation over networks in impulsive (long-tailed) noise environments. The cost functions used in distributed estimation are in general based on the mean square error (MSE) criterion, which is desirable when the measurement noise is Gaussian. In non-Gaussian situations, especially the impulsive-noise case, MCC-based methods may achieve much better performance than MSE methods because they take into account higher-order statistics of the error distribution. The proposed methods can also outperform the robust diffusion least mean p-power (DLMP) and diffusion minimum error entropy (DMEE) algorithms. The mean and mean-square convergence analysis of the new algorithms is also carried out.
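The core of an MCC-based adaptive update is that a Gaussian kernel of the error multiplies the step, so impulsive samples produce almost no weight change. The sketch below is a stand-alone LMS-style filter trained under MCC, not the paper's diffusion (networked) variants; the step size, kernel width, and data model are assumptions:

```python
import numpy as np

def mcc_lms(X, d, mu=0.05, sigma=2.0):
    """LMS-style adaptive filter under the maximum correntropy criterion:
    the factor exp(-e^2 / 2*sigma^2) shrinks updates triggered by
    impulsive (large-error) samples, unlike plain MSE-driven LMS."""
    w = np.zeros(X.shape[1])
    for x, dk in zip(X, d):
        e = dk - w @ x
        w += mu * np.exp(-e ** 2 / (2 * sigma ** 2)) * e * x
    return w

rng = np.random.default_rng(0)
w_true = np.array([0.5, -1.0, 2.0])
X = rng.normal(size=(3000, 3))
noise = rng.normal(scale=0.05, size=3000)
# 5% of samples are hit by large impulses (long-tailed noise).
impulses = (rng.random(3000) < 0.05) * rng.normal(scale=20.0, size=3000)
d = X @ w_true + noise + impulses
w_hat = mcc_lms(X, d)
```

Despite the impulses, the recovered weights stay close to `w_true`; with a plain MSE update the same impulses would repeatedly knock the weights away.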

20.
Non-line-of-sight (NLOS) error is one of the main factors degrading LTE terminal positioning accuracy. To address it, an iterative positioning algorithm that mitigates this error is proposed: error coefficients are introduced to reconstruct the observed time difference of arrival (OTDOA) measurements, and iterative computation finds an optimal set of error coefficients that reduces the impact of NLOS error. The algorithm requires no prior information about the channel environment, and its computational load can be reduced through hierarchical refinement. Simulation results show that the algorithm effectively reduces positioning error in NLOS environments.
