Similar Documents
10 similar documents found
1.
A Constant-Stress Accelerated Life Test Model and Its Application: The Weibull Distribution
Accelerated life testing is one of the basic methods in reliability testing. This paper examines whether the data from an accelerated life test follow a Weibull distribution, and from this derives the acceleration factor and the accelerated life equation. Applying the Weibull distribution model to data obtained at different stress levels, point and interval estimates are computed for the unknown parameters, and finally the reliability characteristics of the product under the normal stress level are estimated.
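As a rough illustration of the per-stress-level Weibull fitting the abstract describes, the sketch below fits a two-parameter Weibull distribution to hypothetical failure times at two stress levels with SciPy; the data, the stress labels, and the crude acceleration-factor comparison are assumptions for illustration, not the paper's procedure.

```python
# A minimal sketch, assuming hypothetical failure-time data (hours).
import numpy as np
from scipy import stats

failure_times = {
    "stress_high": [120, 150, 180, 210, 260],
    "stress_mid":  [300, 380, 450, 520, 610],
}

for level, times in failure_times.items():
    # Fix the location at 0 so only the shape and scale are estimated.
    shape, _, scale = stats.weibull_min.fit(times, floc=0)
    print(f"{level}: shape = {shape:.2f}, characteristic life = {scale:.1f} h")

# If the shape parameter is similar across stress levels, the ratio of the
# estimated scale parameters gives a crude acceleration factor.
```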

2.
The online marketplace, which takes the form of an “open market” in which very large numbers of buyers and sellers participate, occupies a rapidly growing position in e-commerce, and sellers are consequently investing more and more in online advertising. Hence, there is a growing need to identify the effectiveness of online advertising in online marketplaces such as eBay.com. However, existing models of online advertising effectiveness cannot be applied directly to click-through data from online marketplaces. There is therefore a need for a model that estimates the effectiveness of online advertising in an online marketplace while accounting for its particular characteristics. In this paper, we develop an analytical Bayesian approach to modeling click-through data by employing the Poisson-gamma distribution. Our results have implications for measuring online advertising effectiveness and may help guide advertisers' decision-making.
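A minimal sketch of the conjugate Poisson-gamma update that underlies this kind of click-through model, assuming hypothetical prior hyperparameters and daily click counts; the paper's actual model is more elaborate.

```python
# Conjugate Poisson-gamma update: Gamma prior on the click rate, Poisson counts.
import numpy as np

alpha0, beta0 = 2.0, 1.0                          # assumed Gamma prior hyperparameters
daily_clicks = np.array([3, 5, 2, 4, 6, 1, 3])    # hypothetical click counts per day

alpha_post = alpha0 + daily_clicks.sum()          # posterior shape
beta_post = beta0 + len(daily_clicks)             # posterior rate

print("posterior mean click rate:", alpha_post / beta_post)
print("posterior variance:", alpha_post / beta_post**2)
```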

3.
The objective of this work was to investigate the factors influencing the quality of empirical Bayes estimates (EBEs) of individual random effects of a mixed-effects Markov model for ordered categorical data. It was motivated by an attempt to develop a model-based dose adaptation tool for clinical use in colorectal cancer patients receiving capecitabine, which induces severe hand-and-foot syndrome (HFS) toxicity in more than half of the patients. This simulation-based study employed a published mixed-effects model for HFS. The quality of EBEs was assessed in terms of accuracy and precision, as well as shrinkage. Three optimization algorithms were compared: simplex, quasi-Newton and adaptive random search. The investigated factors were the amount of data per patient, the distribution of categories within patients, the magnitude of the inter-individual variability, and the values of the effect model parameters. The main factors affecting the quality of EBEs were the values of the parameters governing the dose–response relationship and the within-subject distribution of categories. For the chosen HFS toxicity model, the accuracy and precision of EBEs were rather low, and therefore the feasibility of their use for individual model-based dose adaptation seemed limited.
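For readers unfamiliar with the shrinkage diagnostic mentioned above, here is a minimal sketch of the usual eta-shrinkage computation on simulated random effects; the variance and the attenuation factor below are illustrative assumptions, not values from the published HFS model.

```python
# Eta-shrinkage: 1 - SD(EBE) / omega, where omega is the population SD of the
# random effect. Sparse individual data pulls EBEs toward the population mean.
import numpy as np

omega = 0.5                                    # assumed population SD of the random effect
rng = np.random.default_rng(0)
true_eta = rng.normal(0.0, omega, size=200)    # simulated individual effects
ebe_eta = 0.6 * true_eta                       # hypothetical shrunken EBEs

shrinkage = 1.0 - np.std(ebe_eta, ddof=1) / omega
print(f"eta-shrinkage: {shrinkage:.0%}")       # high values flag unreliable EBEs
```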

4.
To address the scalability and flexibility problems of data distribution in mass storage systems, an efficient data distribution algorithm is proposed. The algorithm adopts the storage idea of consistent hashing and maps physical storage nodes with a "bisection" mapping scheme; it abandons the per-node routing-table maintenance of the Chord algorithm and achieves direct routing in O(1) time. It also applies the idea of "differential approximation" to achieve a uniform data distribution. Experimental results show that the TTD algorithm distributes data independently of the data itself, and that the closer the number of physical nodes is to 2^N (N>0), the more uniform the distribution becomes; otherwise, virtual nodes can be introduced to ensure a uniform distribution. The algorithm improves the uniformity of data distribution in mass storage systems and effectively optimizes overall system performance.
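The sketch below illustrates generic consistent hashing with virtual nodes, the storage idea the TTD algorithm builds on; the node names, the number of virtual nodes, and the MD5-based ring are assumptions for illustration and do not reproduce the paper's "bisection" mapping or its O(1) routing structure.

```python
# Generic consistent-hash ring with virtual nodes (illustrative only).
import bisect
import hashlib

def h(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes, vnodes=64):
        # Each physical node is hashed to `vnodes` positions on the ring to
        # even out the distribution of keys across nodes.
        self.ring = sorted((h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
        self.keys = [k for k, _ in self.ring]

    def locate(self, key: str) -> str:
        # Route the key to the first ring position clockwise from its hash.
        i = bisect.bisect(self.keys, h(key)) % len(self.ring)
        return self.ring[i][1]

ring = Ring(["node-a", "node-b", "node-c"])
print(ring.locate("object-42"))
```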

5.
With the help of relative entropy theory, norm theory, and bootstrap methodology, a new hypothesis testing method is proposed to verify reliability with a three-parameter Weibull distribution. Based on the relative difference information of the experimental value vector with respect to the theoretical value vector of reliability, six criteria of the minimum weighted relative entropy norm are established to extract the optimal information vector of the Weibull parameters in the reliability experiment of product lifetime. The rejection region used in the hypothesis testing is deduced via the area of the intersection set of the estimated truth-value function and its confidence interval function of the three-parameter Weibull distribution. The case studies of simulated lifetimes, helicopter component failure, and ceramic material failure indicate that the proposed method is able to reflect the practical situation of the reliability experiment.
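As a simplified illustration of interval construction for Weibull reliability, the sketch below bootstraps the three-parameter Weibull MLE of R(t) = P(T > t) on hypothetical lifetimes; it does not reproduce the weighted relative-entropy norm criteria proposed in the paper.

```python
# Bootstrap interval for the reliability R(t0) under a three-parameter Weibull
# fit, using hypothetical simulated lifetimes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
lifetimes = stats.weibull_min.rvs(1.8, loc=50, scale=200, size=60, random_state=rng)

t0, reps = 150.0, 200
boot_R = []
for _ in range(reps):
    sample = rng.choice(lifetimes, size=lifetimes.size, replace=True)
    c, loc, scale = stats.weibull_min.fit(sample)        # shape, location, scale
    boot_R.append(stats.weibull_min.sf(t0, c, loc, scale))

lo, hi = np.percentile(boot_R, [2.5, 97.5])
print(f"bootstrap 95% interval for R({t0:g}): [{lo:.3f}, {hi:.3f}]")
```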

6.
Clustering of data in an uncertain environment can result in different partitions of the data at different points in time. Therefore, the clusters initially formed from non-stationary data can adapt over time, which means that feature vectors associated with different clusters can follow different migration types to and from other clusters. This paper investigates different data migration types and proposes a technique to generate artificial non-stationary data that follows different migration types. Furthermore, the paper proposes clustering performance measures that are more suitable for measuring clustering quality in a non-stationary environment than the performance measures used for stationary environments. These proposed performance measures are then used to compare the clustering results of three network-based artificial immune models, since the adaptability and self-organising behaviour of the natural immune system inspired the modelling of network-based artificial immune models for clustering of non-stationary data.

7.
In reliability analysis, accelerated life-testing allows for the gradual increase of stress levels on test units during an experiment. In a special class of accelerated life tests known as step-stress tests, the stress levels increase discretely at pre-fixed time points, which allows the experimenter to obtain information on the parameters of the lifetime distributions more quickly than under normal operating conditions. Moreover, when a test unit fails, there is often more than one fatal cause of the failure, such as mechanical or electrical. In this article, we consider the simple step-stress model under a time constraint when the lifetime distributions of the different risk factors are independently exponentially distributed. Under this setup, we derive the maximum likelihood estimators (MLEs) of the unknown mean parameters of the different causes under the assumption of a cumulative exposure model. Since the MLEs do not exist when there is no failure by a particular risk factor within the specified time frame, the exact sampling distributions of the MLEs are derived through the use of conditional moment generating functions. Using these exact distributions as well as the asymptotic distributions, the parametric bootstrap method, and the Bayesian posterior distribution, we discuss the construction of confidence intervals and credible intervals for the parameters. Their performance is assessed through Monte Carlo simulations, and finally we illustrate the methods of inference discussed here with an example.
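A minimal sketch of the exponential competing-risks MLE the abstract refers to: under independent exponential causes, the estimated mean life for cause j is the total time on test divided by the number of failures attributed to that cause, and the estimator does not exist when that cause produces no failures within the test time. The data below are hypothetical, and the step-stress cumulative-exposure bookkeeping is omitted.

```python
# Exponential competing-risks MLE: theta_j_hat = (total time on test) / n_j.
import numpy as np

# Hypothetical data: units censored at 100.0 have cause code 0 (no failure).
times  = np.array([12.0, 35.0, 48.0, 77.0, 100.0, 100.0])
causes = np.array([1, 2, 1, 1, 0, 0])

total_time_on_test = times.sum()
for j in (1, 2):
    n_j = (causes == j).sum()
    if n_j == 0:
        print(f"cause {j}: MLE does not exist (no observed failures)")
    else:
        print(f"cause {j}: mean-life MLE = {total_time_on_test / n_j:.1f}")
```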

8.
An approach to transform continuous data into finite-dimensional data is briefly outlined. A model to reduce the dimension of the finite-dimensional data is developed for the case when the covariance matrices are not necessarily equal. Necessary and sufficient conditions with respect to the spatial properties of the means and covariance matrices are given so that the linear transformation of data from higher dimensions to lower dimensions does not increase the probabilities of misclassification.
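The question studied in this abstract can also be probed empirically: the sketch below simulates two Gaussian classes with unequal covariance matrices and compares the quadratic-discriminant misclassification rate before and after a linear projection. The means, covariances, and projection matrix are illustrative assumptions, not the paper's conditions.

```python
# Compare empirical misclassification before and after a linear projection,
# using the Gaussian quadratic discriminant rule with equal priors.
import numpy as np

rng = np.random.default_rng(2)
mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 0.0])
S1 = np.array([[1.0, 0.3], [0.3, 1.0]])
S2 = np.array([[2.0, 0.0], [0.0, 0.5]])     # unequal covariance matrices

x1 = rng.multivariate_normal(mu1, S1, 5000)
x2 = rng.multivariate_normal(mu2, S2, 5000)

def qda_error(a, b, m1, m2, C1, C2):
    """Empirical error of the quadratic discriminant rule on samples a, b."""
    def score(x, m, C):
        d = x - m
        return (-0.5 * np.einsum("ij,jk,ik->i", d, np.linalg.inv(C), d)
                - 0.5 * np.log(np.linalg.det(C)))
    e1 = np.mean(score(a, m1, C1) < score(a, m2, C2))   # class-1 errors
    e2 = np.mean(score(b, m2, C2) < score(b, m1, C1))   # class-2 errors
    return 0.5 * (e1 + e2)

A = np.array([[1.0, 0.0]])                  # projection onto the first axis
print("full-dimension error:", qda_error(x1, x2, mu1, mu2, S1, S2))
print("projected error:     ", qda_error(x1 @ A.T, x2 @ A.T,
                                          A @ mu1, A @ mu2,
                                          A @ S1 @ A.T, A @ S2 @ A.T))
```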

9.
Rapid growth in social networks (SNs) presents a unique scalability challenge for SN operators because of the massive amounts of data distributed among a large number of concurrent online users. A request from any user may trigger hundreds of server activities to generate a customized page, which has already become a huge burden. Based on a theoretical model and an analytical study considering realistic network scenarios, this article proposes a hybrid P2P-based architecture called PAIDD. PAIDD fulfills effective data distribution primarily through P2P connectivity and the social graph among users, but with the help of central servers. To increase system efficiency, PAIDD performs optimized content prefetching based on social interactions among users. PAIDD chooses interaction as the criterion because a user's interaction graph is measured to be much smaller than the social graph. Our experiments confirm that PAIDD ensures a satisfactory user experience without incurring extensive overhead on clients' networks. More importantly, PAIDD can effectively achieve one order of magnitude of load reduction at central servers.

10.
Object-oriented technologies are playing increasingly important roles at every level of software applications for water resource management and modelling, except at the data management level, where relational logic is still the uncontested choice of information system developers despite the object–relational impedance mismatch. In this paper, we present our experience with two different technologies for developing the object-oriented data management layer in information systems for water resources management: (i) the Java solution for transparent persistence, the Java Data Object (JDO) technology; and (ii) a purer object solution with a light open-source object database, Db4o. The process of implementing the two technologies in a Java-based hydro-information system is described, and the two solutions are analysed and compared.

