Similar Literature
20 similar records found (search time: 15 ms)
1.
2.
Reliability and unreliability are both important indicators for measuring system performance, but in some cases studying the unreliability of a system yields results that are superior and more concise in form, and in those cases the system's unreliability is the quantity of interest. Since studying the fuzzy unreliability of general parallel systems on the basis of Posbist fuzzy reliability theory yields better results, this paper focuses on the fuzzy unreliability of general parallel systems. Based on the fundamental theory of Posbist fuzzy reliability, the relationship between fuzzy variables and fuzzy events is proved, the fuzzy unreliability of a component (or system) is defined, and, for the case of non-repairable and mutually independent components, the relationships between the unreliability of a general parallel system and the unreliabilities of its components, between the system unreliability and the component membership functions, and between the system membership function and the component membership functions are derived and proved. With this explicit theoretical foundation, computer techniques can be used to study the performance of relatively complex series-parallel and similar systems.

3.
In defect prediction studies, open-source and real-world defect data sets are frequently used. The quality of these data sets is one of the main factors affecting the validity of defect prediction methods. One such quality issue is repeated data points in defect prediction data sets. The main goal of the paper is to explore how low-level metrics are derived. This paper also presents a cleansing algorithm that removes repeated data points from defect data sets. The method was applied to 20 data sets, including five open-source sets, and the area under the curve (AUC) and precision performance parameters improved by 4.05% and 6.7%, respectively. In addition, this work discusses how static code metrics should be used in bug prediction. The study provides tips for obtaining better defect prediction results.
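As a rough illustration of the cleansing step this abstract describes, here is a minimal Python sketch that drops repeated data points from a defect data set. The file name and the assumption that the set is a flat CSV of static code metrics plus a label column are hypothetical; the paper's actual algorithm may differ.

```python
# Minimal sketch: remove repeated data points from a defect data set.
# Assumes a flat CSV whose columns are static code metrics plus a
# defect label; file and column layout are hypothetical.
import pandas as pd

def remove_repeated_points(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)
    # Rows with identical metric vectors and label add no information
    # for a learner, so keep only the first occurrence of each.
    return df.drop_duplicates()

cleaned = remove_repeated_points("defects.csv")
print(f"rows after cleansing: {len(cleaned)}")
```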

4.
Stalking computer bugs, that is to say, finding errors in computer hardware and software, occupies and has occupied much of the time and ingenuity of the people who design, build, program and use computers. The author considers the origin of the word bug. From at least the time of Thomas Edison, U.S. engineers have used the word bug to refer to flaws in the systems they developed. This short word conveniently covered a multitude of possible problems. It also suggested that difficulties were small and could be easily corrected. IBM engineers who installed the ASCC Mark I at Harvard University in 1944 taught the phrase to the staff there. Grace Murray Hopper used the word with particular enthusiasm in documents relating to her work. In 1947, when technicians building the Mark II computer at Harvard discovered a moth in one of the relays, they saved it as the first actual case of a bug being found.

5.
Exploiting user feedback to compensate for the unreliability of user models   Cited: 1 (self-citations: 1, citations by others: 0)
Natural language is a powerful medium for interacting with users, and sophisticated computer systems using natural language are becoming more prevalent. Just as human speakers show an essential, inbuilt responsiveness to their hearers, computer systems must tailor their utterances to users. Recognizing this, researchers devised user models and strategies for exploiting them in order to enable systems to produce the best answer for a particular user.

Because these efforts were largely devoted to investigating how a user model could be exploited to produce better responses, systems employing them typically assumed that a detailed and correct model of the user was available a priori, and that the information needed to generate appropriate responses was included in that model. However, in practice, the completeness and accuracy of a user model cannot be guaranteed. Thus, unless systems can compensate for incorrect or incomplete user models, the impracticality of building user models will prevent much of the work on tailoring from being successfully applied in real systems. In this paper, we argue that one way for a system to compensate for an unreliable user model is to be able to react to feedback from users about the suitability of the texts it produces. We also discuss how such a capability can actually alleviate some of the burden now placed on user modeling. Finally, we present a text generation system that employs whatever information is available in its user model in an attempt to produce satisfactory texts, but is also capable of responding to the user's follow-up questions about the texts it produces.

Dr. Johanna D. Moore holds interdisciplinary appointments as an Assistant Professor of Computer Science and as a Research Scientist at the Learning Research and Development Center at the University of Pittsburgh. Her research interests include natural language generation, discourse, expert system explanation, human-computer interaction, user modeling, intelligent tutoring systems, and knowledge representation. She received her MS and PhD in Computer Science, and her BS in Mathematics and Computer Science, from the University of California at Los Angeles. She is a member of the Cognitive Science Society, ACL, AAAI, ACM, IEEE, and Phi Beta Kappa. Readers can reach Dr. Moore at the Department of Computer Science, University of Pittsburgh, Pittsburgh, PA 15260.

Dr. Cecile Paris is the project leader of the Explainable Expert System project at USC's Information Sciences Institute. She received her PhD and MS in Computer Science from Columbia University (New York) and her bachelor's degree from the University of California at Berkeley. Her research interests include natural language generation and user modeling, discourse, expert system explanation, human-computer interaction, intelligent tutoring systems, machine learning, and knowledge acquisition. At Columbia University, she developed a natural language generation system capable of producing multi-sentential texts tailored to the user's level of expertise about the domain. At ISI, she has been involved in designing a flexible explanation facility that supports dialogue for an expert system shell. Dr. Paris is a member of the Association for Computational Linguistics (ACL), the American Association for Artificial Intelligence (AAAI), the Cognitive Science Society, ACM, IEEE, and Phi Kappa Phi. Readers can reach Dr. Paris at USC/ISI, 4676 Admiralty Way, Marina Del Rey, California, 90292.

6.
Compositional data naturally appear in many fields of application. For instance, in chemistry, the relative contributions of different chemical substances to a product are typically described in terms of a compositional data vector. Although the aggregation of compositional data frequently arises in practice, the functions formalizing this process do not fit the standard order-based aggregation framework. This is due to the fact that there is no intuitive order that carries the semantics of the set of compositional data vectors (referred to as the standard simplex). In this paper, we consider the more general betweenness-based aggregation framework that yields a natural definition of an aggregation function for compositional data. The weighted centroid is proved to fit within this definition and discussed to be linked to a very tangible interpretation. Other functions for the aggregation of compositional data are presented and their fit within the proposed definition is discussed.
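To make the weighted centroid tangible, a minimal sketch follows. It takes the centroid to be the convex combination of the input vectors, which provably stays on the standard simplex; whether this coincides exactly with the paper's definition is an assumption.

```python
# Minimal sketch: weighted centroid of compositional data vectors,
# taken here as a convex combination (weights normalized to sum to 1),
# which keeps the result on the standard simplex.
import numpy as np

def weighted_centroid(X: np.ndarray, w: np.ndarray) -> np.ndarray:
    """X: (n, d) array of rows on the simplex; w: (n,) nonnegative weights."""
    w = w / w.sum()   # normalize weights
    return w @ X      # convex combination of the rows

X = np.array([[0.2, 0.5, 0.3],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
w = np.array([1.0, 2.0, 1.0])
c = weighted_centroid(X, w)
print(c, c.sum())  # components still sum to 1
```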

7.
We consider the following problem in this paper: assume that numerical data (like salaries of individuals) are stored in a database and some subsums of these numbers are made public, or are simply available to persons not eligible to learn the original data. Our motivating question is: at most how many of these subsums may be disclosed such that none of the numbers can be uniquely determined from these sums? Problems of this type arise when certain tasks concerning a database are carried out by subcontractors who are not eligible to learn the elements of the database, but who naturally must be given some data to fulfill their task. In database theory such examples are called statistical databases, as they are used for statistical purposes and no individual data are supposed to be obtainable using a restricted list of SUM queries. This problem was originally introduced in [1], first solved by Miller et al. [7] and revisited by Griggs [4, 5]. It was shown in [7] how many subsums of a given set of secure data may be disclosed without disclosing at least one of the data, and that this upper bound is sharp. Calculating a subsum may require some operations, whose number is limited; it is therefore natural to assume that the disclosed subsums of the original elements of the database contain only a limited number of elements, say at most a given bound. The goal of the present paper is to determine the maximum number of subsums of bounded size which can be disclosed without making it possible to calculate any of the individual data. The maximum is determined exactly for the case when the number of data is much larger than the size restriction.
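One concrete way to read the compromise condition is linear-algebraic: writing the disclosed subsums as rows of a 0/1 matrix A, the datum x_i is uniquely determined exactly when the basis vector e_i lies in the row space of A. The rank-based check below is a hedged sketch of that reading, not the paper's combinatorial argument.

```python
# Sketch: which individual data are determined by a set of disclosed
# subsums? x_i is determined iff e_i lies in the row space of the 0/1
# disclosure matrix A, i.e. iff appending e_i leaves the rank unchanged.
import numpy as np

def compromised_indices(A: np.ndarray) -> list[int]:
    n = A.shape[1]
    base_rank = np.linalg.matrix_rank(A)
    out = []
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        if np.linalg.matrix_rank(np.vstack([A, e])) == base_rank:
            out.append(i)
    return out

# Disclosing x0+x1 and x1+x2 over three data reveals no single datum.
A = np.array([[1, 1, 0],
              [0, 1, 1]], dtype=float)
print(compromised_indices(A))  # -> []
```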

8.
Two SeaWinds radar scatterometers operated in tandem for 9 months in 2003, enabling resolution of the diurnal cycle in Greenland. This dataset provides unprecedented temporal resolution for Ku-band scattering observations of snow and ice melt conditions. As a step towards improved radar-based melt intensity estimation, a simple Markov melt–thaw model is developed to estimate melt and refreeze indices. The melt indices model is evaluated with the aid of a simple geophysical–electromagnetic model and validated by comparing tandem SeaWinds observations against automated weather station data. The new approach is used to analyse the melt conditions over the Greenland ice-sheet in 2003. The strengths and limitations of the approach are considered.
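As a toy version of the two-state melt–refreeze idea, the sketch below flags melt when Ku-band backscatter drops well below a dry-snow winter reference and lets the state persist under weaker evidence. The 3 dB threshold and the hysteresis rule are illustrative assumptions, not the paper's calibrated model.

```python
# Toy two-state (melt / frozen) Markov classification from daily
# Ku-band backscatter. Threshold and hysteresis are illustrative.
import numpy as np

def melt_refreeze_indices(sigma0, winter_ref, drop_db=3.0):
    """sigma0: daily backscatter (dB); winter_ref: dry-snow winter mean."""
    melt = np.zeros(len(sigma0), dtype=bool)
    for t in range(len(sigma0)):
        strong = sigma0[t] < winter_ref - drop_db     # clear melt evidence
        weak = sigma0[t] < winter_ref - drop_db / 2   # marginal evidence
        # Markov step: enter melt on strong evidence; once melting,
        # weak evidence suffices to stay in the melt state.
        melt[t] = strong or (t > 0 and melt[t - 1] and weak)
    refreeze_events = int(np.sum(melt[:-1] & ~melt[1:]))  # melt -> frozen
    return int(melt.sum()), refreeze_events               # melt days, refreezes
```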

9.
A data model and an access method for summary data management are presented. Summary data, represented as a trinary tuple 〈statistical function, category, summary〉, are metaknowledge summarized by a statistical function of a category of individual information typically stored in a conventional database. For instance, 〈average-income, female engineer with 10 years' experience and master's degree, $45000〉 is a summary datum. The computational complexity of the derivability problem has been found intractable in general, and the proposed summary data model, enforcing the disjointness constraint, alleviates this intractability without loss of information. In order to store, manage, and access summary data, a multidimensional access method called the summary data (SD) tree is proposed. By preserving the category hierarchy, the SD tree provides efficient operations, including summary data search, derivation, insertion, and deletion.
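For concreteness, the trinary tuple itself is easy to mirror in code; a minimal sketch using the paper's own example follows (the class is illustrative, not the paper's data model).

```python
# Minimal sketch of a summary datum <statistical function, category,
# summary>; field types are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class SummaryDatum:
    function: str   # e.g. "average-income"
    category: str   # description of the summarized population
    summary: float  # the aggregated value

d = SummaryDatum(
    "average-income",
    "female engineer with 10 years' experience and master's degree",
    45_000.0,
)
print(d)
```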

10.
From a quality of service viewpoint, the transmission packet unreliability and transmission time are both critical performance indicators in a computer system when assessing the Internet quality for supervisors and customers. A computer system is usually modelled as a network topology where each branch denotes a transmission medium and each vertex represents a station of servers. Almost every branch has multiple capacities/states due to failure, partial failure, maintenance, etc. This type of network is known as a multi-state computer network (MSCN). This paper proposes an efficient algorithm that computes the system reliability, i.e., the probability that a specified amount of data can be sent through k (k ≥ 2) disjoint minimal paths within both the tolerable packet unreliability and time threshold. Furthermore, two routing schemes are established in advance to indicate the main and spare minimal paths to increase the system reliability (referred to as spare reliability). Thus, the spare reliability can be readily computed according to the routing scheme.

11.
Summary. The class of data dependencies is a class of first-order sentences that seems suitable for expressing semantic constraints on relational databases. We deal with the question of which classes of databases are axiomatizable by data dependencies. (A class of databases is said to be axiomatizable by sentences of a certain kind if there exists a set of sentences of that kind whose models are exactly the databases in that class.) Our results characterize, by algebraic closure conditions, the classes of databases that are axiomatizable by dependencies of different kinds. Our technique is model-theoretic, and the characterization easily entails all previously known results on axiomatizability by dependencies.

Research partially supported by Swiss National Science Foundation Grant No. 82.820.0.80 (1980–1982); revision of the paper was done while visiting the Mathematisches Forschungsinstitut at the Swiss Federal Institute of Technology (Summer 1985). Part of the research reported here was done while the author was at Stanford University, supported by a Weizmann Post-Doctoral Fellowship and AFOSR grant 80-0212.

12.
A new approach to problems of clustering and classification of multidimensional pictorial data is presented. Proceeding logically from simple models and assumptions, the development of a clustering technique and program is described. Some tests of the program have been performed, and this work is reported. The techniques make use of information from the spatial domain.

13.
14.
Obtaining an agricultural drought index using solely remotely sensed products has numerous benefits over its in situ counterparts, for instance when a country does not have the resources to implement an in situ ground network. One such index, created by Rhee et al. (2010), uses a combination of precipitation data from the Tropical Rainfall Measuring Mission (TRMM) with land-surface temperature (LST) data and vegetation indices (VIs) from the Moderate Resolution Imaging Spectroradiometer (MODIS) to assess drought conditions. With TRMM data no longer available (as of mid-2015), this study sought to test precipitation data from the Climate Prediction Center (CPC) Morphing (CMORPH) Technique over the study period of January 2003–September 2014, in order to take the place of the TRMM data set in a drought severity index (DSI). This study also attempted to refine the methodology using the quasi-climatological anomalies (short-term climatological anomalies) of each parameter within the DSI. We validated the results of the DSI against in situ percentage available water (PAW) data from a soil water balance (SWB) model over the country of Uruguay. The results of the DSI correlated well with the PAW over the warmer months (October–March) of the year, with average r-values ranging from 0.74 to 0.81, but underperformed during the colder months (April–September), with average r-values ranging from 0.38 to 0.50. This underperformance is due to the fact that precipitation during this season continues to have high variability, whereas PAW stays relatively constant. Spatially, the DSI correlates well over the majority of the country, with the possible exception of underperformance near the coastal area in the southeastern portion of the country. Ultimately, this research has the ability to aid Uruguay in better drought monitoring and mitigation practices as well as emergency aid resource allocation.
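A hedged sketch of the quasi-climatological anomaly step, and of combining the anomalies into an index, follows. The use of z-scores against a calendar-day climatology, the sign conventions, and the equal weights are illustrative assumptions, not the paper's calibrated DSI.

```python
# Sketch: quasi-climatological (short-term climatological) anomalies
# and a toy drought index. Weights and signs are illustrative.
import pandas as pd

def quasi_climatological_anomaly(s: pd.Series) -> pd.Series:
    """z-score of a daily parameter against its calendar-day climatology."""
    g = s.groupby([s.index.month, s.index.day])
    return (s - g.transform("mean")) / g.transform("std")

def toy_dsi(precip_anom, lst_anom, vi_anom, w=(1/3, 1/3, 1/3)):
    # Drought = low precipitation, high land-surface temperature,
    # low vegetation, so LST enters with a negative sign.
    return w[0] * precip_anom - w[1] * lst_anom + w[2] * vi_anom
```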

15.
16.
17.
Wu, Xiaoxue; Zheng, Wei; Pu, Minchao; Chen, Jie; Mu, Dejun. Software Quality Journal, 2020, 28(1): 195-220
Software Quality Journal - Symptoms of software aging include performance degradation and increasing failure occurrence when software systems run for a period of time. Therefore, software aging is...

18.
IEEE Software, 2004, 21(1): 100-103
Bug tracking systems give developers a unique and clear view into users' everyday product experiences. Add some statistical analysis, and software teams can efficiently improve product quality. It's hard to tell precisely how well the error reporting system is working, but it seems to be a bug-fighting weapon that has landed a permanent spot in Microsoft's arsenal. Automated bug tracking, combined with statistical reporting, plays a key role for developers at the Mozilla Foundation, best known for its open-source Web browser and email software. The sparse, random sampling approach produces enough data for the team to do what it calls "statistical debugging": bug detection through statistical analysis.
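As a loose illustration of the "statistical debugging" intuition described here (predicates seen disproportionately often in failing runs are suspicious), a small hedged sketch follows; the scoring is a simplified stand-in, not Microsoft's or Mozilla's actual pipeline.

```python
# Toy statistical debugging: rank instrumented predicates by how much
# more often they appear in failing runs than the base failure rate.
def suspiciousness(runs):
    """runs: list of (observed_predicates: set[str], failed: bool)."""
    base_rate = sum(failed for _, failed in runs) / len(runs)
    preds = set().union(*(p for p, _ in runs))
    scores = {}
    for pred in preds:
        outcomes = [failed for p, failed in runs if pred in p]
        scores[pred] = sum(outcomes) / len(outcomes) - base_rate
    return sorted(scores.items(), key=lambda kv: -kv[1])

runs = [({"ptr_null", "len_gt_0"}, True),
        ({"ptr_null"}, True),
        ({"len_gt_0"}, False)]
print(suspiciousness(runs))  # "ptr_null" ranks highest
```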

19.
In this paper, we consider an unknown semi-coherent structure function. Our main focus is the inductive inference problem, that is, how to learn the structure function from data which partially defines the function. We develop a set of algorithms and simulate their success in learning an arbitrary 10-component function, and conclude that the algorithms are feasible.

20.
Many of the state-of-the-art classification algorithms for data with linearly ordered attribute domains and a linearly ordered label set insist on the monotonicity of the induced classification rule. Training and evaluation of such algorithms requires the availability of sufficiently general monotone data sets. In this short contribution we introduce an algorithm that allows for the (almost) uniform random generation of monotone data sets based on the Markov Chain Monte Carlo method.
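A toy version of such an MCMC generator is sketched below: start from a trivially monotone labeling, then repeatedly propose a random label change at a random point and accept it only if monotonicity is preserved. With symmetric proposals, this chain's stationary distribution is uniform over monotone labelings; the chain length, label range, and point set are illustrative choices, not the paper's algorithm.

```python
# Toy MCMC generation of a monotone data set: labels on a grid of
# attribute vectors such that x <= y (componentwise) implies
# label(x) <= label(y). Parameters are illustrative.
import random

def dominates(a, b):
    """Componentwise a >= b."""
    return all(x >= y for x, y in zip(a, b))

def random_monotone_labels(points, n_labels=3, steps=20_000, seed=0):
    rng = random.Random(seed)
    labels = {p: 0 for p in points}  # a constant labeling is monotone
    for _ in range(steps):
        p = rng.choice(points)
        new = rng.randrange(n_labels)
        # Accept only if the new label keeps the labeling monotone.
        ok = all(
            (not dominates(q, p) or labels[q] >= new)
            and (not dominates(p, q) or labels[q] <= new)
            for q in points if q != p
        )
        if ok:
            labels[p] = new
    return labels

points = [(i, j) for i in range(3) for j in range(3)]
print(random_monotone_labels(points))
```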
