Similar Documents
20 similar documents were found (search time: 250 ms).
1.
Since the early 1970s, tremendous growth has been seen in research on software reliability growth modeling. In general, software reliability growth models (SRGMs) are applicable to the late stages of testing in software development, and they can provide useful information about how to improve the reliability of software products. A number of SRGMs have been proposed in the literature to represent the time-dependent fault identification/removal phenomenon, and new models are still being proposed that can fit a greater number of reliability growth curves. Often, it is assumed that detected faults are immediately corrected when mathematical models are developed. This assumption may not be realistic in practice, because the time to remove a detected fault depends on the complexity of the fault, the skill and experience of the personnel, the size of the debugging team, the technique used, and so on. Thus, a detected fault need not be immediately removed, and its removal may lag the fault detection process by a delay effect factor. In this paper, we first review how different software reliability growth models have been developed in which the fault detection process depends not only on the residual fault content but also on the testing time, and we show how these models can be reinterpreted as delayed fault detection models by using a delay effect factor. Based on the concept of a power function of testing time, we propose four new SRGMs that assume the presence of two types of faults in the software: leading and dependent faults. Leading faults are those that can be removed upon a failure being observed. Dependent faults, however, are masked by leading faults and can only be removed after the corresponding leading fault has been removed, with a debugging time lag. These models have been tested on real software error data to show their goodness of fit, predictive validity, and applicability.
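As a point of reference only (this is the classical delayed S-shaped SRGM, not one of the four new models proposed in the paper), a constant fault-correction lag is commonly captured by a mean value function of the following form:

```latex
% Classical delayed S-shaped SRGM (illustrative reference form):
% a - initial fault content, b - fault detection/removal rate,
% m(t) - expected number of faults removed by testing time t.
\[
  m(t) \;=\; a\left[\,1 - (1 + b t)\,e^{-b t}\right],
  \qquad a > 0,\; b > 0 .
\]
```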

2.
Failure of a safety-critical system can lead to big losses. Very high software reliability is required for automating the operation of systems such as aircraft controllers and nuclear reactor controller software. Fault-tolerant software is used to increase the overall reliability of software systems. Fault tolerance is achieved using schemes such as fault recovery (the recovery block scheme), fault masking (N-version programming (NVP)), or a combination of both (the hybrid scheme). Such software incorporates the ability of the system to survive even on a failure. Many researchers in the field of software engineering have done excellent work to study the reliability of fault-tolerant systems, but most of them consider stable system reliability, and only a few attempts have been made to model reliability growth for an NVP system. Recently, a model was proposed to analyze the reliability growth of an NVP system incorporating the effect of fault removal efficiency. In that model, a proportion of the number of failures is taken as a measure of fault generation, whereas a more appropriate measure of fault generation would be the proportion of faults removed. In this paper, we first propose a testing efficiency model incorporating the effects of imperfect fault debugging and error generation. Using this model, a software reliability growth model (SRGM) is developed to model the reliability growth of an NVP system. The proposed model is useful for practical applications and can provide measures of debugging effectiveness and of the additional workload or skilled professionals required. It is very important for a developer to determine the optimal release time of the software to improve its performance in terms of competition and cost. In this paper, we also formulate the optimal software release time problem for a 3VP system under a fuzzy environment and discuss a fuzzy optimization technique for solving the problem, with a numerical illustration.
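For illustration, a common way to write an imperfect-debugging / error-generation SRGM (the paper's exact formulation may differ) is:

```latex
% Generic imperfect-debugging / error-generation SRGM (illustrative form):
% p - probability of perfect debugging, alpha - error generation rate,
% a - initial fault content, b - fault detection rate,
% m_r(t) - expected number of faults removed by time t.
\[
  \frac{dm_r(t)}{dt} \;=\; p\,b\,\bigl[a(t) - m_r(t)\bigr],
  \qquad
  a(t) \;=\; a + \alpha\, m_r(t),
\]
% which, with m_r(0) = 0, gives
\[
  m_r(t) \;=\; \frac{a}{1-\alpha}\Bigl[1 - e^{-(1-\alpha)\,p\,b\,t}\Bigr].
\]
```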

3.
In this paper we propose a sufficient condition for minimal routing in 3-dimensional (3-D) meshes with faulty nodes. It is based on earlier work of the author on minimal routing in 2-dimensional (2-D) meshes. Unlike many traditional models that assume all nodes know the global fault distribution or only adjacent fault information, our approach is based on the concept of limited global fault information. First, we propose a fault model called the faulty cube, in which all faulty nodes in the system are contained in a set of faulty cubes. Fault information is then distributed to a limited number of nodes while still being sufficient to support minimal routing. The limited fault information collected at each node is represented by a vector called the extended safety level. The extended safety level associated with a node can be used to determine the existence of a minimal path from this node to a given destination. Specifically, we study the existence of minimal paths at a given source node, limited distribution of fault information, minimal routing, and deadlock-free and livelock-free routing. Our results show that any minimal routing that is partially adaptive can be applied in our model as long as the destination node meets a certain condition. We also propose a dynamic planar-adaptive routing scheme that offers better fault tolerance and adaptivity than the planar-adaptive routing scheme in 3-D meshes. Our approach is the first attempt to address adaptive and minimal routing in 3-D meshes with faulty nodes using limited fault information.

4.
Relevance estimation is one of the core concerns of information retrieval (IR) studies. Although existing retrieval models have gained much success both in deepening our understanding of information seeking behavior and in building effective retrieval systems, we have to admit that these models work in a rather different manner from how humans make relevance judgments. Users' information seeking behaviors involve complex cognitive processes; however, the majority of these behavior patterns are not considered in existing retrieval models. To bridge the gap between practical user behavior and retrieval models, it is essential to systematically investigate user cognitive behavior during relevance judgment and incorporate these heuristics into retrieval models. In this paper, we aim to formally define a set of basic user reading heuristics during relevance judgment and investigate their corresponding modeling strategies in retrieval models. Further experiments are conducted to evaluate the effectiveness of different reading heuristics for improving ranking performance. Based on a large-scale Web search dataset, we find that most reading heuristics can improve the performance of retrieval models, and we establish guidelines for improving the design of retrieval models with human-inspired heuristics. Our study sheds light on building retrieval models from the perspective of cognitive behavior.

5.
Pest insect monitoring and control is crucial to ensure safe and profitable crop growth in all plantation types, as well as to guarantee food quality and limited use of pesticides. We aim at extending traditional monitoring by means of traps by involving the general public in reporting the presence of insects using smartphones. This includes the largely unexplored problem of detecting insects in images taken in non-controlled conditions. Furthermore, pest insects are, in many cases, extremely similar to other species that are harmless, so computer vision algorithms must not be fooled by these similar insects in order not to raise unmotivated alarms. In this work, we study the capabilities of state-of-the-art (SoA) object detection models based on convolutional neural networks (CNNs) for the task of detecting beetle-like pest insects in non-homogeneous images taken outdoors by different sources. Moreover, we focus on disambiguating a pest insect from similar harmless species. We consider not only the detection performance of different models but also the required computational resources. This study aims at providing a baseline model for this kind of task. Our results show the suitability of current SoA models for this application, highlighting how Faster R-CNN with a MobileNetV3 backbone is a particularly good starting point in terms of accuracy and inference latency. This combination provided a mean average precision score of 92.66%, which can be considered qualitatively at least as good as the scores obtained by other authors who adopted more specific models.
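As a minimal sketch of this kind of baseline (assuming torchvision >= 0.13; this is not the authors' training pipeline), the following instantiates a COCO-pretrained Faster R-CNN detector with a MobileNetV3-Large FPN backbone and runs inference on a dummy image:

```python
# Sketch only: pretrained Faster R-CNN + MobileNetV3-Large FPN via torchvision.
import torch
from torchvision.models.detection import fasterrcnn_mobilenet_v3_large_fpn

model = fasterrcnn_mobilenet_v3_large_fpn(weights="DEFAULT")  # COCO-pretrained
model.eval()

# Dummy RGB image tensor in [0, 1]; real use would load a field photo here.
image = torch.rand(3, 480, 640)

with torch.no_grad():
    # The model takes a list of images and returns one dict per image
    # with 'boxes', 'labels', and 'scores'.
    predictions = model([image])

boxes = predictions[0]["boxes"]
scores = predictions[0]["scores"]
print(boxes.shape, scores[:5])
```

For a pest-specific detector, the classification head would normally be replaced and fine-tuned on labeled insect images; the snippet above only shows how the backbone/detector combination is assembled.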

6.
Generating test data that can expose the faults of a program is an important issue in software testing. Although previous path-coverage methods can generate test data that traverse a target path, the test data generated by these methods have difficulty detecting some low-probability faults that lie on the covered paths. In this study, we present a method of generating test data for covering multiple paths to detect faults. First, we transform the problem of covering multiple paths and detecting faults into a constrained multi-objective optimization problem and construct a mathematical model for it. Then, we give a strategy for solving the model based on a weighted genetic algorithm. Finally, we apply our method to several real-world programs and compare it with several other methods. The experimental results confirm that the proposed method can generate test data that not only traverse the target paths but also detect faults lying on them more efficiently than the other methods.
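A minimal sketch of how a weighted fitness function might combine multi-path coverage with fault detection (the weights and helper callables are illustrative assumptions, not the paper's model):

```python
# Hypothetical weighted fitness for a GA over test inputs: reward coverage of
# several target paths plus a bonus when the input triggers a fault.
from typing import Callable, List, Sequence

def path_similarity(executed: Sequence[int], target: Sequence[int]) -> float:
    """Fraction of the target path's branch nodes matched by the executed path."""
    matched = sum(1 for e, t in zip(executed, target) if e == t)
    return matched / max(len(target), 1)

def weighted_fitness(test_input: List[float],
                     run_program: Callable[[List[float]], Sequence[int]],
                     target_paths: List[Sequence[int]],
                     detects_fault: Callable[[List[float]], bool],
                     w_cover: float = 0.7,
                     w_fault: float = 0.3) -> float:
    """Higher is better: average coverage of the target paths + fault detection."""
    executed = run_program(test_input)
    coverage = sum(path_similarity(executed, p) for p in target_paths) / len(target_paths)
    fault_bonus = 1.0 if detects_fault(test_input) else 0.0
    return w_cover * coverage + w_fault * fault_bonus
```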

7.
Markov random fields (MRFs) can be used for a wide variety of vision problems. In this paper we propose a Markov random field (MRF) image segmentation model. The theoretical framework is based on Bayesian estimation via energy optimization. Graph cuts have emerged as a powerful optimization technique for minimizing the energy functions that arise in low-level vision problems. The theorem of Ford and Fulkerson states that the min-cut and max-flow problems are equivalent, so the minimum s/t cut problem can be solved by finding a maximum flow from the source s to the sink t. We adopt a new min-cut/max-flow algorithm that belongs to the group of algorithms based on augmenting paths. We propose a parameter estimation method using the expectation-maximization (EM) algorithm. We also choose a Gaussian mixture model as our image model and model the density associated with each image segment (or class) as a multivariate Gaussian distribution. Characteristic features related to color, texture, and position are extracted for each pixel. Experimental results are provided to illustrate the performance of our method.
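A minimal sketch of the EM-based Gaussian-mixture estimation step on per-pixel color/position features (not the paper's exact pipeline; in the full method the per-class likelihoods would supply the data term of the MRF energy that min-cut/max-flow then minimizes):

```python
# Sketch: EM fitting of a Gaussian mixture over per-pixel features.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
h, w = 64, 64
image = rng.random((h, w, 3))                      # stand-in for an RGB image

# Per-pixel feature vectors: RGB color plus normalized (x, y) position.
ys, xs = np.mgrid[0:h, 0:w]
features = np.column_stack([image.reshape(-1, 3),
                            xs.reshape(-1, 1) / w,
                            ys.reshape(-1, 1) / h])

n_classes = 3                                      # number of segments (assumed)
gmm = GaussianMixture(n_components=n_classes, covariance_type="full",
                      random_state=0).fit(features)

# Maximum-likelihood label per pixel; no MRF smoothness term applied yet.
labels = gmm.predict(features).reshape(h, w)
print(labels.shape, np.unique(labels))
```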

8.
Community smells are sub-optimal developer community structures that hinder productivity. Prior studies performed smell prediction and provided refactoring guidelines from a top-down perspective to help community shepherds. At the same time, refactoring smells also requires bottom-up effort from every developer; however, supportive measures and guidelines for developers are not available at a fine-grained level. Since recent work revealed that developers' personalities and working states can influence the emergence and variation of community smells, we build prediction models with experience, sentiment, and development process features of developers, considering three smells (Organizational Silo, Lone Wolf, and Bottleneck) as well as two related classes (smelly developer and smelly quitter). We predict the five classes at the individual granularity, and we also generate forecasts for the number of smelly developers at the community granularity. The proposed models achieve F-measures ranging from 0.73 to 0.92 in individual-wide within-project, time-wise, and cross-project prediction, and a mean R2 of 0.68 in community-wide smelly developer prediction. We also exploit SHAP (SHapley Additive exPlanations) to assess feature importance and explain our predictors. In conclusion, we suggest that developers with heavy workloads should foster more frequent communication in a straightforward and polite way to build healthier communities, and we recommend that community shepherds use the forecasting model for refactoring planning.
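A minimal sketch of the prediction-plus-SHAP workflow (the feature names and data below are hypothetical placeholders, not the study's feature set):

```python
# Sketch: train a classifier on developer features, then rank features with SHAP.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X = pd.DataFrame({
    "commit_count": rng.integers(0, 500, 200),        # experience feature (assumed)
    "mean_sentiment": rng.normal(0.0, 1.0, 200),       # sentiment feature (assumed)
    "review_latency_days": rng.exponential(3.0, 200),  # process feature (assumed)
})
y = rng.integers(0, 2, 200)                            # 1 = smelly developer (toy labels)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Depending on the shap version this is a list (one array per class) or a 3-D array;
# take the attributions for the positive ("smelly") class either way.
sv = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]
print("mean |SHAP| per feature:", np.abs(sv).mean(axis=0))
```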

9.
The InterPlaNetary Internet (IPN) plays a very important role in the exploitation of space. However, relays in the IPN suffer from high symbol error rates, limited storage space, and limited available energy. To analyze the performance of a relay in the IPN, in this article we build a differential game model in which the control variable is the transmitting rate and the goal is to maximize a payoff that accounts for the symbol error rate, the storage space, and the available energy in the system. By such a design, green relays can be achieved for IPNs, which can prolong the life of the relays. Since the model is not easy to solve directly, we analyze it using the Bellman theorem and derive formulas for the trajectory of the optimal transmitting rate. Finally, extensive simulation results are presented to demonstrate the performance of our proposed model, showing that by using the derived optimal transmitting-rate trajectory, the relay's payoff can be maximized.
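For orientation only, a differential-game/optimal-control formulation of this kind typically takes the following generic shape; the symbols and cost terms here are assumptions for illustration, not the article's exact model:

```latex
% Generic sketch: the relay picks a transmitting rate r(t) to maximize a
% discounted payoff rewarding throughput U(r) and penalizing symbol errors
% P_e(r) and buffer/energy state x(t), subject to a state dynamic f.
\[
  \max_{r(\cdot)} \; J = \int_{0}^{T} e^{-\rho t}
     \Bigl[ U\bigl(r(t)\bigr) - c_e\,P_e\bigl(r(t)\bigr) - c_x\,x(t) \Bigr]\,dt,
  \qquad \dot{x}(t) = f\bigl(x(t), r(t)\bigr).
\]
% The Bellman (HJB) equation then characterizes the optimal rate trajectory:
\[
  \rho V(x) = \max_{r}\Bigl[ U(r) - c_e P_e(r) - c_x x + V'(x)\, f(x, r) \Bigr].
\]
```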

11.
This paper presents a new model of scenarios dedicated to the specification and verification of system behaviours in the context of software product lines (SPL). We draw our inspiration from techniques that are mostly used in the hardware community, and we show how they can be applied to the verification of software components. We point out the benefits of synchronous languages and models in bridging the gap between both worlds.

12.
Performance variability, stemming from nondeterministic hardware and software behaviors or from deterministic behaviors such as measurement bias, is a well-known phenomenon of computer systems. It increases the difficulty of comparing computer performance metrics and is slated to become even more of a concern as interest in Big Data analytics increases. Conventional methods use various measures (such as the geometric mean) to quantify the performance of different benchmarks and compare computers without considering this variability, which may lead to wrong conclusions. In this paper, we propose three resampling methods for performance evaluation and comparison: a randomization test for a general performance comparison between two computers, bootstrapping confidence estimation, and an empirical distribution with a five-number summary for performance evaluation. The results show that, for both PARSEC and the high-variance BigDataBench benchmarks, 1) the randomization test substantially improves our chance of identifying a difference between performance comparisons when the difference is not large; 2) bootstrapping confidence estimation provides an accurate confidence interval for the performance comparison measure (e.g., the ratio of geometric means); and 3) when the difference is very small, a single test is often not enough to reveal the nature of the computer performance due to the variability of computer systems. We further propose using the empirical distribution to evaluate computer performance and a five-number summary to summarize it. We use published SPEC 2006 results to investigate the sources of performance variation by predicting performance and relative variation for 8,236 machines, achieving a correlation of 0.992 for predicted performance and a correlation of 0.5 between predicted and measured relative variation. Finally, we propose the use of a novel biplotting technique to visualize the effectiveness of benchmarks and to cluster machines by behavior. We illustrate the results and conclusions through detailed Monte Carlo simulation studies and real examples.
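A minimal sketch of bootstrapping a confidence interval for the ratio of geometric means, using synthetic run-time data (not the paper's measurements):

```python
# Sketch: percentile bootstrap CI for the ratio of geometric means of two
# computers' benchmark run times (synthetic lognormal data).
import numpy as np

rng = np.random.default_rng(1)
times_a = rng.lognormal(mean=2.0, sigma=0.10, size=30)   # computer A (toy data)
times_b = rng.lognormal(mean=2.1, sigma=0.15, size=30)   # computer B (toy data)

def geo_mean(x):
    return np.exp(np.mean(np.log(x)))

n_boot = 10_000
ratios = np.empty(n_boot)
for i in range(n_boot):
    resample_a = rng.choice(times_a, size=times_a.size, replace=True)
    resample_b = rng.choice(times_b, size=times_b.size, replace=True)
    ratios[i] = geo_mean(resample_a) / geo_mean(resample_b)

lo, hi = np.percentile(ratios, [2.5, 97.5])   # 95% percentile bootstrap CI
print(f"geomean ratio A/B: {geo_mean(times_a)/geo_mean(times_b):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```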

13.
It is well known that 802.11 suffers from both inefficiency and unfairness in the face of competition and interference. This paper provides a detailed analysis of the impact of topology and traffic type on network performance when two flows compete with each other for airspace. We consider both TCP and UDP flows and a comprehensive set of node topologies. We vary these topologies to consider all combinations of the following four node-to-node interactions: (1) nodes unable to read or sense each other, (2) nodes able to sense each other but not able to read each other's packets, and nodes able to communicate with (3) a weak or (4) a strong signal. We evaluate all possible cases through simulation and show that the cases can be reduced to 9 UDP and 10 TCP 802.11g models with similar efficiency/fairness characteristics. We also validate our simulation results with extensive experiments conducted in a laboratory testbed. These more detailed models improve on previous work such as hidden-/exposed-terminal categorization and are thus better suited as a basis for adaptive techniques to improve performance in 802.11 multi-hop WLANs or mesh networks.

14.
15.
It is now commonly accepted that the unit disk graph used to model the physical layer in wireless networks does not reflect real radio transmissions, and that a more realistic model should be considered for experimental simulations. Previous work on realistic scenarios has focused on unicast; however, broadcast requirements are fundamentally different and cannot be derived from the unicast case. Therefore, broadcast protocols must be adapted in order to remain efficient under realistic assumptions. In this paper, we study the well-known multipoint relay broadcast protocol (MPR), in which each node has to choose a set of 1-hop neighbors to act as relays in order to cover its whole 2-hop neighborhood. We give experimental results showing that the original strategy used to select these multipoint relays does not suit a realistic model. On the basis of these results, we propose new selection strategies based solely on link quality. One of the key aspects of our solutions is that they do not require any additional hardware and may be implemented at the application layer, which is particularly relevant in the context of ad hoc and sensor networks where energy savings are mandatory. We finally provide new experimental results that demonstrate the superiority of our strategies under realistic physical assumptions.
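A simplified sketch of a link-quality-aware greedy MPR selection (the paper's actual selection strategies may differ):

```python
# Sketch: repeatedly pick the 1-hop neighbor covering the most still-uncovered
# 2-hop neighbors, with the candidate score weighted by its link quality.
from typing import Dict, Set

def select_mprs(one_hop: Dict[str, Set[str]],
                link_quality: Dict[str, float],
                two_hop: Set[str]) -> Set[str]:
    """one_hop maps each 1-hop neighbor to the 2-hop nodes it reaches;
    link_quality maps each 1-hop neighbor to a quality score in (0, 1]."""
    uncovered = set(two_hop)
    mprs: Set[str] = set()
    while uncovered:
        # Score = (number of newly covered 2-hop nodes) * (link quality).
        best = max(one_hop, key=lambda n: len(one_hop[n] & uncovered) * link_quality[n])
        if not one_hop[best] & uncovered:
            break  # remaining 2-hop nodes unreachable through 1-hop neighbors
        mprs.add(best)
        uncovered -= one_hop[best]
    return mprs

# Toy example: neighbors b, c, d with different coverage and link quality.
print(select_mprs(
    one_hop={"b": {"x", "y"}, "c": {"y", "z"}, "d": {"z"}},
    link_quality={"b": 0.9, "c": 0.5, "d": 0.95},
    two_hop={"x", "y", "z"}))
```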

16.
Complicated global climate problems are driving researchers from different scientific disciplines to link multiphysics simulations, called models, for integrated modeling of climate change using a software framework called earth system modeling (ESM). As its critical component, the coupler is in charge of connections and interactions among models. With the advance of next-generation models, greater data transfer volumes and higher coupling frequencies are expected to put a heavy performance burden on the coupler, so highly efficient coupling techniques are required. In this paper, we propose the sub-domain mapping method to improve parallel coupling, which consists of data transfer and data transformation. By using one specific interpolation-oriented communication routing, the communication operations that are originally decentralized across various steps can be combined for execution, which reduces redundant communications and the entailed synchronization costs. Tests on the Tianhe-1A (TH-1A) supercomputer show that our method can achieve 1.1- to 4.9-fold performance improvements. We also present a further optimization for multi-interpolation cases; the test results show that our method can achieve up to a 3.4-fold speedup over the original coupling execution of the current climate system.

17.
Traditional engineering disciplines such as mechanical and electrical engineering are guided by physical laws. These laws provide the constraints for acceptable engineering solutions by enforcing regularity and thereby limiting complexity, and violations of physical laws can be experienced instantly in the lab. Software engineering is not constrained by physical laws. Consequently, we often create software artifacts that are too complex to be understood, tested, or maintained. Because overly complex software solutions may even work initially, we are tempted to believe that no laws apply; we only learn about the violation of some form of "cognitive laws" late in development or during maintenance, when excessive complexity inflicts follow-up defects or increases maintenance costs. Initial work by Barry Boehm (e.g., CoCoMo) aimed at predicting and controlling software project costs based on estimated software size. Through innovative life cycle process models (e.g., the Spiral model), Barry Boehm also provided the basis for incremental risk evaluation and adjustment of such predictions. The proposal in this paper is to work towards a scientific basis for software engineering by capturing more such time-lagging dependencies among software artifacts in the form of empirical models, thereby making developers aware of so-called "cognitive laws" that must be adhered to. This paper attempts to answer the following questions: why we need software engineering laws and what they could look like; how we have to organize our discipline in order to build up software engineering laws; which such laws already exist and how we could develop further ones; how such laws could contribute to the maturing of the science and engineering of software in the future; and what the remaining challenges are for teaching, research, and practice.

18.
19.
Appropriate comments on code snippets provide insight into code functionality and are helpful for program comprehension. However, due to the great cost of authoring comments, many code projects do not contain adequate comments. Automatic comment generation techniques have been proposed to generate comments from pieces of code in order to alleviate the human effort in annotating code. Most existing approaches attempt to exploit certain correlations (usually manually specified) between code and generated comments, which can easily be violated if coding patterns change, causing the performance of comment generation to decline. In addition, recent approaches ignore the code constructs and treat code snippets like plain text. Furthermore, previous datasets are too small to validate the methods and show their advantage. In this paper, we propose a new attention mechanism called CodeAttention to translate code to comments; it is able to utilize code constructs such as critical statements, symbols, and keywords. By focusing on these specific points, CodeAttention can understand the semantic meaning of code better than previous methods. To verify our approach on a wider range of coding patterns, we build a large dataset from open projects on GitHub. Experimental results on this large dataset demonstrate that the proposed method outperforms existing approaches in both objective and subjective evaluation. We also perform ablation studies to determine the effects of the different parts of CodeAttention.
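A generic sketch of attention over code tokens with an extra bias toward keywords and symbols, illustrating the idea of emphasizing code constructs (this is not the CodeAttention architecture itself, and the bias factor is an assumption):

```python
# Sketch: scaled dot-product attention over toy code-token embeddings, with an
# additive bias for keyword/symbol tokens before the softmax.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
tokens = ["public", "int", "add", "(", "a", ",", "b", ")", "return", "a", "+", "b"]
keywords_or_symbols = {"public", "int", "return", "(", ")", "+", ","}

d = 16
embeddings = rng.normal(size=(len(tokens), d))   # toy token embeddings
query = rng.normal(size=d)                       # decoder state asking "what to describe"

scores = embeddings @ query / np.sqrt(d)         # scaled dot-product scores
bias = np.array([1.0 if t in keywords_or_symbols else 0.0 for t in tokens])
weights = softmax(scores + 0.5 * bias)           # keyword/symbol bias (factor 0.5 assumed)

context = weights @ embeddings                   # context vector for the comment decoder
print({t: round(w, 3) for t, w in zip(tokens, weights)})
```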

20.
MapReduce is a popular framework for large-scale data analysis. As data access is critical to MapReduce's performance, some recent work has applied different storage models, such as column-store or PAX-store, to MapReduce platforms. However, the data access patterns of different queries are very different, and no single storage model is able to achieve optimal performance alone. In this paper, we study how MapReduce can benefit from the presence of two different column-store models: pure column-store and PAX-store. We propose a hybrid storage system called hybrid column-store (HC-store). Based on the characteristics of the incoming MapReduce tasks, our storage model can determine whether to access the underlying pure column-store or PAX-store. We study the properties of the different storage models and create a cost model to decide the data access strategy at runtime. We have implemented HC-store on top of Hadoop. Our experimental results show that HC-store is able to outperform both PAX-store and column-store, especially when confronted with diverse workloads.
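A toy sketch of a cost-model-driven dispatcher choosing between a pure column layout and a PAX layout at runtime (the cost formulas and constants are assumptions for illustration, not HC-store's actual cost model):

```python
# Hypothetical runtime dispatcher: estimate rough I/O cost per layout and pick
# the cheaper one for the incoming task.
from dataclasses import dataclass

@dataclass
class TaskProfile:
    total_columns: int        # columns in the table
    accessed_columns: int     # columns the MapReduce task actually reads
    row_group_size_mb: float  # size of one PAX row group / column chunk

def estimate_cost(profile: TaskProfile, layout: str,
                  seek_ms: float = 5.0, scan_ms_per_mb: float = 10.0) -> float:
    """Very rough I/O cost: seeks to reach the needed data plus sequential scan."""
    frac = profile.accessed_columns / profile.total_columns
    if layout == "column":
        # One seek per accessed column file; scan only the needed columns.
        return profile.accessed_columns * seek_ms + frac * profile.row_group_size_mb * scan_ms_per_mb
    if layout == "pax":
        # One seek per row group, but the whole row group is read.
        return seek_ms + profile.row_group_size_mb * scan_ms_per_mb
    raise ValueError(layout)

def choose_layout(profile: TaskProfile) -> str:
    return min(("column", "pax"), key=lambda l: estimate_cost(profile, l))

# A task touching 2 of 50 columns favors pure column-store; one touching 40 may favor PAX.
print(choose_layout(TaskProfile(50, 2, 64.0)), choose_layout(TaskProfile(50, 40, 64.0)))
```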
