Similar Documents
20 similar documents retrieved (search time: 15 ms)
1.
This paper considers the mapping tasks that can be accomplished using microcomputers and the power and limitations of the present generation of machines. The range of peripheral devices that need to be driven by a microcomputer in mapping applications is discussed. It is argued that in a rapidly changing environment, with hardware regularly offering more power for less money, portable machine-independent software needs to be developed for mapping applications. This needs to take full advantage of the powerful interactive capabilities of microcomputers to provide both skilled and naive users with opportunities for interacting with maps, both at the design and the end-user stages, and in new forms. Such interactive facilities hitherto have been too expensive or too difficult to implement on mainframe computers.

2.
Evaluating and selecting software packages is a complicated and time-consuming decision-making process. Selecting an inappropriate software package can turn out to be costly and can adversely affect business processes and the functioning of the organization. In this paper we describe (i) a generic methodology for software selection, (ii) software evaluation criteria, and (iii) a hybrid knowledge based system (HKBS) approach to assist decision makers in evaluating and selecting software packages. The proposed HKBS approach integrates rule-based and case-based reasoning techniques. Rule-based reasoning is used to capture the user's requirements for the software package and formulate a problem case. Case-based reasoning is used to retrieve candidate software packages and compare them against these requirements. The paper also evaluates and compares the HKBS approach with widely used software evaluation techniques such as the analytic hierarchy process (AHP) and the weighted scoring method (WSM).
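As a rough illustration of the weighted scoring method (WSM) mentioned above, the minimal Python sketch below scores each candidate package as a weighted sum over evaluation criteria; the criteria, weights, and scores are hypothetical and are not taken from the paper.

    # Hypothetical weighted scoring method (WSM) sketch for software package selection.
    # Criteria weights and candidate scores are illustrative, made-up values.
    weights = {"functionality": 0.40, "cost": 0.25, "vendor_support": 0.20, "usability": 0.15}

    candidates = {
        "Package A": {"functionality": 8, "cost": 6, "vendor_support": 7, "usability": 9},
        "Package B": {"functionality": 7, "cost": 9, "vendor_support": 6, "usability": 7},
    }

    def wsm_score(scores, weights):
        # Weighted sum of per-criterion scores; the highest total is the preferred candidate.
        return sum(weights[c] * scores[c] for c in weights)

    for name, scores in candidates.items():
        print(f"{name}: {wsm_score(scores, weights):.2f}")

AHP differs mainly in deriving the weights from pairwise comparisons instead of assigning them directly, while the HKBS approach replaces the scoring step with rule-based and case-based reasoning over the stated requirements.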

3.
Context

Software Process Engineering promotes the systematic production of software by following a set of well-defined technical and management processes. A comprehensive management of these processes involves the accomplishment of a number of activities such as model design, verification, validation, deployment and evaluation. However, the deployment and evaluation activities need more research effort in order to achieve greater automation.

Objective

With the aim of minimizing the time required to adapt the tools at the beginning of each new project and reducing the complexity of constructing mechanisms for automated evaluation, the Software Process Deployment & Evaluation Framework (SPDEF) has been elaborated and is described in this paper.

Method

The proposed framework is based on the application of well-known techniques in Software Engineering, such as Model Driven Engineering and Information Integration through Linked Open Data. It comprises a systematic method for deployment and evaluation, a number of models and relationships between models, and some software tools.

Results

Automated deployment of the OpenUP methodology is tested through the application of the SPDEF framework and support tools to enable the automated quality assessment of software development or maintenance projects.

Conclusions

Making use of the method and the software components developed in the context of the proposed framework, the alignment between the definition of the processes and the supporting tools is improved, while the existing complexity is reduced when it comes to automating the quality evaluation of software processes.

4.
This paper describes a software trace facility (STF) developed to provide data about the flow of control between modules of the IBM System/360 operating system OS/MVT. The motivation for STF is discussed, and a brief introduction to OS/MVT is presented to show how STF interfaces with the operating system. The output of the program is illustrated, and some details of the program logic are discussed together with the tracing options available to the user. The paper then describes some potential applications.

5.
The instruction mix of a CDC CYBER/74 computer in a university environment was monitored, and in this paper frequencies of execution for the most commonly used instructions are given. From these measurements we make a number of observations about several aspects of computing patterns. One observation is that, if we exclude the idle loop of the operating system, the percentage of occurrences of each type of instruction over various time intervals is constant. This fact is used to define a machine-level software profile (MLSP) for the type of machine operations in the given computing environment. It is shown that the MLSP could be used to find machine utilization and the extent to which software takes advantage of the machine architecture, and as a consistent method to improve the performance of a machine configuration.

6.
Agile methods for software development promote iterative design and implementation. Most of them divide a project into functionalities, called user stories; at each iteration, often called a sprint, a subset of user stories is developed. The sprint planning phase is critical to ensure project success, but it is also a difficult problem because several factors affect the optimality of a sprint plan, e.g., the estimated complexity, business value, and affinity of the user stories to be included in each sprint. In this paper we present an approach for sprint planning based on an integer linear programming model. Given the estimates made by the project team and a set of development constraints, the optimal solution of the model is a sprint plan that maximizes the business value perceived by users. Solving the model to optimality with a general-purpose MIP solver, such as IBM ILOG CPLEX, takes time, and for some instances even finding a feasible solution requires computing times too large for operational use. For this reason we propose an effective Lagrangian heuristic based on a relaxation of the proposed model and some greedy and exchange algorithms. Computational results on both real and synthetic projects show the effectiveness of the proposed approach.
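A minimal sketch of the kind of integer linear program described above is given below, using the PuLP modelling library in Python. It is not the authors' exact formulation: the story values, efforts, and sprint capacities are hypothetical, and constraints such as story affinity are omitted.

    # Hypothetical sprint-planning ILP sketch (illustrative data; requires "pip install pulp").
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

    stories = ["US1", "US2", "US3", "US4"]
    value = {"US1": 8, "US2": 5, "US3": 3, "US4": 6}    # estimated business value per story
    effort = {"US1": 5, "US2": 3, "US3": 2, "US4": 4}   # estimated complexity (story points)
    sprints = ["S1", "S2"]
    capacity = {"S1": 8, "S2": 8}                       # team capacity per sprint

    prob = LpProblem("sprint_planning", LpMaximize)
    # x[i, s] = 1 if user story i is scheduled in sprint s.
    x = {(i, s): LpVariable(f"x_{i}_{s}", cat=LpBinary) for i in stories for s in sprints}

    # Objective: maximize the total business value of the scheduled stories.
    prob += lpSum(value[i] * x[i, s] for i in stories for s in sprints)

    # Each story is assigned to at most one sprint.
    for i in stories:
        prob += lpSum(x[i, s] for s in sprints) <= 1

    # Capacity constraint: total effort in a sprint must not exceed the team capacity.
    for s in sprints:
        prob += lpSum(effort[i] * x[i, s] for i in stories) <= capacity[s]

    prob.solve()
    print({s: [i for i in stories if x[i, s].value() == 1] for s in sprints})

The Lagrangian heuristic described in the paper relaxes part of such a model to obtain good plans quickly when an exact MIP solve is too slow.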

7.
An exploration of enterprise technology selection and evaluation (total citations: 1; self-citations: 0; citations by others: 1)
The evaluation-and-selection of enterprise technologies by firms has been said to be largely rational and deterministic. This paper challenges this notion, and puts forward the argument that substantial ceremonial aspects also play an important role. An in-depth, exploratory longitudinal case study of a bank selecting a ubiquitous and pervasive e-mail system was conducted using grounded theory and a hermeneutic [pre] understanding of institutional and decision making theories. Intuition, symbols, rituals, and ceremony all figured prominently in the decision process. However, rather than being in conflict with the rational processes, we found them to be in tension, leading to a more holistic social construction of decision processes. For researchers, this suggests that a focus on process rationality, not outcomes, might lead to a fuller understanding of these critical decisions. For managers, it underscores the importance of understanding the past in order to create the future.

8.
This paper documents a prototype knowledge system which has been developed to select the most appropriate CAD software based on the user's requirements and preferences.

9.

Context

Software productivity measurement is essential in order to control and improve the performance of software development, for example by identifying role models (e.g. projects, individuals, tasks) when comparing productivity data. Prediction is relevant for determining whether corrective actions are needed and for discovering which alternative improvement action would yield the best results.

Objective

In this study we identify studies for software productivity prediction and measurement. Based on the identified studies we first create a classification scheme and map the studies into the scheme (systematic map). Thereafter, a detailed analysis and synthesis of the studies is conducted.

Method

Systematic mapping and systematic review have been used as research methods for systematically identifying and aggregating the evidence on productivity measurement and prediction approaches.

Results

In total 38 studies have been identified, resulting in a classification scheme for empirical research on software productivity. The mapping allowed us to identify the rigor of the evidence with respect to the different productivity approaches. In the detailed analysis the results were tabulated and synthesized to provide recommendations to practitioners.

Conclusion

Risks with simple ratio-based measurement approaches were shown. In response to these problems, data envelopment analysis appears to be a strong approach for capturing multivariate productivity measures, and it allows reference projects to be identified against which inefficient projects should be compared. Regarding simulation, no general prediction model could be identified. Simulation and statistical process control are promising methods for software productivity prediction. Overall, further evidence is needed to make stronger claims and recommendations. In particular, the discussion of validity threats should become standard, and models need to be compared with each other.
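To make the ratio-based risk mentioned in the conclusion concrete, the hypothetical sketch below (illustrative numbers only, not data from any of the reviewed studies) shows how a single output/input ratio can rank a project as more productive while hiding a quality problem, which is the kind of distortion multivariate approaches such as data envelopment analysis aim to avoid.

    # Hypothetical ratio-based productivity measure (illustrative numbers only).
    projects = {
        # name: (delivered size in function points, effort in person-months, post-release defects)
        "P1": (400, 20, 15),
        "P2": (400, 16, 60),  # less effort, but far worse quality
    }

    for name, (size, effort, defects) in projects.items():
        ratio = size / effort  # simple single-ratio productivity
        print(f"{name}: {ratio:.1f} FP/person-month, {defects} post-release defects")

    # P2 looks "more productive" on the single ratio alone; a multivariate view that also
    # accounts for quality (as in data envelopment analysis) would not reward this trade-off.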

10.
Replications are commonly considered to be important contributions to investigate the generality of empirical studies. By replicating an original study it may be shown that the results are either valid or invalid in another context, outside the specific environment in which the original study was launched. The results of the replicated study show how much confidence we could possibly have in the original study. We present a replication of a method for selecting software reliability growth models to decide whether to stop testing and release software. We applied the selection method in an empirical study, conducted in a different development environment than the original study. The results of the replication study show that with the changed values of stability and curve fit, the selection method works well on the empirical system test data available, i.e., the method was applicable in an environment that was different from the original one. The application of the SRGMs to failures during functional testing resulted in predictions with low relative error, thus providing a useful approach in giving good estimates of the total number of failures to expect during functional testing.
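As an illustration of fitting a software reliability growth model to test data, the sketch below fits the Goel-Okumoto model, one common SRGM (the replication does not necessarily use this particular model), to made-up cumulative failure counts.

    # Hypothetical SRGM-fitting sketch; the Goel-Okumoto model and the failure data are
    # illustrative choices, not taken from the replication study.
    import numpy as np
    from scipy.optimize import curve_fit

    def goel_okumoto(t, a, b):
        # Expected cumulative number of failures observed by time t.
        return a * (1.0 - np.exp(-b * t))

    weeks = np.arange(1, 11)
    cumulative_failures = np.array([5, 12, 18, 24, 28, 31, 34, 36, 37, 38])

    (a, b), _ = curve_fit(goel_okumoto, weeks, cumulative_failures, p0=[40.0, 0.2])
    print(f"estimated total failures a = {a:.1f}, detection rate b = {b:.3f}")

    # A stop-testing decision would then compare the expected remaining failures
    # (a minus the failures found so far) against a release criterion, after checking
    # curve fit and stability -- the selection criteria examined in the study above.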

11.
This paper first briefly introduces the principles of the Personal Software Process (PSP) and explains how to help students understand the progression from an individual software development process to a software product engineering process, guiding them from the practice of developing small, simple programs toward developing large-scale software. The teaching strategy is then explained in detail in the context of an actual teaching environment, and the student data collected are summarized and analyzed.

12.
Several requirements are placed on queueing models of computer systems. These include credibility, accuracy, timeliness and cost. Modelling software can have a critical impact on all of these requirements. We survey the characteristics of major pieces of queueing software. Based on this survey we synthesize a set of design objectives for queueing software. Finally, we discuss our own queueing network software, the Research Queueing Package (RESQ), in light of these objectives.

13.
The design and analysis of the structure of software systems has typically been based on purely qualitative grounds. In this paper we report on our positive experience with a set of quantitative measures of software structure. These metrics, based on the number of possible paths of information flow through a given component, were used to evaluate the design and implementation of a software system (the UNIX operating system kernel) which exhibits the interconnectivity of components typical of large-scale software systems. Several examples are presented which show the power of this technique in locating a variety of both design and implementation defects. Suggested repairs, which agree with the commonly accepted principles of structured design and programming, are presented. The effect of these alterations on the structure of the system and the quantitative measurements of that structure lead to a convincing validation of the utility of information flow metrics.
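One widely cited information-flow measure consistent with the description above is the Henry-Kafura metric, length x (fan-in x fan-out)^2; whether this is exactly the paper's formulation is an assumption, and the component data in the sketch below are invented.

    # Hypothetical information-flow complexity sketch (Henry-Kafura style metric).
    components = {
        # name: (length in source lines, fan-in, fan-out) -- made-up values
        "sched":  (250, 12, 9),
        "namei":  (400,  7, 5),
        "printf": (120, 30, 1),
    }

    def information_flow_complexity(length, fan_in, fan_out):
        # Components sitting on many possible information-flow paths score high.
        return length * (fan_in * fan_out) ** 2

    for name, (length, fan_in, fan_out) in components.items():
        print(name, information_flow_complexity(length, fan_in, fan_out))

    # Unusually high scores flag components that concentrate too much of the system's
    # information flow and are therefore candidates for restructuring.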

14.
Context

To determine the effectiveness of software testers, a suitable performance appraisal approach is necessary, both for research and practice purposes. However, a review of the relevant literature reveals little information on how software testers are appraised in practice.

Objective

(i) To enhance our knowledge of industry practice of performance appraisal of software testers and (ii) to collect feedback from project managers on a proposed performance appraisal form for software testers.

Method

A web-based survey with a questionnaire was used to collect responses. Participants were recruited using cluster and snowball sampling. 18 software development project managers participated.

Results

We found two broad trends in performance appraisal of software testers: the same appraisal process for all employees, or a specialized performance appraisal method for software testers. Detailed opinions were collected and analyzed on how the performance of software testers should be appraised. Our proposed appraisal approach was generally well received.

Conclusion

Factors such as the number of bugs found after delivery and the efficiency of executing test cases were considered important in appraising software testers' performance. Our proposed approach was refined based on the feedback received.

15.
When developing multiple products within a common application domain, systematic use of a software product family process can yield increased productivity in cost, quality, effort and schedule. Such a process provides the means for the reuse of software assets, which can considerably reduce the development time and the cost of software products. A comprehensive strategy for evaluating the maturity of a software product family process is needed due to the growing popularity of this concept in the software industry. In this paper, we propose a five-level maturity scale for the software product family process. We also present a fuzzy inference system for evaluating the maturity of a software product family process using the proposed maturity scale. This research is aimed at establishing a comprehensive and unified strategy for process evaluation of a software product family. Such a process evaluation strategy will enable an organization to discover and monitor the strengths and weaknesses of the various activities performed during the development of multiple products within a common application domain.

16.
Context

Software defect prediction during software development has recently attracted the attention of many researchers. A software defect density indicator predicted in each phase of the software development life cycle (SDLC) is desirable for developing a reliable software product. Software defect prediction only at the end of the testing phase may be less beneficial, because changes required in earlier phases of the SDLC may then demand a huge amount of money and effort in order to achieve the target software quality. Therefore, a phase-wise software defect density indicator prediction model is of great importance.

Objective

In this paper, a fuzzy logic based phase-wise software defect prediction model is proposed, using the most reliability-relevant metrics of each phase of the SDLC.

Method

In the proposed model, the defect density indicator in the requirement analysis, design, coding and testing phases is predicted using nine software metrics from these four phases. The defect density indicator predicted at the end of each phase is also taken as an input to the next phase. Software metrics are assessed in linguistic terms, and a fuzzy inference system has been employed to develop the model.

Results

The predictive accuracy of the proposed model is validated using data from twenty real software projects. Validation results are satisfactory. Measures based on the mean magnitude of relative error and the balanced mean magnitude of relative error decrease significantly as the software project size increases.

Conclusion

A fuzzy logic based model is proposed for predicting the software defect density indicator at each phase of the SDLC. The predicted defects for twenty different software projects are found to be very close to the actual defects detected during testing. The predicted defect density indicators are very helpful for analyzing defect severity in the different artifacts of the SDLC of a software project.
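As a rough illustration of the fuzzy inference idea described above, the minimal sketch below combines two assumed linguistic inputs into a defect density indicator; the real model uses nine metrics across the four SDLC phases, and the membership functions and rules here are invented.

    # Hypothetical Mamdani-style fuzzy sketch for a defect density indicator (DDI).
    def tri(x, a, b, c):
        # Triangular membership function with feet at a and c and peak at b.
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def fuzzify(value):
        # Degrees of membership in low / medium / high on a 0..10 scale.
        return {
            "low": tri(value, -1, 0, 5),
            "medium": tri(value, 0, 5, 10),
            "high": tri(value, 5, 10, 11),
        }

    def predict_ddi(requirement_stability, design_complexity):
        rs, dc = fuzzify(requirement_stability), fuzzify(design_complexity)
        # Two illustrative rules (min models AND); each maps to a crisp DDI prototype value.
        rules = [
            (min(rs["high"], dc["low"]), 2.0),   # stable requirements, simple design -> low DDI
            (min(rs["low"], dc["high"]), 8.0),   # volatile requirements, complex design -> high DDI
        ]
        total = sum(strength for strength, _ in rules)
        # Weighted average of rule outputs (a simple defuzzification).
        return sum(strength * out for strength, out in rules) / total if total else 5.0

    print(predict_ddi(requirement_stability=7.0, design_complexity=3.0))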

17.
In the usability context of interactive systems, the heuristic evaluation method is widespread. In most applications the results tend to be qualitative, describing aspects that require some improvement for the benefit of usability. However, these qualitative results do not allow us to determine how usable an interactive system is or how usable it becomes. Hence quantitative results may also be necessary in order to determine the effort that would be needed to obtain a sufficiently usable system. This article describes, following the idea of the UsabAIPO Project, a new experiment to obtain quantitative results after a heuristic evaluation. This new experimentation required some variation of the original idea, working with a set of different heuristic categories, while considering the use of a score depending on severity and frequency parameters.
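A small sketch of how such quantitative scores can be derived is shown below; rating each problem by severity and frequency and aggregating per heuristic category is a common convention, not necessarily the exact UsabAIPO scheme, and the data are invented.

    # Hypothetical scoring sketch for a quantitative heuristic evaluation.
    problems = [
        # (heuristic category, severity 0-4, frequency 0-4) -- made-up findings
        ("Visibility of system status", 3, 2),
        ("Error prevention", 4, 1),
        ("Consistency and standards", 2, 3),
    ]

    scores = {}
    for category, severity, frequency in problems:
        criticality = severity + frequency  # one common aggregation of the two ratings
        scores[category] = scores.get(category, 0) + criticality

    # Higher aggregate criticality per heuristic category indicates where redesign
    # effort would most improve usability.
    for category, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{category}: {score}")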

18.

Context

A software reference architecture is a generic architecture for a class of systems that is used as a foundation for the design of concrete architectures from this class. The generic nature of reference architectures leads to less well-defined architecture design and application contexts, which makes architecture goal definition and architecture design non-trivial steps rooted in uncertainty.

Objective

The paper presents a structured and comprehensive study on the congruence between context, goals, and design of software reference architectures. It proposes a tool for the design of congruent reference architectures and for the analysis of the level of congruence of existing reference architectures.

Method

We define a framework for congruent reference architectures. The framework is based on state-of-the-art results from literature and practice. We validate our framework and its quality as an analytical tool by applying it to the analysis of 24 reference architectures. The conclusions from our analysis are compared to the opinions of experts on these reference architectures, as documented in literature and dedicated communication.

Results

Our framework consists of a multi-dimensional classification space and of five types of reference architectures that are formed by combining specific values from the multi-dimensional classification space. Reference architectures that can be classified into one of these types have a better chance of success. The validation of our framework confirms its quality as a tool for analyzing the congruence of software reference architectures.

Conclusion

This paper supports software architects and scientists in the inception, design, and application of congruent software reference architectures. Applying the tool improves a reference architecture's chances of success.

19.
20.
T. R. Hopkins 《Software》1980,10(3):175-181
The increase in the number of available dialects of BASIC has led to the usual difficulties encountered when transporting software. The proposed American National Standard Minimal BASIC represents a small but almost universal subset of all BASICs. PBASIC is a verifier for ANS Minimal BASIC. The verifier is itself written in PFORT, a portable subset of FORTRAN IV.
