Similar Documents
20 similar documents found.
1.
L.C. Briand, S. Morasca and V.R. Basili (ibid., vol. 22, no. 1, pp. 68-85, Jan. 1996) introduced a measurement-theoretic approach to software measurement and criticized, among others, the work of the author, but they misinterpreted it. The author does not require additive software (complexity) measures, as Briand, Morasca and Basili state; he uses the concept of the extensive structure in order to expose the empirical properties behind software measures. Briand, Morasca and Basili use the concept of meaningfulness to describe scales and to argue that certain scale levels are not excluded by the Weyuker properties. However, they do not take into account that scales and scale types are different things.
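For orientation, the standard representation theorem for a closed extensive structure (A, ≽, ∘), stated here from textbook measurement theory rather than quoted from either paper, says that there is a measure μ satisfying

\[
a \succeq b \;\Longleftrightarrow\; \mu(a) \ge \mu(b),
\qquad
\mu(a \circ b) \;=\; \mu(a) + \mu(b),
\]

and that any two such measures differ only by a positive multiplicative constant, which is why establishing an extensive structure for a software attribute places the corresponding measures on a ratio scale.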

2.
Using fault injection and failure-tolerance measurement with ultrarare inputs, the authors create an automated software environment that can supplement traditional testing methods. Applied to four case studies, their methods promise to make software more robust.
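The authors' environment is not reproduced here; purely as a hedged illustration of the idea, the sketch below exercises a function under test with rare boundary inputs, optionally injects a fault into its result, and measures how often a correctness oracle still passes. The function, oracle, and input pool are all invented for the example.

import random

def median_of_three(a, b, c):
    """Function under test (deliberately simple)."""
    return sorted([a, b, c])[1]

def oracle(a, b, c, out):
    """Correctness check: the output must be an input that is neither strict min nor strict max."""
    vals = [a, b, c]
    return (out in vals
            and sum(v <= out for v in vals) >= 2
            and sum(v >= out for v in vals) >= 2)

# "Ultrarare" boundary inputs that ordinary operational profiles rarely exercise
RARE = [0, -1, 1, 2**31 - 1, -2**31, 10**18, -10**18]

def trial(inject_fault=False):
    a, b, c = (random.choice(RARE) for _ in range(3))
    out = median_of_three(a, b, c)
    if inject_fault:
        out += random.choice([-1, 1])   # simulate a corrupted internal result
    return oracle(a, b, c, out)

N = 10_000
baseline = sum(trial() for _ in range(N)) / N
injected = sum(trial(inject_fault=True) for _ in range(N)) / N
print(f"pass rate without faults: {baseline:.3f}, with injected faults: {injected:.3f}")

The gap between the two pass rates gives a crude failure-tolerance measurement of the same flavour as the one described in the abstract.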

3.
A view of software measurement that disagrees with the model presented by Kitchenham, Pfleeger, and Fenton (1995) is given. Whereas Kitchenham et al. argue that the properties used to define measures should not constrain the scale type of those measures, the authors contend that this is an inappropriate restriction. In addition, a misinterpretation of Weyuker's (1988) properties is noted.

4.
Building a software architecture that meets functional requirements is by now a well-consolidated activity, whereas achieving high quality attributes is still an open challenge. In this paper we introduce an optimization framework that supports the decision of whether to buy software components or to build them in-house when designing a software architecture. We devise a non-linear cost/quality optimization model based on decision variables indicating the set of architectural components to buy and to build in order to minimize the software cost while keeping satisfactory values of quality attributes. From this point of view, our tool can be ideally embedded into a Cost Benefit Analysis Method to provide decision support to software architects. The novelty of our approach consists in building costs and quality attributes on a common set of decision variables related to software development. We start from a special case of the framework in which the quality constraints concern delivery time and product reliability, and the model solution also determines the amount of unit testing to be performed on built components. We then generalize the formulation to represent a broader class of architectural cost-minimization problems under quality constraints, and discuss the advantages and limitations of the approach.
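As a hedged toy illustration of this kind of build-or-buy optimization (not the authors' actual model, and with invented component figures), the sketch below enumerates all build/buy assignments for three components and keeps the cheapest one that satisfies assumed delivery-time and reliability constraints; real instances would call for a proper non-linear solver rather than enumeration.

from itertools import product

# Invented component data:
# (build_cost, buy_cost, build_days, buy_days, build_reliability, buy_reliability)
components = {
    "parser":   (8.0, 5.0, 30, 10, 0.97, 0.92),
    "storage":  (12.0, 9.0, 45, 15, 0.95, 0.90),
    "frontend": (6.0, 4.0, 25, 8, 0.98, 0.94),
}

MAX_DELIVERY_DAYS = 70          # assumed delivery-time constraint (sequential development)
MIN_SYSTEM_RELIABILITY = 0.80   # assumed reliability constraint (series system)

best = None
for choice in product(("build", "buy"), repeat=len(components)):
    cost, days, reliability = 0.0, 0, 1.0
    for decision, (cb, cB, db, dB, rb, rB) in zip(choice, components.values()):
        if decision == "build":
            cost, days, reliability = cost + cb, days + db, reliability * rb
        else:
            cost, days, reliability = cost + cB, days + dB, reliability * rB
    if days <= MAX_DELIVERY_DAYS and reliability >= MIN_SYSTEM_RELIABILITY:
        if best is None or cost < best[0]:
            best = (cost, dict(zip(components, choice)))

print(best)   # cheapest feasible build/buy assignment, or None if no assignment is feasible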

5.
The measure property set of Briand, Morasca, and Basili (1996) establishes the foundation of a real software measurement theory. Unfortunately, a number of inconsistencies related to the additivity properties might hinder its acceptance and further elaboration. The authors show how to remove the ambiguity in the property definitions.

6.
When working with large-scale models or numerous small models, there can be a temptation to rely on default settings in proprietary software to derive solutions to the model. In this paper we show that, for the solution of non-linear dynamic models, this approach can be inappropriate. Alternative linear and non-linear specifications of a particular model are examined. One version of the model, expressed in levels, is highly non-linear. A second version of the model, expressed in logarithms, is linear. The dynamic solution of each model version has a combination of stable and unstable eigenvalues, so any dynamic solution requires the calculation of appropriate “jumps” in endogenous variables. We can derive a closed-form solution of the model, which we use as our “true” benchmark for comparison with computational solutions of both the linear and non-linear models. Our approach is to compare the “goodness of fit” of reverse-shooting solutions for both the linear and the non-linear model by comparing the computational solutions with the benchmark solution. Under the basic solution method with default settings, we show that there is a significant difference between the computational solution for the non-linear model and the benchmark closed-form solution. We show that this result can be substantially improved by modifying the solver and the parameter settings.
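The abstract does not reproduce the model itself; as a hedged stand-in, the sketch below applies reverse shooting to the textbook Ramsey growth model (log utility, arbitrary parameter values), which has the same saddle-path structure of one stable and one unstable eigenvalue: perturb the steady state along the stable eigenvector and integrate backwards in time so that the saddle path becomes attracting.

import numpy as np
from scipy.integrate import solve_ivp

alpha, delta, rho = 0.3, 0.05, 0.03   # arbitrary illustrative parameters

def rhs(t, x):
    k, c = x
    dk = k**alpha - c - delta * k                      # capital accumulation
    dc = c * (alpha * k**(alpha - 1) - delta - rho)    # Euler equation, log utility
    return [dk, dc]

# Steady state: alpha * k^(alpha - 1) = delta + rho
k_star = (alpha / (delta + rho)) ** (1.0 / (1.0 - alpha))
c_star = k_star**alpha - delta * k_star
x_star = np.array([k_star, c_star])

# Numerical Jacobian at the steady state and its stable eigenvector
eps = 1e-6
J = np.zeros((2, 2))
for j in range(2):
    dx = np.zeros(2); dx[j] = eps
    J[:, j] = (np.array(rhs(0, x_star + dx)) - np.array(rhs(0, x_star - dx))) / (2 * eps)
vals, vecs = np.linalg.eig(J)
stable = vecs[:, np.argmin(vals.real)].real
if stable[0] > 0:                      # point the step toward k below k_star
    stable = -stable

# Reverse shooting: step off the steady state along the stable direction and
# integrate backwards in time; stop once capital has fallen to 5% of k_star.
def low_k(t, x):
    return x[0] - 0.05 * k_star
low_k.terminal = True

sol = solve_ivp(rhs, [0.0, -500.0], x_star + 1e-4 * stable,
                events=low_k, max_step=0.5)
print(sol.y[:, -1])   # a point far down the saddle path, with k well below k_star

The recovered path can then be compared against a closed-form or high-accuracy benchmark, which is the kind of comparison the paper performs for its own model.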

7.
Electrical conductivity measurements in the laboratory are critical for interpreting geoelectric and magnetotelluric profiles of the Earth's crust and mantle. In order to facilitate access to the current database on electrical conductivity of geomaterials, we have developed a freely available web application (SIGMELTS) dedicated to the calculation of electrical properties. Based on a compilation of previous studies, SIGMELTS computes the electrical conductivity of silicate melts, carbonatites, minerals, fluids, and mantle materials as a function of different parameters, such as composition, temperature, pressure, water content, and oxygen fugacity. Calculations on two-phase mixtures are also implemented using existing mixing models for different geometries. An illustration of the use of SIGMELTS is provided, in which calculations are applied to the subduction zone-related volcanic zone in the Central Andes. Along with petrological considerations, field and laboratory electrical data allow discrimination between the different hypotheses regarding the formation and rise from depth of melts and fluids and quantification of their storage conditions.
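The specific mixing laws implemented in SIGMELTS are not listed in the abstract; as one commonly used example of a geometry-dependent two-phase model of this kind, the Hashin-Shtrikman upper bound for the effective conductivity of a mixture in which the more conductive phase (conductivity \sigma_2, volume fraction f_2) encloses the less conductive one (\sigma_1, f_1 = 1 - f_2) can be written

\[
\sigma_{\mathrm{HS}+} \;=\; \sigma_2 + \frac{f_1}{\dfrac{1}{\sigma_1-\sigma_2} + \dfrac{f_2}{3\sigma_2}} .
\]

Whether this bound or another geometry (tubes, films, isolated inclusions) is appropriate depends on how the melt or fluid phase is interconnected.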

8.
We hypothesize that software defect repair times can be characterized by the Laplace Transform of the Lognormal (LTLN) distribution. This hypothesis is rooted in the observation that software defect repair times are influenced by the multiplicative interplay of several factors, and the lognormal distribution is a natural choice for modeling the rates of occurrence of such phenomena. Converting the lognormal rate distribution to an occurrence-time distribution yields the LTLN. We analyzed more than 10,000 software defect repair times collected over nine products at Cisco Systems to confirm our LTLN hypothesis. Our results also demonstrate that the LTLN distribution provides a statistically better fit to the observed repair times than either of the two most widely used repair-time distributions, namely the lognormal and the exponential. Moreover, we show that the repair times of subsets of defects, partitioned according to the Orthogonal Defect Classification (ODC) scheme, also follow the LTLN distribution. Finally, we describe how the insights that led to the LTLN repair-time model allow us to consider and evaluate alternative process-improvement strategies.
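A minimal sketch of the construction described above, assuming the standard lognormal parameterisation (the paper's exact notation may differ): if the repair rate Λ of a randomly chosen defect is lognormal with parameters μ and σ, and the repair time T given Λ = λ is exponential with rate λ, then the unconditional survival function of T is the Laplace transform of the lognormal density,

\[
\Pr(T > t) \;=\; \mathbb{E}\!\left[e^{-\Lambda t}\right]
\;=\; \int_0^{\infty} e^{-\lambda t}\,
\frac{1}{\lambda\sigma\sqrt{2\pi}}
\exp\!\left(-\frac{(\ln\lambda-\mu)^2}{2\sigma^2}\right) d\lambda ,
\]

which is what gives the LTLN family its name.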

9.
This paper presents the InterMod methodology. By combining the widely accepted Agile Methods, Model-Driven Development and User-Centred Design, it allows us to develop high-quality interactive applications. As a main characteristic, it plans and organises the software project as a series of iterations that are guided by User Objectives in an agile and user-centred manner. At each iteration, the software development work can be distributed to different teams according to developmental and integration activities. Each activity is driven by models that are validated by a multidisciplinary team composed of developers and users. The requirements are incrementally collected and formalised by means of models based on user-centred design. In addition, the Semantically Enriched Human–Computer Interaction model is proposed to speed up project validation. This model enriches a human–computer interaction model with visual characteristics and the application semantics, and thus provides enough information to generate prototypes so that users and developers can easily validate it. The Diagram project, a real case study, is used throughout the paper to illustrate the application of the InterMod methodology.

10.
A popular technique in paleoclimatology is the definition of occurrences of climate-sensitive lithofacies, such as evaporite deposits, using a global grid system. The simplest and most widely used grid systems in paleoclimatology are orthogonal grids that use lines of latitude and longitude as grid-cell boundaries. Occurrences defined using orthogonal grids, however, can differ greatly in size and shape because lines of longitude converge at the poles, distorting the shape of the grid system. As a result of this distortion, the latitude at which the occurrences were defined can affect the number and distribution of occurrences. As an alternative, spherical geodesic systems can be used. Spherical geodesic systems have near-equal-area and near-equal-shape grid cells over the entire sphere, which significantly reduces biases introduced by the grid system and makes paleoclimatic studies using occurrences of climate-sensitive lithofacies more reliable. To make spherical geodesic systems practical for paleoclimate applications, a “tool kit” of programs written in C has been assembled. Four programs are included in the tool kit: DESIGNER, which designs spherical geodesic grids; PLOTTER, which generates import files for Terra Mobilis™ and PGIS/Mac™ to display the grids; MAPPER, which defines occurrences using the grids; and ROTATOR, which rotates data about Euler poles. Middle Devonian evaporite data for North America were compiled to demonstrate each of the functions.
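The toolkit's source is not reproduced here; as a hedged sketch of the operation ROTATOR performs, the snippet below rotates a latitude/longitude point about an Euler pole using Rodrigues' rotation formula. The pole, angle, and point are invented for the example.

import numpy as np

def to_xyz(lat_deg, lon_deg):
    """Unit vector for a latitude/longitude pair (degrees)."""
    lat, lon = np.radians([lat_deg, lon_deg])
    return np.array([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)])

def to_latlon(v):
    return np.degrees(np.arcsin(v[2])), np.degrees(np.arctan2(v[1], v[0]))

def rotate_about_pole(lat, lon, pole_lat, pole_lon, angle_deg):
    """Rotate (lat, lon) about an Euler pole by angle_deg using Rodrigues' formula."""
    p = to_xyz(lat, lon)
    k = to_xyz(pole_lat, pole_lon)          # rotation axis (unit vector)
    a = np.radians(angle_deg)
    rotated = (p * np.cos(a)
               + np.cross(k, p) * np.sin(a)
               + k * np.dot(k, p) * (1.0 - np.cos(a)))
    return to_latlon(rotated)

# Example: rotate a data point 25 degrees about a hypothetical Euler pole
print(rotate_about_pole(45.0, -90.0, pole_lat=60.0, pole_lon=30.0, angle_deg=25.0))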

11.
Gupta and Magnusson [The capacitated lot-sizing and scheduling problem with sequence-dependent setup costs and setup times. Computers and Operations Research 2005;32(4):727–47] develop a model for the single-machine capacitated lot-sizing and scheduling problem (CLSP) with sequence-dependent setup times and setup costs, incorporating all the usual features of setup carryovers. In this note we show that this model does not avoid disconnected subtours. A new set of constraints is added to the model to provide an exact formulation for this problem.
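The abstract does not state the added constraints themselves; as a generic illustration only, subtour elimination in sequencing formulations is typically enforced with constraints of the form

\[
\sum_{i \in S}\sum_{\substack{j \in S \\ j \neq i}} x_{ij} \;\le\; |S| - 1
\qquad \forall\, S \subset N,\; 2 \le |S| \le |N| - 1,
\]

where x_{ij} = 1 if product j is set up immediately after product i; the constraints actually proposed in the note are not reproduced here.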

12.
The age of technological society demands that the ethical concerns of the past are not forgotten. The technological powering of a personal act shortens the gap between organization and person, and personal ethical concerns then face a dilemma. Indian thought suggests that if a mental state of equanimity without contention prevails as a process, evils and demerits disappear and ethical dissonance is reduced, because there is no common evil. Further, it is no longer necessary to translate the potential consequences of choices into terms of risk. Liberty, peace and love in this technological time come through a state in which the approach is hands-off.

13.
Computer Anxiety: “Trait” or “State”?
A recurring question in the study of computer anxiety is whether computer anxiety is a relatively stable personality trait or a mutable, temporary state. The two studies reported here examined this question in two groups of first-year psychology students. The students were asked to complete a computer anxiety test, a trait anxiety test, and a state anxiety test. Some groups were administered the tests in a pen-and-paper format, while others were tested using computerized tests. In the first study, a Dutch version of the Profile of Mood States (POMS) was used; in the second study, a Dutch adaptation of the State-Trait Anxiety Inventory (STAI). The data were analyzed using structural equation modeling. In both studies, computer anxiety turned out to be related more strongly to trait anxiety than to state anxiety. In fact, there was no relationship between computer anxiety and state anxiety in the pen-and-paper format. In the computerized versions, however, computer anxiety and state anxiety were related, suggesting that state anxiety in situations involving a computer is caused by pre-existing computer anxiety.

14.
A Comparative Analysis of Software Reuse Economics Models
This paper describes the main purpose and tasks of software reuse economics models and presents a comparative analysis of twelve models proposed internationally to date. The reuse economics models are divided into two categories, cost-benefit models and investment analysis models, and representative models of each type are introduced with examples. All of the analyzed models are compared in tabular form, their applicability as well as their similarities and differences are discussed, and open problems in the field are examined.
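As a hedged illustration of the cost-benefit category (a generic textbook-style formulation, not one of the twelve surveyed models): if writing a component for reuse costs RCWR times a one-off development of cost C, and each subsequent reuse costs only RCR times C, then the net benefit after n reuses and the break-even number of reuses are

\[
B(n) \;=\; n\,(1-\mathrm{RCR})\,C \;-\; (\mathrm{RCWR}-1)\,C ,
\qquad
n^{*} \;=\; \frac{\mathrm{RCWR}-1}{1-\mathrm{RCR}} .
\]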

15.
View an n-vertex, m-edge undirected graph as an electrical network with unit resistors as edges. We extend known relations between random walks and electrical networks by showing that resistance in this network is intimately connected with the lengths of random walks on the graph. For example, the commute time between two vertices s and t (the expected length of a random walk from s to t and back) is precisely characterized by the effective resistance R_st between s and t: commute time = 2mR_st. As a corollary, the cover time (the expected length of a random walk visiting all vertices) is characterized by the maximum resistance R in the graph to within a factor of log n: mR ≤ cover time ≤ O(mR log n). For many graphs, the bounds on cover time obtained in this manner are better than those obtained from previous techniques such as the eigenvalues of the adjacency matrix. In particular, we improve known bounds on cover times for high-degree graphs and expanders, and give new proofs of known results for multi-dimensional meshes. Moreover, resistance seems to provide an intuitively appealing and tractable approach to these problems.
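A small numerical check of the commute-time identity (not code from the paper): effective resistances of a unit-resistance graph can be read off the Moore-Penrose pseudoinverse of its Laplacian, and the commute time between s and t is then 2m R_st. The example graph is arbitrary.

import numpy as np

# Undirected graph with unit-resistance edges (a 4-cycle plus a chord)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n, m = 4, len(edges)

# Graph Laplacian L = D - A
L = np.zeros((n, n))
for u, v in edges:
    L[u, u] += 1; L[v, v] += 1
    L[u, v] -= 1; L[v, u] -= 1

Lplus = np.linalg.pinv(L)   # Moore-Penrose pseudoinverse

def effective_resistance(s, t):
    e = np.zeros(n); e[s], e[t] = 1.0, -1.0
    return float(e @ Lplus @ e)

s, t = 1, 3
R_st = effective_resistance(s, t)
print("R_st =", R_st, " commute time = 2*m*R_st =", 2 * m * R_st)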

16.
Comparison and Analysis of Product Line Cost Models
To provide software product line decision makers with a theoretical reference for applying software product line models, this paper analyzes and compares 20 software product line cost models from recent years. Based on a detailed analysis of factors such as the investment cycle, the reuse approach, the time value of money, economic functions, cost factors, and reuse costs, a comparison framework for software product line models is proposed. Within this framework, five representative models are analyzed in depth, cost estimation and investment analysis for product-line-based development are examined in detail, and the open problems and future directions of current software product line models are discussed.

17.
The paper proposes a modified scheme of the method of resolving functions for conflict-controlled processes with a cylindrical terminal set. This scheme ensures that the game is terminated in a definite guaranteed time in the class of stroboscopic strategies without any additional conditions. The guaranteed times of various schemes of the method of resolving functions are compared with that of the first Pontryagin method in terms of convex-valued mappings for a certain structure of the terminal set. Translated from Kibernetika i Sistemnyi Analiz, No. 4, pp. 89–100, July–August 2008.

18.
A reader-writer queue manages two classes of customers: readers and writers. An unlimited number of readers can be processed in parallel; writers are processed serially. Both classes arrive according to a Poisson process. Reader and writer service times are general iid random variables. There is infinite room in the queue for waiting customers.

In this paper, a reader-writer queue is considered under the following priority disciplines: strong reader preference (SRP), reader preference (RP), alternating exhaustive priority (AEP), writer preference (WP), and strong writer preference (SWP). Preemptive priority is given to readers under the SRP discipline, or to writers under the SWP discipline. Non-preemptive priority is accorded to readers with the RP discipline, or to writers with the WP discipline. For the AEP discipline, customers of a given class are served exhaustively in an alternating fashion.

For the five priority disciplines, a stability condition and first moments for the steady-state reader and writer queueing times are given. Using these analytical results, each of the five priority disciplines is seen to be optimal (among the five) in some region of the parameter space. Simulation results are also presented.


19.
Scholars have begun naming and defining terms that describe the multifaceted kinds of composing practices occurring in their classrooms and scholarship. This paper analyzes the terms “multimedia” and “multimodal,” examining how each term has been defined and presenting examples of documents, surveys, web sites, and other artifacts to show when and how each term is used in both academic and non-academic/industry contexts. This paper shows that rather than being driven by any difference in their definitions, the use of these terms is contingent upon the context and the audience to whom a particular discussion is directed. While “multimedia” is used more frequently in public/industry contexts, “multimodal” is preferred in the field of composition and rhetoric. This preference can best be explained by understanding the differences in how texts are valued and evaluated in these contexts. “Multimodal” is a term valued by instructors because of its emphasis on design and process, whereas “multimedia” is valued in the public sphere because of its emphasis on the production of a deliverable text. Ultimately, instructors need to continue using both terms in their teaching and scholarship: although “multimodal” is more theoretically accurate in describing the cognitive and socially situated choices students are making in their compositions, “multimedia” works as a gateway term for instructors and scholars to interface with those outside of academia in familiar and important ways.

20.
In this paper, problems related to the use of the microcomputer as a teaching tool in engineering education are discussed. The objectives and methodological principles of educational software are defined, stressing the need to improve the quality of engineering education. The microcomputer is considered as an engineering tool in the decision-making process, offering possibilities for deeper analysis of the physical phenomena in electrical devices. Examples of applications of the methodological principles developed for the elaboration of educational software packages for electrical engineering are given.
