20 similar documents found; search time: 10 ms
1.
Quality requirements engineering for systems and software architecting: methods, approaches, and tools  Total citations: 1 (self-citations: 0, citations by others: 1)
Requirements engineering and software architecture are mature software engineering sub-disciplines, yet they often remain disconnected for many reasons: it is difficult to assess the impact of functional and non-functional requirements on an architecture and to establish appropriate trace links for traceability purposes. In other cases, analyzing non-functional requirements, that is, the quality properties a system should possess, is not perceived as useful enough for producing high-quality software. In this special issue, we therefore highlight the importance and role of quality requirements in architecting and building complex software systems, which in many cases require multidisciplinary engineering techniques that increase the complexity of the software development process.
2.
Aaron K. Massey Paul N. Otto Lauren J. Hayward Annie I. Antón 《Requirements Engineering》2010,15(1):119-137
Governments enact laws and regulations to safeguard the security and privacy of their citizens. In response, requirements engineers must specify compliant system requirements to satisfy applicable legal security and privacy obligations. Specifying legally compliant requirements is challenging because legal texts are complex and ambiguous by nature. In this paper, we discuss our evaluation of the requirements for iTrust, an open-source Electronic Health Records system, for compliance with legal requirements governing security and privacy in the healthcare domain. We begin with an overview of the method we developed using existing requirements engineering techniques, and then summarize our experiences in applying our method to the iTrust system. We illustrate some of the challenges that practitioners face when specifying requirements for a system that must comply with law, and close with a discussion of needed future research on security and privacy requirements.
3.
Enabling Smart Building applications will help achieve the ongoing efficient commissioning of buildings, ultimately attaining peak performance in energy use and improved occupant health and comfort at minimum cost. For these technologies to be scalable, an ontology must be adopted to semantically represent the data generated by building mechanical systems, acting as a conduit to Smart Building applications. As the Building Automation System (BAS) industry considers the Brick and Project Haystack ontologies for such applications, this paper provides a quantitative comparison of their completeness and expressiveness using a case study. The comparison is contextualized within the broader set of ontological approaches developed for Smart Buildings and critically evaluated using key ontology qualities outlined in the literature. Brick achieved higher assessment values in completeness and expressiveness, 59% and 100% respectively, compared to Haystack's 43% and 96%. Additionally, Brick exhibited five of six desirable qualities, where Haystack exhibited only three. Overall, this critical analysis found Brick preferable to Haystack but still lacking in completeness; to overcome this, Brick should be integrated with other existing ontologies to serve as a holistic ontology for the longer-term development of Smart Building applications, supporting innovative approaches to sustainability in building operations across scales, as well as next-generation building controls and automation strategies.
4.
Requirements definition is a critical activity within information systems development. It involves many stakeholder groups: managers, various end-users and different systems development professionals. Each group is likely to have its own viewpoint representing a particular perspective or set of perceptions of the problem domain. To ensure as far as possible that the system to be implemented meets the needs and expectations of all involved stakeholders, it is necessary to understand their various viewpoints and manage any inconsistencies and conflicts. Viewpoint development during requirements definition is the process of identifying, understanding and representing different viewpoints. This paper proposes a conceptual framework for understanding and investigating viewpoint development approaches. Results of the use of the framework for a comparison of viewpoint development approaches are discussed and some important issues and directions for future research are identified.
5.
Developing a modular system that properly supports a range of security models is challenging. The work presented here details our experiences with the modular Linux security framework called Linux Security Modules, or LSM. Throughout our experiences we discovered that the developers of the LSM framework made certain tradeoffs for speed and simplicity during implementation, consequently leaving the framework incomplete. Our experiences show where the theory of the LSM framework differs from reality, and detail how these differences play out when developing and using a custom LSM. Copyright © 2009 John Wiley & Sons, Ltd.
6.
Xiangmao MENG Wenkai LI Xiaoqing PENG Yaohang LI Min LI 《Frontiers of Computer Science》2021,15(6):156902
In the post-genomic era, proteomics has achieved significant theoretical and practical advances with the development of high-throughput technologies. In particular, the rapid accumulation of protein-protein interactions (PPIs) provides a foundation for constructing protein interaction networks (PINs), which offer a new perspective for understanding cellular organizations, processes, and functions at the network level. In this paper, we present a comprehensive survey of three main characteristics of PINs: centrality, modularity, and dynamics. (1) Different centrality measures, which are used to calculate the importance of proteins, are summarized based either on the structural characteristics of PINs or on integrated biological information; (2) different modularity definitions and various clustering algorithms for predicting protein complexes or identifying functional modules are introduced; (3) the dynamics of proteins, PPIs, and sub-networks are discussed, respectively. Finally, the main applications of PINs to complex diseases are reviewed, and the challenges and future research directions are discussed.
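The structural centrality measures surveyed in this paper can be illustrated with a minimal sketch. The network and protein names below are hypothetical, not drawn from the survey; the function computes degree centrality, the simplest structural measure of a protein's importance.

```python
from collections import defaultdict

def degree_centrality(edges):
    """Return each protein's degree normalized by (n - 1) nodes."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    n = len(adj)
    return {p: len(neighbors) / (n - 1) for p, neighbors in adj.items()}

# Hypothetical PPI edges: P1 is a hub interacting with all other proteins.
ppi = [("P1", "P2"), ("P1", "P3"), ("P1", "P4"), ("P1", "P5"), ("P2", "P3")]
scores = degree_centrality(ppi)
print(max(scores, key=scores.get))  # → P1, the hub
```

Hub proteins found this way are often prioritized as candidate essential proteins; the survey's other measures (e.g. betweenness, or measures integrating biological information) refine this basic idea.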
7.
Reliably predicting software defects is one of the holy grails of software engineering. Researchers have devised and implemented a plethora of defect/bug prediction approaches varying in terms of accuracy, complexity and the input data they require. However, the absence of an established benchmark makes it hard, if not impossible, to compare approaches. We present a benchmark for defect prediction, in the form of a publicly available dataset consisting of several software systems, and provide an extensive comparison of well-known bug prediction approaches, together with novel approaches we devised. We evaluate the performance of the approaches using different performance indicators: classification of entities as defect-prone or not, ranking of the entities, with and without taking into account the effort to review an entity. We performed three sets of experiments aimed at (1) comparing the approaches across different systems, (2) testing whether the differences in performance are statistically significant, and (3) investigating the stability of approaches across different learners. Our results indicate that, while some approaches perform better than others in a statistically significant manner, external validity in defect prediction is still an open problem, as generalizing results to different contexts/learners proved to be a partially unsuccessful endeavor.
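The first performance indicator mentioned in this abstract, classifying entities as defect-prone or not, can be sketched as follows. The file names and model outputs are hypothetical, and the benchmark's actual indicators are richer; this only shows how two predictors are compared on the same ground truth.

```python
def precision_recall(predicted, actual):
    """Precision and recall of a predicted set of defect-prone entities."""
    tp = len(predicted & actual)          # correctly flagged entities
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(actual) if actual else 0.0
    return precision, recall

# Hypothetical ground truth and predictor outputs.
actual_defective = {"Parser.java", "Cache.java", "Net.java"}
model_a = {"Parser.java", "Cache.java", "Util.java"}
model_b = {"Parser.java", "Cache.java", "Net.java", "UI.java"}

for name, pred in [("A", model_a), ("B", model_b)]:
    p, r = precision_recall(pred, actual_defective)
    print(f"model {name}: precision={p:.2f} recall={r:.2f}")
```

Whether such per-system differences generalize across systems and learners is exactly the external-validity question the paper's experiments address.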
8.
Partial transition systems support abstract model checking of complex temporal properties by combining both over- and under-approximating abstractions into a single model. Over the years, three families of such modeling formalisms have emerged, represented by (1) Kripke Modal Transition Systems (KMTSs), with restrictions on necessary and possible behaviors; (2) Mixed Transition Systems (MixTSs), with relaxation on these restrictions; and (3) Generalized Kripke MTSs (GKMTSs), with hyper-transitions, respectively. In this paper, we investigate these formalisms based on two fundamental ways of using partial transition systems (PTSs) - as objects for abstracting concrete systems (and thus, a PTS is semantically consistent if it abstracts at least one concrete system) and as models for checking temporal properties (and thus, a PTS is logically consistent if it gives consistent interpretation to all temporal logic formulas). We study the connection between semantic and logical consistency of PTSs, compare the three families w.r.t. their expressive power (i.e., what can be modeled, what abstractions can be captured using them), and discuss the analysis power of these formalisms, i.e., the cost and precision of model checking. Specifically, we identify a class of PTSs for which semantic and logical consistency coincide and define a necessary and sufficient structural condition to guarantee consistency. We also show that all three families of PTSs have the same expressive power (but do differ in succinctness). However, GKMTSs are more precise (i.e., can establish more properties) for model checking than the other two families. The direct use of GKMTSs in practice has been hampered by the difficulty of encoding them symbolically. We address this problem by developing a new semantics for temporal logic of PTSs that makes the MixTS family as precise for model checking as the GKMTS family. The outcome is a symbolic model checking algorithm that combines the efficient encoding of MixTSs with the model checking precision of GKMTSs. Our preliminary experiments indicate that the new algorithm is a good match for predicate-abstraction-based model checkers.
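The three-valued flavor of model checking over partial transition systems can be illustrated with a toy sketch. The state names, the property (reachability, i.e. EF), and the data layout below are ours, not the paper's: a KMTS-style model distinguishes "must" (necessary) from "may" (possible) transitions, and a property is true if it holds over must-transitions alone, false if it fails even over may-transitions, and unknown otherwise.

```python
def reachable(trans, start, target):
    """Depth-first reachability over a dict of state -> successor list."""
    seen, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s == target:
            return True
        if s in seen:
            continue
        seen.add(s)
        stack.extend(trans.get(s, ()))
    return False

def ef_three_valued(must, may, start, target):
    """Three-valued check of 'target is reachable from start' (EF target)."""
    if reachable(must, start, target):
        return "true"      # every concrete refinement reaches the target
    if not reachable(may, start, target):
        return "false"     # no concrete refinement can reach it
    return "unknown"       # the answer depends on how the abstraction is refined

# Every must-transition is also a may-transition (modal consistency).
must = {"s0": ["s1"]}
may = {"s0": ["s1", "s2"], "s1": [], "s2": ["err"]}
print(ef_three_valued(must, may, "s0", "err"))  # → unknown
```

An "unknown" verdict is precisely what motivates the paper's precision comparison: a more precise formalism (or semantics) turns more such verdicts into definite "true"/"false" answers without refining the abstraction.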
9.
10.
Sandeep Reddivari Shirin Rad Tanmay Bhowmik Nisreen Cain Nan Niu 《Requirements Engineering》2014,19(3):257-279
For many software projects, keeping requirements on track needs an effective and efficient path from data to decision. Visual analytics creates such a path that enables the human to extract insights by interacting with the relevant information. While various requirements visualization techniques exist, few have produced end-to-end value to practitioners. In this paper, we advance the literature on visual requirements analytics by characterizing its key components and relationships in a framework. We follow the goal-question-metric paradigm to define the framework by teasing out five conceptual goals (user, data, model, visualization, and knowledge), their specific operationalizations, and their interconnections. The framework allows us to not only assess existing approaches, but also create tool enhancements in a principled manner. We evaluate our enhanced tool support through a case study where massive, heterogeneous, and dynamic requirements are processed, visualized, and analyzed. Working together with practitioners on a contemporary software project within its real-life context leads to the main finding that visual analytics can help tackle both open-ended visual exploration tasks and well-structured visual exploitation tasks in requirements engineering. In addition, the study helps the practitioners reach actionable decisions in areas ranging from theme and outlier identification, through requirements tracing, to risk assessment. Overall, our work illuminates how data-to-decision analytical capabilities can be improved by increasing the interactivity of requirements visualization.
11.
This paper presents an evaluation of the security quality requirements engineering (SQUARE) method. The evaluation of SQUARE was conducted by its application on the advanced metering infrastructure of smart grid as a case study. We evaluated the effectiveness of SQUARE with respect to its ability to elicit a set of artifacts, threats, and vulnerabilities; to perform likelihood, impact analysis, and risk level determination; and to elicit, categorize, and prioritize the security requirements. The main contribution of this work is the evaluation of the effectiveness of SQUARE using qualitative security requirements engineering method evaluation criteria.
12.
For reasons of tractability, the airline scheduling problem has traditionally been sequentially decomposed into various stages (e.g. schedule generation, fleet assignment, aircraft routing, and crew pairing), with the decisions from one stage imposed upon the decision-making process in subsequent stages. Whilst this approach greatly simplifies the solution process, it unfortunately fails to capture many dependencies between the various stages, most notably between those of aircraft routing and crew pairing, and how these dependencies affect the propagation of delays through the flight network. In Dunbar et al. (2012) [9] we introduced a new algorithm to accurately calculate and minimize the cost of propagated delay, in a framework that integrates aircraft routing and crew pairing. In this paper we extend the approach of Dunbar et al. (2012) [9] by proposing two new algorithms that achieve further reductions in delay propagation via the incorporation of stochastic delay information. We additionally propose a heuristic, used in conjunction with these two approaches, capable of re-timing an incumbent aircraft and crew schedule to further minimize the cost of delay propagation. These algorithms provide promising results when applied to a real-world airline network and motivate our final integrated aircraft routing, crew pairing and re-timing approach, which provides a substantial reduction in delay propagation.
13.
The purpose of this paper is to evaluate two methods of assessing the productivity and quality impact of Computer Aided Software Engineering (CASE) and Fourth Generation Language (4GL) technologies: (1) the retrospective method; and (2) the cross-sectional method. Both methods involve the use of questionnaire surveys. Developers' perceptions depend on the context in which they are expressed, and this includes expectations about the effectiveness of a given software product. Consequently, it is generally not reliable to base inferences about the relative merits of CASE and 4GLs on a cross-sectional comparison of two separate samples of users. The retrospective method, which requires each respondent to directly compare different products, is shown to be more reliable. However, there may be scope to employ cross-sectional comparisons of the findings from different samples where both sets of respondents use the same reference point for their judgements, and where numerical rather than verbal rating scales are used to measure perceptions.
14.
《Behaviour & Information Technology》2012,31(9):920-937
Numerous researchers have proposed website design norms suitable for the elderly. However, in the design of community platforms, elderly users have seldom been considered in social media usage; young and middle-aged people are the main targets of several social media platforms. To accommodate the digital lives of elderly users, the emphasis in this study was to determine the problems and to search for solutions. The real requirements of the elderly and appropriate solutions were integrated by analysing possible factors. This study takes an anthropological, user-centred approach to explore the verbal behaviour of senior citizens while they accessed Facebook. Facebook, a social media platform with a multilingual interface that is currently used worldwide, served as the experimental base for this research. By determining user environments suitable for the elderly, including web page accessibility, interface design and real social life transformation, this article proposes the factors for a social media website, the factors that lead the elderly to use social media platforms, a social media platform design that the elderly can use easily, and design factors suitable for the elderly.
15.
Elena García-Barriocanal Miguel-Angel Sicilia Miltiadis Lytras 《Computers in human behavior》2007,23(6):2641
The use of toolkits and reference frameworks for the design and evaluation of learning activities enables the systematic application of pedagogical criteria in the elaboration of learning resources and learning designs. Pedagogical classification as described in such frameworks is a major criterion for the retrieval of learning objects, since it serves to partition the space of available learning resources depending either on the pedagogical standpoint that was used to create them, or on the interpreted pedagogical orientation of their constituent learning contents and activities. However, pedagogical classification systems need to be evaluated to assess their quality with regard to providing a degree of inter-subjective agreement on the meaning of the classification dimensions they provide. Without such evaluation, classification metadata, which is typically provided by a variety of contributors, is at risk of only fuzzily reflecting the actual pedagogical orientations, thus hampering the effective retrieval of resources. This paper describes a case study that evaluates the general pedagogical dimensions proposed by Conole et al. to classify learning resources. Rater agreement techniques are used for the assessment, which is proposed as a general technique for the evaluation of this kind of classification schema. The case study evaluates the degree of coherence of the pedagogical dimensions proposed by Conole et al. as an objective instrument to classify pedagogical resources. In addition, the technical details on how to integrate such classifications in learning object metadata are provided.
16.
Sullivan K.J. Kalet I.J. Notkin D. 《IEEE Transactions on Software Engineering》1996,22(8):563-579
A software engineer's confidence in the profitability of a novel design technique depends to a significant degree on previous demonstrations of its profitability in practice. Trials of proposed techniques are thus of considerable value in providing factual bases for evaluation. We present our experience with a previously presented design approach as a basis for evaluating its promise and problems. Specifically, we report on our use of the mediator method to reconcile tight behavioral integration with ease of development and evolution of Prism, a system for planning radiation treatments for cancer patients. Prism is now in routine clinical use in several major research hospitals. Our work supports two claims: in comparison to more common design techniques, the mediator approach eases the development and evolution of integrated systems; and the method can be learned and used profitably by practising software engineers.
17.
18.
Lecture recordings are increasingly used to supplement lecture attendance within higher education, but their impact on student learning remains unclear. Here we describe a study to evaluate student use of lecture recordings and quantify their impact on academic performance. Questionnaire responses and online monitoring of students' access to recordings indicate that ∼75% of students use this material, the majority in a targeted manner. In contrast, a small subset of students (∼5%) is highly dependent on recordings, downloading every lecture and viewing the material for long periods, such that it represents a large proportion of their independent study. This 'high user' group is atypical, as it contains a high proportion of dyslexic and non-English-speaking-background students. Despite high usage, lecture recordings do not have a significant impact on academic performance, either across the cohort or among the students who use the recordings. Overall, this approach appears to be beneficial, but may reduce lecture attendance and encourage surface learning approaches in a minority of students.
19.
Michael J. Spier 《International journal of parallel programming》1975,4(2):133-149
The domain concept, discussed in an earlier paper, is essentially no more than a generalization of the classical operating system monitor. It is argued that domain machines may be put to immediate practical use in order to further and enhance the modularity and reliability of general software. By providing an arbitrary number of monitor-like protective structures, the monitor's proven advantages of database protection and controlled procedure entry points may be applied, within the domain machine's run-time environment, at a much finer level of modular resolution. Examples are given to demonstrate the domain architecture's ability to interdict (and intercept!) various software error conditions. It is suggested that in view of present economic realities (increasingly expensive software, ever less expensive hardware), the potential improvement of software quality through use of a more sophisticated hardware base may be worth considering by the industry.

This paper is a statement of the author's personal position, which is not necessarily that of Digital Equipment Corporation. It may not be construed to imply any product commitment by Digital Equipment Corporation.