Similar Documents
20 similar documents found (search time: 31 ms)
1.
Model-driven Engineering (MDE) has attained great importance in both the software engineering industry and the research community, where it is now widely used as an approach to improving productivity when developing software artefacts. In this scenario, measurement models (themselves software artefacts) have become fundamental to improving productivity, and MDE and software measurement can reap mutual benefits: MDE principles and techniques can be used in software measurement to build more automatic and generic solutions, for which the ability to develop software measurement models is essential. To facilitate this task, a domain-specific language named “Software Measurement Modelling Language” (SMML) has been developed. This paper tackles the question of whether the use of SMML can assist in the definition of software measurement models. An empirical study was conducted to verify whether SMML makes it easier to construct measurement models that are more usable and maintainable than those built with a textual notation. The results show that models which do not use the language are more difficult—in terms of effort, correctness and efficiency—to understand and modify than those represented with SMML. Additional feedback was also obtained to verify the suitability of the graphical representation of each symbol (element or relationship) of SMML.

2.
We present a rich and highly dynamic technique for analyzing, visualizing, and exploring the execution traces of reactive systems. The two inputs are a designer’s inter-object scenario-based behavioral model, visually described using a UML2-compliant dialect of live sequence charts (LSC), and an execution trace of the system. Our method allows one to visualize, navigate through, and explore the activation and progress of the scenarios as they “come to life” during execution. Thus, a concrete system’s runtime is recorded and viewed through the abstractions provided by the behavioral models used for its design, tying the visualization and exploration of system execution traces to model-driven engineering. We support both event-based and real-time-based tracing, and use details-on-demand mechanisms, multi-scaling grids, and gradient coloring methods. Novel model exploration techniques include semantics-based navigation, filtering, and trace comparison. The ideas are implemented and tested in a prototype tool called the Tracer.

3.
A software architecture is a key asset for any organization that builds complex software-intensive systems. Because of an architecture's central role as a project blueprint, organizations should analyze the architecture before committing resources to it. An analysis helps to ensure that sound architectural decisions are made. Over the past decade a large number of architecture analysis methods have been created, and at least two surveys of these methods have been published. This paper examines the criteria for analyzing architecture analysis methods, and suggests a new set of criteria that focus on the essence of what it means to be an architecture analysis method. These criteria could be used to compare methods, to help understand the suitability of a method, or to improve a method. We then examine two methods—the Architecture Tradeoff Analysis Method and Architecture-level Modifiability Analysis—in light of these criteria, and provide some insight into how these methods can be improved. Rick Kazman is a Senior Member of the Technical Staff at the Software Engineering Institute of Carnegie Mellon University and Professor at the University of Hawaii. His primary research interests are software architecture, design and analysis tools, software visualization, and software engineering economics. He also has interests in human-computer interaction and information retrieval. Kazman has created several highly influential methods and tools for architecture analysis, including the SAAM and the ATAM. He is the author of over 80 papers and co-author of several books, including “Software Architecture in Practice” and “Evaluating Software Architectures: Methods and Case Studies”. Len Bass is a Senior Member of the Technical Staff at the Software Engineering Institute (SEI). He has written two award-winning books in software architecture as well as several other books and numerous papers in a wide variety of areas of computer science and software engineering. He is currently working on techniques for the methodical design of software architectures and on understanding how to support usability through software architecture. He has been involved in the development of numerous production and research software systems ranging from operating systems to database management systems to automotive systems. Mark Klein is a Senior Member of the Technical Staff of the Software Engineering Institute. He has over 20 years of experience in research on various facets of software engineering, dependable real-time systems and numerical methods. Klein's most recent work focuses on the analysis of software architectures, architecture tradeoff analysis, attribute-driven architectural design and scheduling theory. Klein's work in real-time systems involved the development of rate monotonic analysis (RMA), the extension of the theoretical basis for RMA, and its application to realistic systems. Klein's earliest work involved research in high-order finite element methods for solving fluid flow equations arising in oil reservoir simulation. He is the co-author of two books: “A Practitioner's Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems” and “Evaluating Software Architecture: Methods and Case Studies”. Anthony J. Lattanze is an Associate Teaching Professor at the Institute for Software Research International (ISRI) at Carnegie Mellon University (CMU) and a senior member of the technical staff at the Software Engineering Institute (SEI).
Anthony teaches courses in CMU's Master of Software Engineering Program in Software Architecture, Real-Time/Embedded Systems, and Software Development Studio. His primary research interest is in the area of software architectural design for embedded, software-intensive systems. Anthony consults and teaches throughout industry in the areas of software architecture design and architecture evaluation. Prior to Carnegie Mellon, Mr. Lattanze was the Chief of Software Engineering for the Technology Development Group at the United States Flight Test Center at Edwards Air Force Base, CA. During his tenure at the Flight Test Center, he was involved with a number of software and systems engineering projects as a software and systems architect, project manager, and developer. During this time he was involved with the development, test, and evaluation of avionics systems for the B-2 Stealth Bomber, F-117 Stealth Fighter, and F-22 Advanced Tactical Fighter, among other systems. Linda Northrop is the director of the Product Line Systems Program at the Software Engineering Institute (SEI), where she leads the SEI work in software architecture, software product lines and predictable component engineering. Under her leadership the SEI has developed software architecture and product line methods that are used worldwide, a series of five highly acclaimed books, and software architecture and software product line curricula. She is co-author of the book “Software Product Lines: Practices and Patterns” and a primary author of the SEI Framework for Software Product Line Practice.

4.
We present “Pipe ’n Prune” (PnP), a new hybrid method for iceberg-cube query computation. The novelty of our method is that it achieves a tight integration of top-down piping for data aggregation with bottom-up a priori data pruning. A particular strength of PnP is that it is efficient for all of the following scenarios: (1) sequential iceberg-cube queries, (2) external-memory iceberg-cube queries, and (3) parallel iceberg-cube queries on shared-nothing PC clusters with multiple disks. We performed an extensive performance analysis of PnP for these scenarios, with the following main results: in the first scenario PnP performs very well for both dense and sparse data sets, providing an interesting alternative to BUC and Star-Cubing. In the second scenario PnP shows a surprisingly efficient handling of disk I/O, with an external-memory running time that is less than twice the running time for full in-memory computation of the same iceberg-cube query. In the third scenario PnP scales very well, providing near-linear speedup for larger numbers of processors and thereby solving the scalability problem observed for the parallel iceberg-cubes proposed by Ng et al. Research partially supported by the Natural Sciences and Engineering Research Council of Canada. A preliminary version of this work appeared in the International Conference on Data Engineering (ICDE’05).
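The iceberg-cube query PnP computes is easy to state: materialize every GROUP BY cell whose aggregate value clears a minimum-support threshold. The sketch below is not PnP itself; it is a minimal bottom-up (BUC-style) computation in Python that shows the a priori pruning PnP integrates with top-down piping: because COUNT is anti-monotone, a cell that fails the threshold lets us skip all of its refinements. The data, dimension indices and the `buc` helper are illustrative only.

```python
from itertools import groupby

def buc(rows, dims, min_count, cell=(), out=None):
    """Bottom-up iceberg-cube sketch: emit each group-by cell whose
    row count meets min_count. Anti-monotonicity of COUNT justifies
    the prune: a failing cell has no qualifying refinements."""
    if out is None:
        out = []
    if len(rows) < min_count:          # a priori prune
        return out
    out.append((cell, len(rows)))
    for i, d in enumerate(dims):
        for val, grp in groupby(sorted(rows, key=lambda r: r[d]),
                                key=lambda r: r[d]):
            buc(list(grp), dims[i + 1:], min_count,
                cell + ((d, val),), out)
    return out

# toy relation with two dimensions; the measure is the row count
rows = [("a", "x"), ("a", "x"), ("a", "y"), ("b", "x")]
for cell, count in buc(rows, [0, 1], min_count=2):
    print(cell, count)
```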

5.
A major obstacle in the technology transfer agenda of behavioral analysis and design methods is the need for logics or automata to express properties of control-intensive systems. Interaction-modeling notations may offer a replacement or a complement, with a practitioner-appealing and lightweight flavor, due partly to the under-specification of intended behavior by means of scenarios. We propose a novel approach consisting of engineering a new formal notation of this sort based on a simple, compact declarative semantics: VTS (visual timed event scenarios). Scenarios represent event patterns, graphically depicting conditions over traces. They predicate over general system events and provide features to describe complex properties not expressible with MSC-like notations. The underlying formalism supports partial orders and real-time constraints. The problem of checking whether a timed-automaton model has a matching trace is proven decidable. On top of this kernel, we introduce a notation to state properties over all system traces: conditional scenarios, allowing engineers to describe uniquely rich connections between the antecedent and consequent portions of a scenario. An undecidability result is presented for the general case of the model-checking problem over dense-time domains; we then identify a decidable, yet practically relevant, subclass where verification is solvable by generating antiscenarios expressed in the VTS kernel notation.
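As a rough illustration of what matching a scenario against a trace involves, the sketch below checks whether a timestamped event trace admits an assignment of pattern events that respects a given partial order and a real-time span bound. It is a naive exponential search over an explicit trace with hypothetical event labels; the paper's decidability result concerns matching against timed-automaton models, which this does not attempt.

```python
def matches(trace, pattern_events, orders, max_span):
    """Assign each pattern event to a distinct trace event with the
    same label, so that every (i, j) in `orders` (event i precedes
    event j) holds and all chosen events fit within max_span time."""
    candidates = [[k for k, (_, lbl) in enumerate(trace) if lbl == ev]
                  for ev in pattern_events]

    def assign(i, chosen):
        if i == len(pattern_events):
            times = [trace[k][0] for k in chosen]
            return (max(times) - min(times) <= max_span and
                    all(trace[chosen[a]][0] < trace[chosen[b]][0]
                        for a, b in orders))
        return any(assign(i + 1, chosen + [k])
                   for k in candidates[i] if k not in chosen)

    return assign(0, [])

trace = [(0.0, "req"), (1.2, "ack"), (3.0, "req"), (3.4, "done")]
# "req" before "ack" before "done", all within 4.0 time units
print(matches(trace, ["req", "ack", "done"], {(0, 1), (1, 2)}, 4.0))
```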

6.
7.
In October 2002 I attended the Ninth Monterey Software Engineering Workshop, held in Venice, Italy. This year’s theme was “Radical Innovations of Software and Systems Engineering in the Future.” In preparing my talk for the workshop, I thought hard about what I could possibly say on this topic that would not sound stupid. I certainly thought it would be awfully presumptuous of me to predict how people will or should be developing software in the future. More easily, I could imagine what the systems of tomorrow will look like and who will be developing them, though anything I would say would sound like platitudes. I could also state some strong opinions about what matters and what doesn’t in the process of software development. Stating such attitudes would at least provoke some discussion. Hence, what follows captures some of what I said at the workshop. Published online: 10 April 2003

8.
In this paper, an interactive learning algorithm for context-free languages is presented. The algorithm is designed especially for SAQ, a system for formal specification acquisition and verification. As the kernel of the concept acquisition subsystem (SAQ/CL) of SAQ, the algorithm has been implemented on a SUN SPARC workstation. The grammar obtained can represent sentence structure naturally.

9.
UML (Unified Modeling Language) is a standard design notation which offers the state machine diagram to specify reactive software systems. The “Modeling and Analysis of Real-Time and Embedded systems” profile (MARTE) equips UML with capabilities for performance analysis. MARTE has in turn been specialized into a “Dependability Analysis and Modeling” profile (DAM), providing UML with dependability assets. In this work, we propose an approach for the automatic transformation of UML-DAM models into Deterministic and Stochastic Petri nets and the subsequent dependability analysis.
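As a hint of what such a model-to-model transformation does, the sketch below maps a toy state machine (states, an initial state, rate-annotated transitions) onto Petri-net places, an initial marking and net transitions: one place per state, one net transition per state-machine transition. The function and its inputs are hypothetical, and the real UML-DAM-to-DSPN transformation rules are considerably richer than this mapping.

```python
def statemachine_to_petrinet(states, initial, transitions):
    """One place per state (a token marks the current state); one net
    transition per state-machine transition, carrying its rate/delay
    annotation for deterministic or stochastic firing."""
    marking = {s: (1 if s == initial else 0) for s in states}
    net = [{"name": f"{src}--{trig}-->{dst}",
            "input": src, "output": dst, "rate": rate}
           for (src, trig, dst, rate) in transitions]
    return marking, net

marking, net = statemachine_to_petrinet(
    states=["Idle", "Busy", "Failed"],
    initial="Idle",
    transitions=[("Idle", "job", "Busy", 2.0),
                 ("Busy", "done", "Idle", 1.0),
                 ("Busy", "fault", "Failed", 0.01)],
)
print(marking)
print(net)
```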

10.
In this article, a new UML extension for the specification of hybrid systems, where observables may consist of both discrete and time-continuous parameters, is presented. Whereas hybrid modeling constructs are not available in standard UML, several specification formalisms for this type of system have been elaborated and discussed, among them the CHARON language of Alur et al., which already possesses several attractive features for modeling embedded real-time systems with hybrid characteristics. Adopting this as a basis, the profile inherits formal semantics from CHARON, so it offers the possibility of formal reasoning about hybrid UML specifications. Conversely, the CHARON framework gains a new syntactic representation within the UML 2.0 world, allowing hybrid specifications to be developed with arbitrary CASE tools supporting UML 2.0 and its profiling mechanism. The “look-and-feel” of the profile is illustrated by means of a case study of an embedded system controlling the cabin illumination in an aircraft. The benefits and weaknesses of the constructed hybrid UML profile are discussed, resulting in feedback for the improvement of both UML 2.0 and the CHARON formalism. The work presented in this article has been investigated by the authors in the context of the HYBRIS (Efficient Specification of Hybrid Systems) project supported by the Deutsche Forschungsgemeinschaft DFG as part of the priority programme on Software Specification - Integration of Software Specification Techniques for Applications in Engineering.

11.
As Model-Driven Development (MDD) and Product Line Engineering (PLE) emerge as major trends for reducing software development complexity and costs, an important missing stone becomes more visible: there are no standard, reusable assets for packaging the know-how and artifacts required when applying these approaches. To overcome this limit, we introduce in this paper the notion of the MDA Tool Component, i.e., a packaging unit for encapsulating business know-how and required resources in order to support specific modeling activities on a certain kind of model. The aim of this work is to provide a standard way of representing this know-how packaging unit. This is done by introducing a two-layer MOF-compliant metamodel. Whilst the first layer focuses on the definition of the structure and contents of the MDA Tool Component, the second layer introduces a language-independent way of describing its behavior. An OMG RFP (Request For Proposal) has been issued in order to standardize this approach. This work is supported in part by the IST European project “MODELWARE” (contract no 511731) and extends the work presented in the paper entitled “MDA Components: A Flexible Way for Implementing the MDA Approach”, published in the proceedings of the ECMDA-FA’05 conference.

12.
“There will always (I hope) be print books, but just as the advent of photography changed the role of painting or film changed the role of theater in our culture, electronic publishing is changing the world of print media. To look for a one-to-one transposition to the new medium is to miss the future until it has passed you by.”—Tim O’Reilly (2002). It is not hard to envisage that publishers will leverage subscribers’ information, interest groups’ shared knowledge and other sources to enhance their publications. While this enhances the value of the publication through more accurate and personalized content, it also brings a new set of challenges to the publisher. Content is now driven by the web in a truly automated system; that is, no designer “re-touch” intervention is envisaged. This paper introduces an exploratory mapping strategy for allocating web-driven content in a highly graphical publication such as a traditional magazine. Two major aspects of the mapping are covered, which enable different levels of flexibility and address different content-flowing strategies. The final contribution is an evaluation of existing standards which could potentially leverage this work to incorporate flexible mapping and, subsequently, composition capabilities. The work published here is an extended version of the article presented at the Eighth ACM Symposium on Document Engineering in fall 2008 (Giannetti 2008).

13.
Editorial     
In June 1999, the first International Workshop on Integrated Formal Methods was held at the University of York in the UK. The primary aim of the workshop was the combination of behavioural and state-based formalisms to yield practical solutions to industrial problems. The workshop proceedings were edited by Keijiro Araki, Andy Galloway and Kenji Taguchi and are available as “IFM99” (ISBN 1-85233-107-0, published by Springer). After the workshop, selected authors were invited to develop journal versions of their papers, incorporating further extensions, corrections and revisions. This was arranged by Andy Galloway, who then passed the papers to the journal for refereeing. And here we must record our sincere thanks to Andy. Without his efforts this issue of the journal would simply not have been possible. Following reports from referees and senior colleagues from the editorial board (and the withdrawal of one submission for publication in a book), five papers were accepted for publication here. We hope that those rejected will be further revised and resubmitted; they contained good work but required further development. The common theme is, predictably, the marrying of component specification with control of the interconnecting system, and the first four papers all present variations on the theme of Z + CSP. Sühl adds an additional structuring mechanism to Z and CSP and targets his application area as real-time embedded systems, whereas Derrick and Boiten use Object-Z to give partial specifications (viewpoints) which are then combined using CSP. Smith and Hayes describe Real-Time Object-Z, which results from the integration of Object-Z with timed traces, and Mahony and Dong investigate the formal underpinning required to combine Timed CSP and Object-Z by means of a trace model. The final paper, by Große-Rhode, introduces and illustrates a mechanism for checking the compatibility of different partial specifications and for coping with composite specifications in which different formalisms have been used. Whether this area of formal methods research merely allows us to integrate different, more ‘appropriate’, specification languages or gives rise to new hybrid languages remains to be seen. What is certain is that there are still many problems to be tackled and technology to be transferred.

14.
Software architecture evaluation involves evaluating different architecture design alternatives against multiple quality attributes. These attributes typically have intrinsic conflicts and must be considered simultaneously in order to reach a final design decision. AHP (the Analytic Hierarchy Process), an important decision-making technique, has been leveraged to resolve such conflicts. AHP can help provide an overall ranking of design alternatives. However, it lacks the capability to explicitly identify the exact tradeoffs being made and the relative size of these tradeoffs. Moreover, the ranking produced can be sensitive: the smallest change in intermediate priority weights can alter the final order of design alternatives. In this paper, we propose several in-depth analysis techniques applicable to AHP to identify critical tradeoffs and sensitive points in the decision process (a minimal sketch of the core AHP computation follows this entry). We apply our method to an example of a real-world distributed architecture presented in the literature. The results are promising in that they make important decision consequences explicit in terms of key design tradeoffs and the architecture's capability to handle future quality attribute changes. These expose critical decisions which are otherwise too subtle to be detected in standard AHP results. Liming Zhu is a PhD candidate in the School of Computer Science and Engineering at the University of New South Wales. He is also a member of the Empirical Software Engineering Group at National ICT Australia (NICTA). He obtained his BSc from Dalian University of Technology in China. After moving to Australia, he obtained his MSc in computer science from the University of New South Wales. His principal research interests include software architecture evaluation and empirical software engineering. Aybüke Aurum is a senior lecturer at the School of Information Systems, Technology and Management, University of New South Wales. She received her BSc and MSc in geological engineering, and MEngSc and PhD in computer science. She also works as a visiting researcher at National ICT Australia (NICTA). Dr. Aurum is one of the editors of the books “Managing Software Engineering Knowledge”, “Engineering and Managing Software Requirements” and “Value-Based Software Engineering”. Her research interests include management of the software development process, software inspection, requirements engineering, decision making and knowledge management in software development. She is on the editorial boards of the Requirements Engineering Journal and the Asian Academy Journal of Management. Ian Gorton is a Senior Researcher at National ICT Australia. Until March 2004 he was Chief Architect in Information Sciences and Engineering at the US Department of Energy's Pacific Northwest National Laboratory. Previously he worked at Microsoft and IBM, as well as in other research positions. His interests include software architectures, particularly those for large-scale, high-performance information systems that use commercial off-the-shelf (COTS) middleware technologies. He received a PhD in Computer Science from Sheffield Hallam University. Dr. Ross Jeffery is Professor of Software Engineering in the School of Computer Science and Engineering at UNSW and Program Leader in Empirical Software Engineering in National ICT Australia Ltd. (NICTA).
His current research interests are in software engineering process and product modeling and improvement, electronic process guides and software knowledge management, software quality, software metrics, software technical and management reviews, and software resource modeling and estimation. His research has involved over fifty government and industry organizations over a period of 15 years and has been funded by industry, government and universities. He has co-authored four books and over one hundred and twenty research papers. He has served on the editorial board of the IEEE Transactions on Software Engineering and the Wiley International Series in Information Systems, and he is Associate Editor of the Journal of Empirical Software Engineering. He is a founding member of the International Software Engineering Research Network (ISERN). He was elected a Fellow of the Australian Computer Society for his contribution to software engineering research.
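For readers unfamiliar with AHP, the computation the entry above builds on derives priority weights from pairwise-comparison matrices (via the principal eigenvector) and aggregates them into an overall ranking of alternatives. A minimal sketch with made-up comparison values, two quality attributes and two design alternatives:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights = normalized principal eigenvector of a
    positive reciprocal pairwise-comparison matrix."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

# hypothetical judgment: attribute 1 is 3x as important as attribute 2
attr_w = ahp_weights([[1, 3], [1/3, 1]])
# alternatives compared under each attribute (also made up)
alt_scores = [ahp_weights([[1, 5], [1/5, 1]]),    # under attribute 1
              ahp_weights([[1, 1/2], [2, 1]])]    # under attribute 2
overall = sum(w * s for w, s in zip(attr_w, alt_scores))
print(overall)   # overall ranking of the two alternatives
```

The sensitivity issue the authors analyze shows up here as the question of how far `attr_w` can drift before the order of `overall` flips.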

15.
This paper uses techniques from formal language theory to describe the linear spatial patterns in urban freeway traffic flows, in order to understand and analyze “hidden order” in such high-volume systems. A method for measuring randomness based on algorithmic entropy is introduced and developed. These concepts are operationalized using Pincus’ approximate entropy formulation in an appropriate illustration. These measures, which may be viewed as counterintuitive, are believed to offer robust and rigorous guidance to enhance the overall understanding of efficiency in urban freeway traffic systems. Utilization of such measures should be facilitated by information generated by real-time intelligent transportation systems (ITS) technologies and may prove helpful in real-time traffic flow management. An earlier version of this paper was presented at the Fifth Joint Conference on Computing and Information Sciences, February 2000, Atlantic City, NJ. The authors appreciate the support of NSF/EPA Grant #SES-9976483 “Social Vulnerability Analysis” and NSF Grant #ECS-0085981 “Road Transportation as a Complex Adaptive System”, as well as the School of Public Policy’s USDOT Center of Excellence in Evaluation and Implementation funded under DOT Grant #DTRS98-G-0013. Any errors are the responsibility of the authors.
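Pincus' approximate entropy admits a compact direct implementation: ApEn(m, r) = Φ_m(r) − Φ_{m+1}(r), where Φ_m is the average log-frequency with which length-m windows of the series stay within tolerance r of one another. The sketch below uses conventional defaults (m = 2, an ad hoc r); the paper's own parameter choices and traffic data are not reproduced.

```python
import math, random

def approx_entropy(series, m=2, r=0.2):
    """ApEn(m, r): near zero for regular series, larger for irregular
    ones. Window distance is the max absolute componentwise gap."""
    def phi(m):
        n = len(series) - m + 1
        windows = [series[i:i + m] for i in range(n)]
        freqs = [sum(1 for v in windows
                     if max(abs(a - b) for a, b in zip(w, v)) <= r) / n
                 for w in windows]
        return sum(math.log(f) for f in freqs) / n
    return phi(m) - phi(m + 1)

print(approx_entropy([0, 1] * 20))    # alternating: near 0 (regular)
random.seed(1)
print(approx_entropy([random.random() for _ in range(40)]))  # larger
```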

16.
Automatic filling of a language knowledge base during analysis of the sense equivalence of statements is considered, in the framework of an original approach based on a theory that represents language as a sense-to-text converter. Gennadii M. Emel’yanov. Born 1943. Graduated from the Leningrad Institute of Electrical Engineering in 1966. Received candidate’s degree in 1971 and doctoral degree in 1990. Head of the Department of Computer Software for Computer Devices and Computerized Systems at Novgorod State University. Scientific interests: construction of problem-oriented computer systems for image processing and analysis. Dmitrii V. Mikhailov. Born 1974. Graduated from Novgorod State University in 1997. Received candidate’s degree in 2003. Staff member of the Department of Computer Software for Computer Devices and Computerized Systems at Novgorod State University. Member of the Russian Association of Pattern Recognition and Image Analysis since 2002. Scientific interests: computer linguistics and artificial intelligence. Author of 15 papers.

17.
Erik Hollnagel’s body of work in the past three decades has molded much of the current research approach to system safety, particularly notions of “error”. Hollnagel regards “error” as a dead-end and avoids using the term. This position is consistent with Rasmussen’s claim that there is no scientifically stable category of human performance that can be described as “error”. While this systems view is undoubtedly correct, “error” persists. Organizations, especially formal business, political, and regulatory structures, use “error” as if it were a stable category of human performance. They apply the term to performances associated with undesired outcomes, tabulate occurrences of “error”, and justify control and sanctions through “error”. Although a compelling argument can be made for Hollnagel’s view, it is clear that notions of “error” are socially and organizationally productive. The persistence of “error” in management and regulatory circles reflects its value as a means for social control.

18.
The engineering of distributed adaptive software is a complex task which requires a rigorous approach. Software architectural (structural) concepts and principles are highly beneficial in specifying, designing, analysing, constructing and evolving distributed software. A rigorous architectural approach dictates formalisms and techniques that are compositional, components that are context independent and systems that can be constructed and evolved incrementally. This paper overviews some of the underlying reasons for adopting an architectural approach, including a brief “rational history” of our research work, and indicates how an architectural model can potentially facilitate the provision of self-managed adaptive software systems. Much of the research has been supported by the Engineering and Physical Sciences Research Council and is currently partly supported by EPSRC Platform grant AEDUS 2 and a DTC grant.

19.
This paper describes the application of a backpropagation artificial neural network (ANN) for charting the behavioural state of previously unseen persons. In a simulated theft scenario, participants stole or did not steal some money and were interviewed about the location of the money. A video of each interview was presented to an automatic system, which collected vectors containing nonverbal behaviour data. Each vector represented a participant’s nonverbal behaviour related to “deception” or “truth” for a short period of time. These vectors were used for training and testing a backpropagation ANN, which was subsequently used for charting the behavioural state of previously unseen participants. Although behaviour related to “deception” or “truth” is charted here, the same strategy can be used to chart different psychological states over time and can be tuned to particular situations, environments and applications. We thank those who kindly volunteered to participate in the study.
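As a sketch of the kind of classifier described (not the authors' network, features or data), the following trains a one-hidden-layer network with plain batch backpropagation on synthetic stand-ins for the behaviour vectors, labelled 1 for “deception” and 0 for “truth”. The architecture, learning rate and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy stand-ins for per-time-window nonverbal-behaviour vectors
X = rng.normal(size=(200, 8))
y = (X[:, :2].sum(axis=1) > 0).astype(float).reshape(-1, 1)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
W1 = rng.normal(scale=0.5, size=(8, 6)); b1 = np.zeros(6)
W2 = rng.normal(scale=0.5, size=(6, 1)); b2 = np.zeros(1)

for _ in range(2000):                  # batch backpropagation
    h = sigmoid(X @ W1 + b1)           # hidden layer
    p = sigmoid(h @ W2 + b2)           # output: P("deception")
    dz2 = (p - y) / len(X)             # cross-entropy gradient at output
    dW2, db2 = h.T @ dz2, dz2.sum(axis=0)
    dz1 = (dz2 @ W2.T) * h * (1 - h)   # backpropagated hidden gradient
    dW1, db1 = X.T @ dz1, dz1.sum(axis=0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 1.0 * grad            # fixed learning rate

print("training accuracy:", ((p > 0.5) == y).mean())
```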

20.
The traditional network management approach involves managing each vendor's equipment and network segment in isolation through its own proprietary element management system. It is necessary to set up a new network management architecture that allows operation consolidation across vendor and technology boundaries. In this paper, an architectural model for Intelligent Network Management (INM) is presented. The INM system includes a manager system, which controls all subsystems and coordinates the different management tasks; an expert system, which is responsible for handling particularly difficult problems; and intelligent agents, which bring management closer to applications and user requirements by being spread through network segments or domains. Within the proposed expert system model, an intelligent fault management system in particular is given. The architectural model is intended to build INM systems that meet the needs of managing modern networks.
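The three roles the entry describes (a coordinating manager, per-segment intelligent agents, and an expert system for escalated problems) can be sketched structurally as below. All class and method names are hypothetical; the sketch shows only the control flow between the roles, not any real management logic.

```python
class ExpertSystem:
    """Handles the particularly difficult problems agents escalate."""
    def diagnose(self, fault):
        return f"expert diagnosis of {fault!r}"

class IntelligentAgent:
    """Sits in one network segment, close to applications and users."""
    def __init__(self, segment, escalate):
        self.segment, self.escalate = segment, escalate
    def handle(self, fault, severity):
        if severity > 7:               # too hard locally: escalate
            return self.escalate(fault)
        return f"agent[{self.segment}] resolved {fault!r}"

class Manager:
    """Controls all subsystems and coordinates management tasks."""
    def __init__(self, segments):
        expert = ExpertSystem()
        self.agents = {s: IntelligentAgent(s, expert.diagnose)
                       for s in segments}
    def report(self, segment, fault, severity):
        return self.agents[segment].handle(fault, severity)

mgr = Manager(["core", "edge", "access"])
print(mgr.report("edge", "link flap", severity=3))
print(mgr.report("core", "routing storm", severity=9))
```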
