Similar Documents
 20 similar documents retrieved (search time: 15 ms)
1.
Software systems must change to adapt to new functional and nonfunctional requirements. According to Lehman's laws of software evolution, on the one hand, the size and complexity of a software system will continually increase over its lifetime; on the other hand, the quality of a software system will decrease unless it is rigorously maintained and adapted. Lehman's laws of software evolution, especially those on software size and complexity, have been widely validated. However, there are few empirical studies of Lehman's law on software quality evolution, despite the fact that quality is one of the most important measurements of a software product. This paper defines a metric, accumulated defect density, to measure the quality of evolving software systems. We mine the bug reports and measure the size and complexity growth of four evolution lines of the Apache Tomcat and Apache Ant projects. Based on these studies, Lehman's law on software quality evolution is examined and evaluated.
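The accumulated-defect-density metric described in this abstract can be sketched as a simple computation: the cumulative number of defects reported up to a release, divided by the system's size at that release. The release figures below are invented for illustration; the paper itself mines real bug reports from Tomcat and Ant.

```python
# Sketch of an accumulated defect density metric (hypothetical data):
# for each release, divide the cumulative number of defects reported
# so far by the size (KLOC) of that release.
from itertools import accumulate

def accumulated_defect_density(defects_per_release, size_kloc_per_release):
    """Return one density value per release: cumulative defects / current size."""
    cumulative = accumulate(defects_per_release)
    return [d / s for d, s in zip(cumulative, size_kloc_per_release)]

# Invented evolution line: defects found per release, and size in KLOC.
defects = [120, 80, 60, 90]
sizes = [100.0, 130.0, 150.0, 180.0]
print(accumulated_defect_density(defects, sizes))
```

Under this definition, a rising sequence would indicate quality decay outpacing growth, which is the kind of trend the paper checks against Lehman's quality law.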

2.
Web software applications have become complex, sophisticated programs that are based on novel computing technologies. Their most essential characteristic is that they represent a different kind of software deployment—most of the software is never delivered to customers’ computers, but remains on servers, allowing customers to run the software across the web. Although powerful, this deployment model brings new challenges to developers and testers. Checking static HTML links is no longer sufficient; web applications must be evaluated as complex software products. This paper focuses on three aspects of web applications that are unique to this type of deployment: (1) an extremely loose form of coupling that features distributed integration, (2) the ability that users have to directly change the potential flow of execution, and (3) the dynamic creation of HTML forms. Taken together, these aspects allow the potential control flow to vary with each execution, so the possible control flows cannot be determined statically, prohibiting several standard analysis techniques that are fundamental to many software engineering activities. This paper presents a new way to model web applications, based on software couplings that are new to web applications, dynamic flow of control, distributed integration, and partial dynamic web application development. This model is based on the notion of atomic sections, which allow analysis tools to build the analog of a control flow graph for web applications. The atomic section model supports numerous analyses; this paper applies the model to the problem of testing web applications.
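The paper's central difficulty—that user actions make control flow vary per execution—can be illustrated with a toy graph of page transitions. The page names and transition kinds below are hypothetical; the actual atomic-section model distinguishes several coupling types that this sketch collapses into simple edges.

```python
# Toy analog of a web-application transition graph: nodes stand in for
# atomic sections (statically known page chunks), edges for possible
# transitions (links, form posts). Because users can redirect the flow,
# only *potential* control flows can be enumerated, bounded here by a
# maximum number of transitions. All names are invented.
transitions = {
    "login":   [("link", "search"), ("post", "login")],
    "search":  [("post", "results")],
    "results": [("link", "search"), ("link", "login")],
}

def potential_flows(start, max_edges):
    """Enumerate potential control-flow paths with at most max_edges transitions."""
    flows, stack = [], [[start]]
    while stack:
        path = stack.pop()
        flows.append(tuple(path))
        if len(path) - 1 < max_edges:
            for _kind, dst in transitions.get(path[-1], []):
                stack.append(path + [dst])
    return flows

print(sorted(potential_flows("login", 2)))
```

Even this tiny example shows the combinatorial growth of potential flows that makes static analysis of web applications hard.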

3.
ABSTRACT

Over the past 20 years, software has evolved from monolithic, stove-piped applications to services that communicate with other distributed components over communications networks. The rise in popularity of Service-oriented Architecture (SOA) and web services has presented unique challenges for securely conveying the identity of end users at every point, especially when mashups, Web service composition and orchestration solutions combine multiple distributed components throughout a network, and where each component may need to know the identity of the end user. Over the past decade, many U.S. government projects have embraced SOA, have identified security risks with certain types of identity propagation, and have built solutions for mitigating the risks. This paper focuses on identity propagation in Web service transactions and describes how several early SOA-based projects utilized “transitive trust” approaches. We categorize the security risks found and describe how these projects minimized or mitigated the risks. Finally, we discuss approaches used in current projects and provide guidance for future implementations.
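The risk with pure transitive trust is that each downstream service simply believes whatever identity the previous hop forwards. A minimal sketch of the alternative—a verifiable end-user assertion—is shown below; the shared key and token format are invented for illustration and are not the mechanisms used by the projects in the paper.

```python
# Sketch: propagating an end-user identity across service hops.
# With pure "transitive trust", hop B believes whatever identity hop A
# claims. A signed assertion instead lets every hop check that the
# identity originated at the trusted authentication point.
# The pre-shared key and "user:signature" format are invented.
import hmac
import hashlib

SHARED_KEY = b"demo-key-distributed-out-of-band"  # assumption: pre-shared

def issue_assertion(user):
    """Authentication point signs the end-user identity."""
    sig = hmac.new(SHARED_KEY, user.encode(), hashlib.sha256).hexdigest()
    return f"{user}:{sig}"

def verify_assertion(token):
    """Any hop verifies the signature instead of trusting the caller."""
    user, sig = token.rsplit(":", 1)
    expected = hmac.new(SHARED_KEY, user.encode(), hashlib.sha256).hexdigest()
    return user if hmac.compare_digest(sig, expected) else None

print(verify_assertion(issue_assertion("alice")))   # legitimate identity
print(verify_assertion("mallory:forged-signature")) # forged hop-to-hop claim
```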

4.
陈熹  周军 《中国图象图形学报》2005,10(11):1402-1405
为实现软件控制IP传送(IP transm ission,IP-TS)流组播汇聚,在分析交换板的接口函数代码后进行了基于简单对象访问协议(simp le ob ject access protocol,SOAP)的组播控制模块的开发、MySQL数据库建模,并编写了超文本预处理器(hypertext preprocessor,PHP)网络页面脚本。该软件应用于实际硬件系统中,能够使用户通过网页操作发出配置指令来完成组播IP-TS流的汇聚控制,而基于SOAP协议的软件则能够在分布式的环境下较稳定地实现控制功能。  相似文献   

5.
ABSTRACT

Web 2.0 defines a changing trend in the use of World Wide Web application development and web design technology. Web 2.0 design concepts have led to the evolution of a web culture that has allowed social networking and easy-to-build but non-secure component applications to enter the business domain of the enterprise. These Web 2.0 component applications are then commingled with other business legacy applications, including databases. This article focuses on the taxonomy of the injection class of vulnerabilities associated with Web 2.0 application security issues.
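The injection class the article categorizes is easy to demonstrate with its best-known member, SQL injection, when a component application concatenates user input into a query against a commingled legacy database. The sketch below uses an in-memory SQLite database for illustration.

```python
# Sketch of the injection vulnerability class: string-built SQL versus a
# parameterized query. In-memory SQLite stands in for the legacy database.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: user input concatenated into the query text, so the
# tautology OR '1'='1' matches every row.
leaked = db.execute(
    "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Safe: input passed as a bound parameter, never parsed as SQL.
safe = db.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(leaked)  # [('s3cret',)] -- the injection leaked the secret
print(safe)    # [] -- no user is literally named that string
```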

6.
Context: Event-Driven Software (EDS) is a class of software whose behavior is driven by incoming events. Web and desktop applications that respond to user-initiated events on their Graphical User Interface (GUI), or embedded software responding to events and signals received from the equipment in its operating environment, are examples of EDS. Testing EDS poses great challenges to software testers. One of these challenges is the need to generate a huge number of possible event sequences that could sufficiently cover the EDS’s state space. Objective: In this paper, we introduce a new six-stage testing procedure for event-driven web applications to overcome EDS testing challenges. Method: The stages of the testing procedure include dividing the application based on its structure, creating functional graphs for each section, creating mutants from functional graphs, choosing coverage criteria to produce test paths, merging event sequences to make longer ones, and deriving and running test cases. We have analyzed our proposed testing procedure with the help of four metrics: Fault Detection Density (FDD), Fault Detection Effectiveness (FDE), Mutation Score, and Unique Fault. Results: Using this procedure, we prepared prioritized test cases and discovered a list of unique faults by running the suggested test cases on a sample real-world web application called Academic E-mail System. Conclusion: We propose that our suggested testing procedure has some advantages, such as creating functional graphs from the requirements document and resolving the problem of removing infeasible test cases with these graphs and conditions on the “add edge” operator before creating mutants. But the suggested testing procedure, like any other method, has some drawbacks: because most of the stages in the approach are performed manually, testing time increases.
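One stage of the procedure—choosing a coverage criterion to produce test paths from a functional graph—can be sketched for the simplest criterion, all-edges coverage. The event graph below is hypothetical and the greedy strategy is one of many possible ways to satisfy the criterion; it is not the paper's algorithm.

```python
# Sketch: deriving test paths from a functional (event) graph under an
# all-edges coverage criterion, via a simple greedy walk. Event names
# are invented; real criteria and graphs are richer.
def edge_coverage_paths(graph, start):
    """Walk greedily from `start`, beginning a new path whenever stuck,
    until every edge has been covered at least once."""
    uncovered = {(s, d) for s, dsts in graph.items() for d in dsts}
    paths = []
    while uncovered:
        node, path = start, [start]
        while True:
            nxt = [d for d in graph.get(node, []) if (node, d) in uncovered]
            if not nxt:
                break
            uncovered.discard((node, nxt[0]))
            node = nxt[0]
            path.append(node)
        if len(path) == 1:
            break  # remaining edges are unreachable from start
        paths.append(path)
    return paths

events = {"login": ["inbox"], "inbox": ["compose", "logout"], "compose": ["inbox"]}
print(edge_coverage_paths(events, "login"))
```

Each returned path is an event sequence that a derived test case would replay against the application.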

7.
《Ergonomics》2012,55(8):1600-1616
Abstract

Theory and dynamics in ergonomics (ergodynamics) is proposed for the ergonomic design, improvement, and study of dynamic work in a dynamic environment. In a narrower sense, ergodynamics is an application of transformation dynamics theory in ergonomics. Ergodynamics is suggested to rest on three laws: (1) mutual adaptation; (2) plurality of work functional structures; and (3) transformations. Practical applications of ergodynamics are discussed with regard to ergonomic design and mutual adaptation in human-machine-environment systems, professional training, control-system safety, and the prediction, planning, and optimization of efficiency dynamics during the implementation of ergonomic and engineering projects and during the transfer of new technologies, environments, work functions and skills, management structures, software, products, and changing economic strategies. Ergodynamics helps to predict and minimize losses in productivity, quality, and safety during all these transformation periods.

8.
Context: The web has had a significant impact on all aspects of our society. As our society relies more and more on the web, the dependability of web applications has become increasingly important. To make these applications more dependable, for the past decade researchers have proposed various techniques for testing web-based software applications. Our literature search for related studies retrieved 193 papers in the area of web application testing, which appeared between 2000 and 2013. Objective: As this research area matures and the number of related papers increases, it is important to systematically identify, analyze, and classify the publications and provide an overview of the trends and empirical evidence in this specialized field. Methods: We systematically review the body of knowledge related to functional testing of web applications through a systematic literature review (SLR) study. This SLR is a follow-up and complementary study to a recent systematic mapping (SM) study that we conducted in this area. As part of this study, we pose three sets of research questions, define selection and exclusion criteria, and synthesize the empirical evidence in this area. Results: Our pool of studies includes a set of 95 papers (from the 193 retrieved papers) published in the area of web application testing between 2000 and 2013. The data extracted during our SLR study is available through a publicly accessible online repository. Among our results are the following: (1) the list of test tools in this area and their capabilities, (2) the types of test models and fault models proposed in this domain, (3) the way the empirical studies in this area have been designed and reported, and (4) the state of empirical evidence and industrial relevance. Conclusion: We discuss the emerging trends in web application testing and their implications for researchers and practitioners in this area. The results of our SLR can help researchers obtain an overview of existing web application testing approaches, fault models, tools, metrics, and empirical evidence, and subsequently identify areas in the field that require more attention from the research community.

9.
Context: Coping with rapid requirements change is crucial for staying competitive in the software business. Frequently changing customer needs and fierce competition are typical drivers of rapid requirements evolution, resulting in requirements obsolescence even before project completion. Objective: Although the obsolete requirements phenomenon and the implications of not addressing it are known, there is a lack of empirical research dedicated to understanding the nature of obsolete software requirements and their role in requirements management. Method: In this paper, we report results from an empirical investigation with 219 respondents aimed at investigating the phenomenon of obsolete software requirements. Results: Our results include, but are not limited to, a definition of the phenomenon of obsolete software requirements, an investigation of how such requirements are handled in industry today, and an assessment of their potential impact. Conclusion: We conclude that obsolete software requirements constitute a significant challenge for companies developing software-intensive products, in particular in large projects, and that companies rarely have processes for handling obsolete software requirements. Further, our results call for future research on automated methods for obsolete software requirements identification and management, methods that could enable efficient obsolete software requirements management in large projects.

10.
11.
Like all software systems, databases are subject to evolution as time passes. The impact of this evolution can be vast, as a change to the schema of a database can affect the syntactic correctness and the semantic validity of all the surrounding applications. In this paper, we have performed a thorough, large-scale study on the evolution of databases that are part of larger open source projects, publicly available through open source repositories. Lehman's laws of software evolution, a well-established set of observations on how typical software systems evolve (matured over the last forty years), have served as our guide towards providing insights on the mechanisms that govern schema evolution. Much like software systems, we found that schemata expand over time, under a stabilization mechanism that constrains uncontrolled expansion with perfective maintenance. At the same time, unlike typical software systems, the growth is typically low, with long periods of calmness interrupted by bursts of maintenance and a surprising lack of complexity increase.
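The kind of measurement behind such a study—tracking schema size across committed versions—can be sketched with a simplified DDL parser. The schema snippets below are invented, and the regex deliberately ignores complications like parenthesized types; the study itself mines real repository histories.

```python
# Sketch: measuring schema growth across versions by counting tables and
# attributes in a simplified CREATE TABLE dump (no nested parentheses,
# one column per comma). The DDL snippets are invented.
import re

def schema_size(ddl):
    """Return (number of tables, total number of attributes)."""
    bodies = re.findall(r"CREATE TABLE \w+\s*\(([^)]*)\)", ddl, re.IGNORECASE)
    attrs = sum(len([c for c in body.split(",") if c.strip()]) for body in bodies)
    return len(bodies), attrs

v1 = "CREATE TABLE users (id INT, name TEXT);"
v2 = v1 + " CREATE TABLE posts (id INT, author INT, body TEXT);"
print(schema_size(v1))  # (1, 2)
print(schema_size(v2))  # (2, 5)
```

Plotting such (tables, attributes) pairs over commit history is what reveals the calm periods and maintenance bursts the abstract describes.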

12.
Context: Software Product Line Engineering implies the upfront design of a Product-Line Architecture (PLA) from which individual product applications can be engineered. The big upfront design associated with PLAs is in conflict with the current need of “being open to change”. To make the development of product lines more flexible and adaptable to changes, several companies are adopting Agile Product Line Engineering. However, to put Agile Product Line Engineering into practice it is still necessary to make mechanisms available to assist and guide the agile construction and evolution of PLAs. Objective: This paper presents the validation of a process for “the agile construction and evolution of product-line architectures”, called Agile Product-Line Architecting (APLA). The contribution of the APLA process is the integration of a set of models for describing, documenting, and tracing PLAs, as well as an algorithm for guiding the change decision-making process of PLAs. The APLA process is assessed to show that it assists Agile Product Line Engineering practitioners in the construction and evolution of PLAs. Method: Validation is performed through a case study using both quantitative and qualitative analysis. Quantitative analysis was performed using statistics, whereas qualitative analysis was performed through interviews using constant comparison, triangulation, and supporting tools. This case study was conducted according to the guidelines of Runeson and Höst in a software factory where three projects in the domain of Smart Grids were involved. Results: APLA is deployed through the Flexible-PLA modeling framework. This framework supported the successful development and evolution of the PLA of a family of power metering management applications for Smart Grids. Conclusions: APLA is a well-supported solution for the agile construction and evolution of PLAs. This case study illustrates that the proposed solution for the agile construction of PLAs is viable in an industry project on Smart Grids.

13.
Water resources web applications or “web apps” are growing in popularity as a means to overcome many of the challenges associated with hydrologic simulations in decision-making. Water resources web apps fall outside of the capabilities of standard web development software, because of their spatial data components. These spatial data needs can be addressed using a combination of existing free and open source software (FOSS) for geographic information systems (FOSS4G) and FOSS for web development. However, the abundance of FOSS projects that are available can be overwhelming to new developers. In an effort to understand the web of FOSS features and capabilities, we reviewed many of the state-of-the-art FOSS software projects in the context of those that have been used to develop water resources web apps published in the peer-reviewed literature in the last decade (2004–2014).

14.
The best solution for designing dynamic websites: Apache+PHP+MySQL   (Total citations: 6; self-citations: 0; citations by others: 6)
Apache is currently the most widely deployed web server; PHP is a server-side scripting language similar to ASP; and MySQL is a compact database management system. They are particularly well suited to website construction. Apache, PHP, and MySQL are not only open source projects that can be obtained free of charge, but they also run on Linux, UNIX, OS/2, and Windows, giving the stack good portability. This combination is therefore an excellent solution for building dynamic websites. This paper first introduces the installation and configuration of Apache, PHP, and MySQL, and then presents the development principles, functional design, and implementation of an online store system built with the Apache+PHP+MySQL combination on the Windows platform.
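The pattern this paper builds on—a server-side script querying a database and rendering HTML per request—is stack-independent. PHP and MySQL are the paper's stack; for a self-contained illustration the same request→query→render cycle is sketched here with Python's standard library and SQLite, with invented product data.

```python
# The dynamic-page cycle (script queries database, renders HTML),
# sketched with Python + SQLite purely as a stand-in for PHP + MySQL.
# Product data is invented for illustration.
import sqlite3
from html import escape

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (name TEXT, price REAL)")
db.executemany("INSERT INTO products VALUES (?, ?)",
               [("keyboard", 19.9), ("mouse <deluxe>", 9.9)])

def render_catalog():
    """Render the product table as an HTML list, escaping user data."""
    rows = db.execute("SELECT name, price FROM products ORDER BY name").fetchall()
    items = "".join(f"<li>{escape(n)} - {p:.2f}</li>" for n, p in rows)
    return f"<ul>{items}</ul>"

print(render_catalog())
```

Escaping the stored strings on output mirrors what a careful PHP page does with `htmlspecialchars` before echoing database content.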

15.
Web application development frameworks, like the Java Server Pages framework (JSP), provide web applications with essential functions such as maintaining state information across the application and access control. In the fast-paced world of web applications, new frameworks are introduced and old ones are updated frequently. A framework is chosen during the initial phases of a project; hence, changing it to match new requirements and demands is a cumbersome task. We propose an approach (based on Water Transformations) to migrate web applications between various web development frameworks. This migration process preserves the structure of the code and the location of comments to facilitate future manual maintenance of the migrated code. Consequently, developers can move their applications to the framework that meets their current needs instead of being locked into their initial development framework. We give an example of using our approach to migrate a web application written using the Active Server Pages (ASP) framework to the Netscape Server Pages (NSP) framework.
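The property the abstract emphasizes—migrating framework syntax while keeping code layout and comments in place—can be shown with a toy delimiter-rewriting pass. The source and target delimiters below are hypothetical stand-ins, not the paper's actual ASP-to-NSP transformation rules.

```python
# Toy structure-preserving migration step: rewrite template delimiters to
# a hypothetical target framework's syntax while copying everything
# between them (including comments) through verbatim, so later manual
# maintenance still finds the comments where they were.
def migrate(page):
    """Rewrite '<%= expr %>' and '<% code %>' to invented target tags."""
    return (page.replace("<%=", "<server> write ")  # expression blocks first
                .replace("<%", "<server>")          # then plain code blocks
                .replace("%>", "</server>"))

src = "<html><% ' keep this comment\n x = 1 %><%= x %></html>"
print(migrate(src))
```

A real migration works on a parse tree rather than string replacement, but the invariant is the same: comments and structure survive byte-for-byte.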

16.
Context: Most companies, independently of their size and activity type, are facing the problem of managing, maintaining, and/or replacing (part of) their existing software systems. These legacy systems are often large applications playing a critical role in the company’s information system and with a non-negligible impact on its daily operations. Improving their comprehension (e.g., architecture, features, enforced rules, handled data) is a key point when dealing with their evolution/modernization. Objective: The process of obtaining useful higher-level representations of (legacy) systems is called reverse engineering (RE), and remains a complex goal to achieve. So-called Model Driven Reverse Engineering (MDRE) has been proposed to enhance more traditional RE processes. However, generic and extensible MDRE solutions potentially addressing several kinds of scenarios relying on different legacy technologies are still missing or incomplete. This paper proposes to make a step in this direction. Method: MDRE is the application of Model Driven Engineering (MDE) principles and techniques to RE in order to generate relevant model-based views on legacy systems, thus facilitating their understanding and manipulation. In this context, MDRE is practically used in order to (1) discover initial models from the legacy artifacts composing a given system and (2) understand (process) these models to generate relevant views (i.e., derived models) on this system. Results: Capitalizing on the different MDRE practices and our previous experience (e.g., in real modernization projects), this paper introduces and details the MoDisco open source MDRE framework. It also presents the underlying MDRE global methodology and architecture accompanying this proposed tooling. Conclusion: MoDisco is intended to ease the design and building of model-based solutions dedicated to legacy-system RE. As empirical evidence of its relevance and usability, we report on its successful application in real industrial projects and on the concrete experience we gained from that.

17.
Context: Software networks are directed graphs of static dependencies between source code entities (functions, classes, modules, etc.). These structures can be used to investigate the complexity and evolution of large-scale software systems and to compute metrics associated with software design. The extraction of software networks is also the first step in reverse engineering activities. Objective: The aim of this paper is to present SNEIPL, a novel approach to the extraction of software networks that is based on a language-independent, enriched concrete syntax tree representation of the source code. Method: The applicability of the approach is demonstrated by the extraction of software networks representing real-world, medium to large software systems written in different languages belonging to different programming paradigms. To investigate the completeness and correctness of the approach, class collaboration networks (CCNs) extracted from real-world Java software systems are compared to CCNs obtained by other tools. Namely, we used Dependency Finder, which extracts entity-level dependencies from Java bytecode, and Doxygen, which implements a language-independent fuzzy-parsing approach to dependency extraction. We also compared SNEIPL to fact extractors present in language-independent reverse engineering tools. Results: Our approach to dependency extraction is validated on six real-world, medium to large-scale software systems written in Java, Modula-2, and Delphi. The results of the comparative analysis involving ten Java software systems show that the networks formed by SNEIPL are highly similar to those formed by Dependency Finder and more precise than the comparable networks formed with the help of Doxygen. Regarding the comparison with language-independent reverse engineering tools, SNEIPL provides both language-independent extraction and representation of fact bases. Conclusion: SNEIPL is a language-independent extractor of software networks and consequently enables language-independent network-based analysis of software systems, computation of software design metrics, and extraction of fact bases for reverse engineering activities.
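The basic shape of software-network extraction—parse source, emit directed dependency edges—can be sketched at module level for a single language. Real extractors like SNEIPL work from enriched concrete syntax trees across several languages; this stdlib-only sketch handles just Python import dependencies.

```python
# Sketch of software-network extraction at module level: parse a source
# file, record one directed edge per imported module. Only Python
# imports are handled; this is a toy stand-in for a real extractor.
import ast

def module_edges(name, source):
    """Return the set of (name -> imported module) edges in `source`."""
    edges = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Import):
            edges |= {(name, alias.name) for alias in node.names}
        elif isinstance(node, ast.ImportFrom) and node.module:
            edges.add((name, node.module))
    return edges

src = "import os\nfrom json import dumps\n\ndef f():\n    import re\n"
print(sorted(module_edges("app", src)))
```

Running this over every file of a system and taking the union of the edge sets yields the directed graph on which design metrics are then computed.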

18.
Cell image analysis in microscopy is the core activity of cytology and cytopathology for assessing cell physiological (cellular structure and function) and pathological properties. Biologists usually make evaluations by visually and qualitatively inspecting microscopic images: this way, they are particularly able to recognize deviations from normality. Nevertheless, automated analysis is strongly preferable for obtaining objective, quantitative, detailed, and reproducible measurements, i.e., features, of cells. Yet, the organization and standardization of the wide domain of features used in cytometry is still a matter of challenging research. In this paper, we present the Cell Image Analysis Ontology (CIAO), which we are developing for structuring the cell image features domain. CIAO is a structured ontology that relates different cell parts or whole cells, microscopic images, and cytometric features. Such an ontology has incalculable value since it could be used for standardizing cell image analysis terminology and features definition. It could also be suitably integrated into the development of tools for supporting biologists and clinicians in their analysis processes and for implementing automated diagnostic systems. Thus, we also present a tool developed for using CIAO in the diagnosis of hematopoietic diseases.

The text was submitted by the authors in English.

Sara Colantonio. MSc degree with honors in computer science, University of Pisa, 2004; PhD student in information engineering at the Department of Information Engineering, Pisa University; research fellow at the Institute of Information Science and Technologies, National Research Council, Pisa. Received a grant from Finmeccanica for studies in the field of image categorization with applications in medicine and quality control. Her main interests include neural networks, machine learning, industrial diagnostics, and medical imaging. Coauthor of more than 30 scientific papers. Currently involved in a number of European research projects regarding image mining, information technology, and medical decision support systems.

Igor B. Gurevich. Born 1938. Dr. Eng. (Diploma Engineer (Automatic Control and Electrical Engineering), 1961, Moscow Power Engineering Institute, Moscow, USSR); Dr. (Theoretical Computer Science/Mathematical Cybernetics), 1975, Moscow Institute of Physics and Technology, Moscow, USSR. Head of department at the Dorodnicyn Computing Center of the Russian Academy of Sciences, Moscow; assistant professor at the Faculty of Computer Science, Moscow State University. Since 1960, has worked as an engineer and researcher in industry, medicine, and universities and in the Russian Academy of Sciences. Area of expertise: image analysis; image understanding; mathematical theory of pattern recognition; theoretical computer science; pattern recognition and image analysis techniques for applications in medicine, nondestructive testing, and process control; knowledge bases; knowledge-based systems. Two monographs (in coauthorship); 135 papers on pattern recognition, image analysis, and theoretical computer science and applications in peer-reviewed international and Russian journals and conference and workshop proceedings; one patent of the USSR and four patents of the RF. Executive secretary of the Russian Association for Pattern Recognition and Image Analysis, member of the governing board of the International Association for Pattern Recognition (representative from the Russian Federation), IAPR fellow. Has served as PI of many research and development projects as part of national research (applied and basic) programs of the Russian Academy of Sciences, the Ministry of Education and Science of the Russian Federation, the Russian Foundation for Basic Research, the Soros Foundation, and INTAS. Deputy editor in chief of Pattern Recognition and Image Analysis.

Massimo Martinelli. Works at the Institute of Information Science and Technologies (ISTI), National Research Council (CNR), Pisa. Member of the W3C multimedia semantics incubator group; coordinator of the CNR-ISTI web systems group. His main interests include semantic web and web technologies. Coauthor of more than 50 scientific papers. Currently involved in a number of European research projects regarding semantic web, information technology, multimedia semantics, and medical decision support systems.

Ovidio Salvetti. Director of research at the Institute of Information Science and Technologies (ISTI), National Research Council (CNR), Pisa. Working in the field of theoretical and applied computer vision. His fields of research are image analysis and understanding, pictorial information systems, spatial modeling, and intelligent processes in computer vision. Coauthor of four books and monographs and more than 300 technical and scientific articles, with ten patents regarding systems and software tools for image processing. Has served as a scientific coordinator of several national and European research and industrial projects, in collaboration with Italian and foreign research groups, in the fields of computer vision and high-performance computing for diagnostic imaging. Member of the editorial boards of the international journals Pattern Recognition and Image Analysis and G. Ronchi Foundation Acts. Currently the CNR contact person in ERCIM (the European Research Consortium for Informatics and Mathematics) for the Working Group on Vision and Image Understanding and a member of IEEE and of the steering committee of a number of EU projects. Head of the ISTI Signals and Images Laboratory.

Yulia O. Trusova. Born 1980. Graduated from the Faculty of Computational Mathematics and Cybernetics of Lomonosov Moscow State University in 2002. Works at the Dorodnicyn Computing Center of the Russian Academy of Sciences. Scientific interests: mathematical theory of pattern recognition and image analysis, methods of discrete mathematics, databases and knowledge bases, and computational linguistics. Coauthor of more than 25 papers. Laureate of the Aspirant Award, 2003–2005. Member of the Russian Association for Pattern Recognition and Image Analysis.

19.
Context: The Web has had a significant impact on all aspects of our society. As our society relies more and more on the Web, the dependability of web applications has become increasingly important. To make these applications more dependable, for the past decade researchers have proposed various techniques for testing web-based software applications. Our literature search for related studies retrieved 147 papers in the area of web application testing, which appeared between 2000 and 2011. Objective: As this research area matures and the number of related papers increases, it is important to systematically identify, analyze, and classify the publications and provide an overview of the trends in this specialized field. Method: We review and structure the body of knowledge related to web application testing through a systematic mapping (SM) study. As part of this study, we pose two sets of research questions, define selection and exclusion criteria, and systematically develop and refine a classification schema. In addition, we conduct a bibliometrics analysis of the papers included in our study. Results: Our study includes a set of 79 papers (from the 147 retrieved papers) published in the area of web application testing between 2000 and 2011. We present the results of our systematic mapping study. Our mapping data is available through a publicly accessible repository. We derive the observed trends, for instance, in terms of types of papers, sources of information used to derive test cases, and types of evaluations used in papers. We also report the demographic and bibliometric trends in this domain, including top-cited papers, active countries and researchers, and top venues in this research area. Conclusion: We discuss the emerging trends in web application testing and their implications for researchers and practitioners in this area. The results of our systematic mapping can help researchers obtain an overview of existing web application testing approaches and identify areas in the field that require more attention from the research community.

20.
With the rapid development of quantum computers capable of realizing Shor’s algorithm, existing public key-based algorithms face a significant security risk. Crystals-Kyber has been selected as the only key encapsulation mechanism (KEM) algorithm in the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography (PQC) competition. In this study, we present a portable and efficient implementation of a Crystals-Kyber post-quantum KEM based on WebAssembly (Wasm), a recently released portable execution framework for high-performance web applications. Until now, most Kyber implementations have been developed with native programming languages such as C and Assembly. Although there are a few previous Kyber implementations based on JavaScript for portability, their performance is significantly lower than that of implementations based on native programming languages. Therefore, it is necessary to develop a portable and efficient Kyber implementation to secure web applications in the quantum computing era. Our Kyber software is based on JavaScript and Wasm to provide portability and efficiency while ensuring quantum security. Namely, the overall software is written in JavaScript, and the performance core parts (secure hash algorithm-3-based operations and polynomial multiplication) are written in Wasm. Furthermore, we parallelize the number theoretic transform (NTT)-based polynomial multiplication using single instruction multiple data (SIMD) functionality, which is available in Wasm. The three steps in the NTT-based polynomial multiplication have been parallelized with Wasm SIMD intrinsic functions. Our software outperforms the latest reference implementation of Kyber developed in JavaScript by ×4.02 (resp. ×4.32 and ×4.1), ×3.42 (resp. ×3.52 and ×3.44), and ×3.41 (resp. ×3.44 and ×3.38) in terms of key generation, encapsulation, and decapsulation on Google Chrome (resp. Firefox, and Microsoft Edge). 
As far as we know, this is the first software implementation of Kyber with Wasm technology in the web environment.
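The performance core the paper parallelizes is NTT-based polynomial multiplication. Kyber's actual transform is an incomplete NTT over q = 3329; as a self-contained illustration of the underlying technique, the sketch below multiplies polynomials with a textbook iterative NTT over the common NTT-friendly prime 998244353 in plain Python, with no SIMD.

```python
# Textbook NTT-based polynomial multiplication, as an illustration of the
# technique Kyber accelerates. Not Kyber's parameters: Kyber uses q = 3329
# and an incomplete transform; here MOD = 119 * 2**23 + 1, primitive root 3.
MOD = 998244353

def ntt(a, invert=False):
    """In-place iterative number theoretic transform modulo MOD."""
    n = len(a)
    j = 0
    for i in range(1, n):  # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:  # butterfly passes
        w = pow(3, (MOD - 1) // length, MOD)
        if invert:
            w = pow(w, MOD - 2, MOD)
        for i in range(0, n, length):
            wn = 1
            for k in range(i, i + length // 2):
                u = a[k]
                v = a[k + length // 2] * wn % MOD
                a[k] = (u + v) % MOD
                a[k + length // 2] = (u - v) % MOD
                wn = wn * w % MOD
        length <<= 1
    if invert:
        inv_n = pow(n, MOD - 2, MOD)
        for i in range(n):
            a[i] = a[i] * inv_n % MOD

def poly_mul(f, g):
    """Multiply two polynomials (coefficient lists) via forward NTT,
    pointwise products, and inverse NTT."""
    n = 1
    while n < len(f) + len(g) - 1:
        n <<= 1
    fa = f + [0] * (n - len(f))
    ga = g + [0] * (n - len(g))
    ntt(fa)
    ntt(ga)
    prod = [x * y % MOD for x, y in zip(fa, ga)]
    ntt(prod, invert=True)
    return prod[:len(f) + len(g) - 1]

print(poly_mul([1, 2], [3, 4]))  # (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2
```

The three steps the paper parallelizes with Wasm SIMD correspond to the forward transforms, the pointwise multiplication, and the inverse transform above.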


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号