Similar Documents
20 similar documents found (search time: 0 ms)
1.
Application of information technology (IT) has had a significant impact on all aspects of business. Technology has also made software ever easier to pirate, heightening concern for copyright protection. This paper reviews and discusses software piracy issues from a global perspective and reports the findings of a survey on the impact of sector (government, private, and academic) on piracy in Turkey. Although software piracy has long attracted academic interest, no quantitative research had previously been conducted in this field in the country. Elsewhere, too, most piracy-related studies take individuals' perspectives and are limited to students, academics, cost, and attitudes; very few report findings on IT professionals and organizations. The survey was conducted among IT managers of large-scale organizations in the government, private, and academic sectors. Based on responses from 162 IT managers, the results indicate that sector has a significant, though limited, impact on software piracy.

2.

3.
Context: Developers often need to find answers to questions regarding the evolution of a system when working on its code base. While their information needs require analyzing data from different repository types, the source code repository has a pivotal role for program comprehension tasks. However, the coarse-grained nature of the data stored by commit-based software configuration management systems often makes it challenging for a developer to search for an answer. Objective: We present Replay, an Eclipse plug-in that allows developers to explore the change history of a system by capturing changes at a finer granularity than commits, and by replaying past changes chronologically inside the integrated development environment, with the source code at hand. Method: We conducted a controlled experiment to empirically assess whether Replay outperforms a baseline (an SVN client in Eclipse) in helping developers answer common questions related to software evolution. Results: The experiment shows that Replay leads to a decrease in completion time on a set of software evolution comprehension tasks. Conclusion: We conclude that Replay offers benefits over state-of-the-practice tools for answering questions that require fine-grained change information and those related to recent changes.

4.
The Extended Global Cardinality Constraint (EGCC) is a vital component of constraint solving systems, since it is very widely used to model diverse problems. The literature contains many different versions of this constraint, which trade strength of inference against computational cost. In this paper, I focus on the highest strength of inference usually considered: enforcing generalized arc consistency (GAC) on the target variables. This work is an extensive empirical survey of algorithms and optimizations, considering both GAC on the target variables and tightening the bounds of the cardinality variables. I evaluate a number of key techniques from the literature and report important implementation details of those techniques, which have often not been described in published papers. Two new optimizations are proposed for EGCC. One of them (dynamic partitioning, generalized from AllDifferent) was found to speed up search by a factor of 5.6 in the best case and 1.56 on average, while exploring the same search tree. The empirical work represents by far the most extensive set of experiments on variants of EGCC algorithms. Overall, the best combination of optimizations gives a mean speedup of 4.11 times compared to the same implementation without them.

5.
6.
Software systems must continually evolve to adapt to new functional or quality requirements in order to remain competitive in the marketplace. However, different software systems follow different evolution strategies, affecting both the release plan and the quality of these systems. In this paper, software evolution is considered as a self-organization process, and the difference between closed-source and open-source software is discussed in terms of self-organization. In particular, an empirical study of the evolution of Linux from version 2.4.0 to version 2.6.13 is reported. The study shows how open-source software systems self-organize to adapt to functional and quality requirements.

7.
A definition of software reliability is proposed in which reliability is treated as a generalization of the probability of correctness of the software in question. A tolerance function is introduced as a way of characterizing an acceptable level of correctness; together with the probability function defining the operational input distribution, it serves as a parameter of the definition of reliability. It is shown that the definition yields many natural models of reliability by varying the tolerance function, and that it may be reasonably approximated using well-chosen test sets. It is also shown that there is an inherent limitation to the measurement of reliability using finite test sets.
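One way to make this definition concrete (the notation here is ours, not necessarily the paper's): given an operational input distribution P over the input domain D, and a tolerance function τ that scores in [0, 1] how acceptable the program's behaviour on input x is, reliability can be read as the expected tolerance:

```latex
R \;=\; \sum_{x \in D} P(x)\,\tau(x)
```

With the strict tolerance τ(x) = 1 if the output on x is correct and 0 otherwise, R reduces to the probability of correctness; approximating R with a finite test set sampled from P replaces the sum by an empirical mean over the test set, which is where the inherent limitation mentioned above arises.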

8.
This paper investigates trust in software outsourcing relationships. The study is based on an empirical investigation of eighteen high-maturity software vendor companies based in India. Our analysis of the literature suggests that trust has received a great deal of attention in all kinds of business relationships, including inter-company relationships (whether cooperative ventures or subcontracting) and relationships among different parts of a single company. However, trust has been relatively under-explored in software outsourcing relationships. In this paper, we present a detailed empirical investigation of trust in commercial software outsourcing relationships, focusing on what vendor companies perceive about gaining the trust of client companies. We present the results in two parts: (1) achieving trust initially in outsourcing relationships, and (2) maintaining trust in ongoing relationships. Our findings confirm that the critical factors for initially achieving trust include references from previous clients and the vendor's experience in outsourcing engagements. Critical factors for maintaining trust in an established relationship include transparency, demonstrability, honesty, the process followed, and commitment. Our findings also suggest that trust is considered very fragile in outsourcing relationships.

9.
Simple thinking (or simplicity) is a way of coping with complexity. It is especially important in the software development process (SDP), which is an error-prone, time-consuming, and complex activity. This article investigates the role of this thinking style, namely simple thinking, which has been found effective in solving complicated problems during software development. For this purpose, it reviews and discusses simplicity issues from a general perspective and then reports the findings of a survey assessing simplicity in the SDP. The survey was conducted among senior information and communication technology professionals and managers from government and private-sector organizations. Relevant hypotheses were developed under different empirical categories, and statistical analysis techniques were then used to draw inferences from them. The results indicate that simplicity plays a significant role in the SDP, to a certain extent. © 2011 Wiley Periodicals, Inc.

10.
11.
Software is typically improved and modified in small increments; we refer to each of these increments as a modification record (MR). MRs are usually stored in a configuration management or version control system and can be retrieved for analysis. In this study we retrieved the MRs from several mature open source software projects. We then concentrated our analysis on those MRs that fix defects and provided heuristics to classify them automatically. We used the information in the MRs to visualize which files are changed at the same time, and who the people are who tend to modify certain files. We argue that these visualizations can be used to understand the development stage a project is in at a given time (whether new features are being added or defects fixed), the level of modularization of a project, and how developers might interact with each other and with the source code of a system.

12.
Release notes are an important source of information about a new software release. Such notes describe what is new, changed, or fixed in the release. Despite their importance, release notes are rarely explored in the research literature, and little is known about the information they contain, e.g., its content and structure. To better understand the types of information contained in release notes, we manually analyzed 85 release notes across 15 different software systems. In our manual analysis, we identified six different types of information (e.g., caveats and addressed issues). Addressed issues refer to new features, bugs, and improvements that were integrated in that particular release. We observe that most release notes list only a selection of the addressed issues (i.e., 6-26% of all addressed issues in a release). We investigated nine different factors (e.g., issue priority and type) to better understand the likelihood of an issue being listed in release notes. The investigation was conducted on eight release notes of three software systems using four machine learning techniques. Results show that certain factors, e.g., issue type, have a higher influence on the likelihood of an issue being listed in release notes. We use machine learning techniques to automatically suggest the issues to be listed in release notes. Our results show that issues listed in all release notes can be automatically determined with an average precision of 84% and an average recall of 90%. To train and build the classification models, we also explored three scenarios: (a) having the user label some issues for a release and automatically suggesting the remaining issues for that particular release, (b) using the previous release notes of the same software system, and (c) using prior releases of the current software system together with the rest of the studied software systems.
Our results show that the content of release notes varies between software systems and across the versions of the same software system. Nevertheless, automated techniques can provide reasonable support to the writers of such notes with little training data. Our study provides developers with empirically supported advice about release notes instead of simply relying on ad hoc advice from online inquiries.
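The precision and recall figures reported above can be computed as in this minimal sketch; the issue identifiers and the sets of suggested/listed issues below are invented for illustration and are not the study's data or classifiers:

```python
# Precision: fraction of suggested issues that actually appear in the notes.
# Recall: fraction of listed issues that the suggester found.
def precision_recall(suggested, listed):
    suggested, listed = set(suggested), set(listed)
    true_positives = suggested & listed
    precision = len(true_positives) / len(suggested) if suggested else 0.0
    recall = len(true_positives) / len(listed) if listed else 0.0
    return precision, recall

# Hypothetical example: issues a classifier suggested for the release notes
# versus those actually listed by the developers.
suggested = ["ISSUE-1", "ISSUE-2", "ISSUE-3", "ISSUE-5"]
listed = ["ISSUE-1", "ISSUE-2", "ISSUE-4"]
p, r = precision_recall(suggested, listed)
print(p, r)  # 2 of 4 suggestions are correct; 2 of 3 listed issues were found
```

An averaged version of these two numbers over each studied release is what the 84% precision and 90% recall figures summarize.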

13.
With the approach of the new millennium, a primary focus in software engineering involves issues relating to upgrading, migrating, and evolving existing software systems. In this environment, the role of careful empirical studies as the basis for improving software maintenance processes, methods, and tools is highlighted. One of the most important processes that merits empirical evaluation is software evolution, i.e., the dynamic behaviour of software systems as they are maintained and enhanced over their lifetimes. Software evolution is particularly important as systems in organizations become longer-lived. However, evolution is challenging to study due to the longitudinal nature of the phenomenon, in addition to the usual difficulties of collecting empirical data. We describe a set of methods and techniques that we have developed and adapted to empirically study software evolution. Our longitudinal empirical study involves collecting, coding, and analyzing more than 25,000 change events to 23 commercial software systems over a 20-year period. Using data from two of the systems, we illustrate the efficacy of flexible phase mapping and gamma sequence analytic methods, originally developed in social psychology to examine group problem-solving processes. We have adapted these techniques to identify and understand the phases through which a software system travels as it evolves over time, and we contrast this approach with time series analysis. Our work demonstrates the advantages of applying methods and techniques from other domains to software engineering and illustrates how, despite the difficulties, software evolution can be empirically studied.

14.
Android has a layered architecture that allows applications to leverage services provided by the underlying Linux kernel. However, Android does not prevent applications from directly triggering kernel functionality through system call invocations. As recently shown in the literature, this feature can be abused by malicious applications, leading to undesirable effects. The adoption of SEAndroid in the latest Android distributions may mitigate the problem, yet the effectiveness of SEAndroid in countering these threats is still to be ascertained. In this paper we present an empirical evaluation of the effectiveness of SEAndroid in detecting malicious interplay targeted at the underlying Linux kernel, obtained by extensively profiling the behavior of honest and malicious applications in both standard Android and SEAndroid-enabled distributions. Our analysis indicates that SEAndroid does not prevent direct, possibly malicious, interactions between applications and the Linux kernel, showing how it can be circumvented by suitably crafted system calls. We therefore propose a runtime monitoring enforcement module, called Kernel Call Controller (KCC), which is compatible with both Android and SEAndroid and is able to enforce security policies on kernel call invocations. We experimentally assess both the efficacy and the performance of KCC on actual devices.

15.
IS project team performance is a topic of increasing importance to practicing managers as well as researchers. This paper discusses the development and testing of a theoretical model of IS project team performance. Empirical analysis suggests that team members' perception of their ability to represent users' views during a project is a significant predictor of the team's perception of its overall performance, as is team members' belief in their personal involvement in the development process. However, the findings on the influence of cohesion on performance differ from those of other studies: cohesion was apparently not a significant factor in this study.

16.
Optical networks are composed of multiple devices from multiple vendors and normally have a very large transmission capacity. Slicing optical networks is not a new concept, but it remains important, since the capacity of optical networks keeps growing. Most of the time, slicing is configured manually by system operators. Besides being laborious and error-prone, such configuration limits the clients' ability to customize and configure the network according to their own needs. One way out of this problem is to separate the control plane from the data plane in these devices. Software-Defined Networking (SDN) proposes this separation of planes while also offering network operators the flexibility to create and manage applications, enabling them to reduce network costs by globally optimizing the network's resources, reducing the staff needed to configure it, and reducing violations of service level agreements (SLAs). SDN can also help operators maximize their profit by generating more revenue through mechanisms that increase availability and failure resiliency, maximize throughput, allow fast dynamic reprovisioning, and enable network virtualization. The goal of this paper is to propose an extension (eSONA) of the Software Defined Optical Networks Slicing Architecture (SONA) that defines components such as a topology manager, inventory manager, slice manager, and path provisioner, and thus enables optical network slicing. eSONA proved capable of managing different slices and of provisioning a path on a given slice over the same physical optical network, and it showed excellent performance, taking little time to provision paths even with a large number of nodes, which is crucial for optical environments.

17.
A study evaluating the new-paradigm-oriented software development environments developed in the five-year Formal Approach to Software Environment Technology (FASET) project is reviewed. For this study, a software environment evaluation technology based on the software quality evaluation process model defined in ISO/IEC 9126 was developed and applied to the R&D project at the middle and final phases of development. The evaluation results provide useful information for developing a widely acceptable evaluation technology and for improving the new-paradigm-oriented software development environments, which are based on various specification methods: algebraic, function-oriented, declarative, natural-language-oriented, diagrammatic, state-transition-oriented, and model-based.

18.
Research into software design models in general, and into the UML in particular, focuses on answering the question of how design models are used, largely ignoring the question of whether they are used at all. There is an assumption in the literature that the UML is the de facto standard, and that the use of design models has had a profound and substantial effect on how software is designed, by virtue of models enabling model checking, code generation, or automated test generation. For this assumption to be true, however, design models would have to see significant use in practice by developers. This paper presents the results of a survey summarizing the answers of 3785 developers to the simple question of the extent to which design models are used before coding. We relate their use of models to (i) total years of programming experience, (ii) open or closed development, (iii) educational level, (iv) programming language used, and (v) development type. The answer to our question was that design models are not used very extensively in industry; where they are used, the use is informal and without tool support, and the notation is often not UML. The use of models decreased with increasing experience and increased with higher levels of qualification. Overall, we found that models are used primarily as a communication and collaboration mechanism where there is a need to solve problems and/or reach a joint understanding of the overall design in a group. We also conclude that models are seldom updated after being initially created and are usually drawn on a whiteboard or on paper.

19.
An empirical investigation of an object-oriented software system
The paper describes an empirical investigation of an industrial object-oriented (OO) system comprising 133,000 lines of C++. The system was a subsystem of a telecommunications product and was developed using the Shlaer-Mellor method (S. Shlaer and S.J. Mellor, 1988; 1992). From this study, we found that there was little use of OO constructs such as inheritance, and therefore polymorphism. We also found a significant difference in defect density between classes that participated in inheritance structures and those that did not, with the former being approximately three times more defect-prone. We were able to construct useful prediction systems for size and number of defects based upon simple counts such as the number of states and events per class. Although these prediction systems are only likely to have local significance, the more general principle is that software developers can consider building their own local prediction systems. Moreover, we believe this is possible even in the absence of the suites of metrics that have been advocated by researchers in OO technology. As a consequence, measurement technology may be accessible to a wider group of potential users.

20.
We performed an empirical investigation of the factors affecting an individual's decision to adopt anti-spyware software. Our results suggested that an individual's attitude, subjective norm, perceived behavioral control, and denial of responsibility significantly affected anti-spyware adoption intention. Also, relative advantage and compatibility showed a significant effect on attitude; visibility and image on subjective norm; and trialability, self-efficacy, and computing capacity on perceived behavioral control. Interestingly, moral obligation, ease of use, and perceived cost were not as significant as originally expected.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号