Similar Documents
 20 similar documents found
1.
The architecture of a large software system is widely considered important for reasons such as providing a common goal to the stakeholders in realising the envisaged system, helping to organise the various development teams, and capturing foundational design decisions early in the development. Studies have shown that defects originating in system architectures can consume twice as much correction effort as other defects. Clearly, then, scientific studies on architectural defects are important for their improved treatment and prevention. Previous research has focused on the extent of architectural defects in software systems. For this paper, we were motivated to ask the following two complementary questions in a case study: (i) How do multiple-component defects (MCDs), which are of architectural importance, differ from other types of defects in terms of (a) complexity and (b) persistence across development phases and releases? and (ii) How do highly MCD-concentrated components (so-called architectural hotspots) differ from other types of components in terms of their (a) interrelationships and (b) persistence across development phases and releases? Results indicate that MCDs are complex to fix and are persistent across phases and releases. In comparison to a non-MCD, an MCD requires over 20 times more changes to fix and is 6 to 8 times more likely to cross a phase or a release. These findings have implications for defect detection and correction. Results also show that 20% of the subject system's components contain over 80% of the MCDs and that these components are 2-3 times more likely to persist across multiple system releases than other components in the system. Such MCD-concentrated components constitute architectural "hotspots" on which management can focus for preventive maintenance and architectural quality improvement. The findings described are from an empirical study of a large legacy software system of over 20 million lines of code and over 17 years of age.

2.
Software crashes are severe manifestations of software bugs. Debugging crashing bugs is tedious and time-consuming. Understanding the software changes that induce a crashing bug can provide useful contextual information for bug fixing and is in high demand among developers. Locating bug-inducing changes is also useful for automatic program repair, since it narrows down the root causes and reduces the search space of bug-fix locations. However, there are currently no systematic studies on locating the changes in a source code repository that induce a crashing bug reflected by a bucket of crash reports. To tackle this problem, we first conducted an empirical study characterizing the bug-inducing changes for crashing bugs (denoted as crash-inducing changes). We also propose ChangeLocator, a method to automatically locate crash-inducing changes for a given bucket of crash reports. We base our approach on a learning model that uses features derived from our empirical study, and we train the model using data from historical fixed crashes. We evaluated ChangeLocator on six release versions of the NetBeans project. The results show that it can locate the crash-inducing changes for 44.7%, 68.5%, and 74.5% of the bugs by examining only the top 1, 5, and 10 changes in the recommended list, respectively. It significantly outperforms the existing state-of-the-art approach.
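As a rough illustration of the ranking step described above (not the authors' implementation), the sketch below scores candidate changes with a trained classifier and returns the top of the recommendation list; the feature extraction and the choice of classifier are assumptions for illustration only.

```python
# Hypothetical sketch: rank candidate changes for a crash bucket by a learned score.
# The feature extraction and classifier choice are illustrative assumptions, not the
# actual ChangeLocator feature set or model.
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_ranker(X_train, y_train):
    """X_train: feature vectors of historical changes; y_train: 1 if crash-inducing."""
    model = LogisticRegression(max_iter=1000)
    model.fit(X_train, y_train)
    return model

def rank_changes(model, candidate_changes, extract_features, top_k=10):
    """Return the top_k candidate changes sorted by predicted probability of being crash-inducing."""
    X = np.array([extract_features(c) for c in candidate_changes])
    scores = model.predict_proba(X)[:, 1]      # P(crash-inducing) per candidate
    order = np.argsort(-scores)                # descending by score
    return [(candidate_changes[i], scores[i]) for i in order[:top_k]]
```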

3.
Service innovation is focused on customer value creation. At its core, customer-centric service innovation is technology-enabled, human-centered, and process-oriented. To profit from such innovation, firms need an integrated, cross-disciplinary, holistic method to design and commercialize service innovation. Drawing on diverse but interrelated strands of theory from the service science, strategic management, organization science, and information systems literatures, this article develops a new integrated design method, known as iSIM (integrated Service Innovation Method), for simultaneous service innovation and business model design for sustained customer value co-creation with the firm. Following the design science research method, the article theoretically defines and integrates iSIM's seven constitutive design process-elements (service strategy, customer type/value proposition, service concept, service system, customer experience, service architecture, and monetization) into a coherent, end-to-end aligned integrated design method. It explains how iSIM would be practiced holistically and iteratively by practitioners, and conceptually exemplifies its utility via telco and Amazon case studies using secondary data. Perspectives on iSIM from selected practitioners are discussed, confirming its potential utility for their businesses. Managerial implications of implementing iSIM and potential areas for further research are also discussed.

4.
Rapid advances in image acquisition and storage technology underline the need for real-time algorithms that are capable of solving large-scale image processing and computer-vision problems. The minimum s-t cut problem, a classical combinatorial optimization problem, is a prominent building block in many vision and imaging algorithms such as video segmentation, co-segmentation, stereo vision, multi-view reconstruction, and surface fitting, to name a few. Finding a real-time algorithm that optimally solves this problem is therefore of great importance. In this paper, we introduce Hochbaum's pseudoflow (HPF) algorithm, which optimally solves the minimum s-t cut problem, to computer vision. We compare the performance of HPF, in terms of execution time and memory utilization, with three leading published algorithms: (1) Goldberg and Tarjan's push-relabel (PRF); (2) Boykov and Kolmogorov's augmenting-paths algorithm (BK); and (3) Goldberg's partial augment-relabel. While the common practice in computer vision is to use either the BK or the PRF algorithm for solving the problem, our results demonstrate that, in general, the HPF algorithm is more efficient and uses less memory than these three algorithms. This strongly suggests that HPF is a strong option for many real-time computer-vision problems that require solving the minimum s-t cut problem.
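For orientation, the sketch below computes a minimum s-t cut on a tiny directed graph with a textbook Edmonds-Karp max-flow; it only illustrates the problem being solved and is not an implementation of HPF, BK, or PRF.

```python
# Minimal textbook example of the minimum s-t cut problem (Edmonds-Karp max-flow),
# shown only to illustrate the problem the compared algorithms solve; it is not an
# implementation of HPF, BK, or PRF.
from collections import deque, defaultdict

def min_st_cut(capacity, s, t):
    """capacity: dict {(u, v): c}. Returns (max_flow_value, nodes_on_source_side)."""
    residual = defaultdict(int)
    adj = defaultdict(set)
    for (u, v), c in capacity.items():
        residual[(u, v)] += c
        adj[u].add(v)
        adj[v].add(u)  # allow traversal of reverse residual edges

    def bfs_path():
        parent = {s: None}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent and residual[(u, v)] > 0:
                    parent[v] = u
                    if v == t:
                        return parent
                    q.append(v)
        return None

    flow = 0
    while (parent := bfs_path()) is not None:
        # find the bottleneck capacity along the augmenting path
        v, bottleneck = t, float("inf")
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, residual[(u, v)])
            v = u
        # push the bottleneck amount of flow along the path
        v = t
        while parent[v] is not None:
            u = parent[v]
            residual[(u, v)] -= bottleneck
            residual[(v, u)] += bottleneck
            v = u
        flow += bottleneck

    # nodes reachable from s in the final residual graph form the source side of the cut
    reachable, q = {s}, deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in reachable and residual[(u, v)] > 0:
                reachable.add(v)
                q.append(v)
    return flow, reachable

# Tiny example graph: by max-flow/min-cut duality the cut value equals the max flow (here 5).
caps = {("s", "a"): 3, ("s", "b"): 2, ("a", "t"): 2, ("b", "t"): 3, ("a", "b"): 1}
print(min_st_cut(caps, "s", "t"))
```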

5.
6.
Suppose we have a parallel or distributed system whose nodes have limited capacities, such as processing speed, bandwidth, memory, or disk space. How does the performance of the system depend on the amount of heterogeneity in its capacity distribution? We propose a general framework to quantify the worst-case effect of increasing heterogeneity in models of parallel systems. Given a cost function g(C,W) representing the system's performance as a function of its nodes' capacities C and workload W (such as the makespan of an optimum schedule of jobs W on machines C), we say that g has price of heterogeneity α when, for any workload, the cost cannot increase by more than a factor of α if node capacities become arbitrarily more heterogeneous. The price of heterogeneity also upper-bounds the "value of parallelism": the maximum benefit obtained by increasing parallelism at the expense of decreasing processor speed. We give constant or logarithmic bounds on the price of heterogeneity of several well-known job scheduling and graph degree/diameter problems, indicating that in many cases, increasing heterogeneity can never be much of a disadvantage.
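Read literally, the definition above is a worst-case cost ratio; the display below is our paraphrase of that statement, not the paper's formal definition (in particular, the "at least as heterogeneous as" relation is left abstract here).

```latex
% Our paraphrase of the definition quoted above (not the paper's formal statement);
% C' \succeq_{het} C is an abstract "at least as heterogeneous as" relation on
% capacity vectors, which the paper makes precise.
\[
  \mathrm{PoH}(g) \;=\; \sup_{W}\ \sup_{C,\; C' \,\succeq_{\mathrm{het}}\, C}
      \frac{g(C', W)}{g(C, W)} ,
\]
% so "price of heterogeneity \alpha" means PoH(g) <= \alpha: for no workload can the cost
% grow by more than a factor \alpha when capacities are made more heterogeneous.
```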

7.
Communication-centric systems are software systems built as assemblies of distributed artifacts that interact following predefined communication protocols. Session-based concurrency is a type-based approach to ensuring that communication-centric systems conform to such protocols. This paper presents a model of session-based concurrency with mechanisms for run-time adaptation. Our model allows us to specify communication-centric systems whose session behavior can be dynamically updated at run-time. We improve on previous work by proposing an event-based approach: adaptation requests, issued by the system itself or by its context, are treated as events which may trigger adaptation routines. These routines exploit type-directed checks to enable the reconfiguration of processes with active protocols. We equip our model with a type system that ensures communication safety and consistency properties: while safety guarantees the absence of run-time communication errors, consistency ensures that update actions do not disrupt already established session protocols. We provide soundness results for binary and multiparty protocols.

8.
9.
The gravitational instability of a homogeneous, isotropic, infinite gravitating gaseous medium is investigated in order to study the physical processes that take place during the formation of the solar planetary system. Analytical and numerical solutions of the equations of motion of such a medium are considered in two approximations: cold gas and gas at a finite temperature. Real solutions are obtained that describe the behavior of both wave-like density disturbances of a homogeneous medium and single disturbances. Waves of gravitational instability whose amplitude grows exponentially and whose maxima, minima, and nodal points retain their positions in space follow the basic laws of the Jeans model. The authors interpret this wave of instability as an analogue of the protoplanetary rings that can form in protoplanetary disks. According to the numerical results, the reaction of a homogeneous gravitating medium to a single initial perturbation of its density differs significantly from the laws of the Jeans model. The instability localized in single initial perturbations extends to the region λ < λ_J, although in this case the growth of the perturbation density is considerably smaller than for λ > λ_J. It is found that the gravitational instabilities in the region λ > λ_J suppress sound. It is shown that, without taking into account the rotation of the Sun's protoplanetary disk medium, its critical density in the event of a large-scale gravitational instability is about four orders of magnitude smaller than the critical density according to the theory of planet formation by the accumulation of solids and particles.
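Here λ_J denotes the Jeans length; for orientation, the standard textbook form of the Jeans criterion for a uniform medium is reproduced below (this is the classical expression, not a result of the paper, and the paper's own notation may differ).

```latex
% Standard Jeans criterion for a uniform self-gravitating gas (textbook form, quoted only
% for reference). Perturbations with wavelength lambda > lambda_J grow exponentially,
% while shorter wavelengths propagate as sound waves.
\[
  \lambda_J \;=\; c_s \sqrt{\frac{\pi}{G\,\rho_0}} ,
\]
% where $c_s$ is the sound speed, $G$ the gravitational constant, and $\rho_0$ the
% unperturbed density of the medium.
```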

10.
Agents that operate in adversarial environments such as games, where decision-making must take into account not only the environment but also the minimizing actions of an opponent, fundamentally need the ability to progressively trace the profile of their adversaries so that this profile aids the process of selecting appropriate actions. However, it would be unsuitable to build an agent whose decision-making system is based only on the elaboration of such a profile, as this would prevent the agent from having its "own identity" and would leave it at the mercy of its opponent. Following this direction, this study proposes an automatic Checkers player, called ACE-RL-Checkers, equipped with a dynamic decision-making module that adapts to the profile of the opponent over the course of the game. In this system, action selection is conducted through a composition of a multilayer perceptron neural network and a case library. The neural network represents the "identity" of the agent, i.e., an already trained, static decision-making module. The case library, on the other hand, represents the dynamic decision-making module of the agent and is generated by the Automatic Case Elicitation technique. This technique has a pseudo-random exploratory behavior, which allows the dynamic decision-making of the agent to be directed either by the opponent's game profile or randomly. In order to avoid a high occurrence of pseudo-random decision-making in the initial phases of the game, in which the agent has very little information about its opponent, this work proposes a new module based on sequential pattern mining for generating a base of experience rules extracted from human experts' game records. This module improves the agent's move selection in the initial phases of the game. Experiments carried out in tournaments involving ACE-RL-Checkers and other agents related to this work confirm the superiority of the proposed dynamic architecture.
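To make the hybrid static/dynamic selection concrete, the sketch below combines a case-library lookup with a fallback to a static evaluation function; all names, the matching API, and the exploration scheme are hypothetical illustrations in the spirit of the architecture described, not ACE-RL-Checkers code.

```python
# Hypothetical sketch of hybrid move selection combining a static evaluator (the trained
# network) with a dynamic case library. The lookup API, board interface, and exploration
# scheme are illustrative assumptions only.
import random

def select_move(board, legal_moves, case_library, evaluate, epsilon=0.1):
    """Prefer a move suggested by a matching stored case (opponent-profile-driven);
    otherwise fall back to the static evaluation, with a small exploratory component."""
    case = case_library.most_similar(board)            # hypothetical lookup API
    if case is not None and case.suggested_move in legal_moves:
        return case.suggested_move                     # dynamic, profile-driven choice
    if random.random() < epsilon:
        return random.choice(legal_moves)              # pseudo-random exploratory behavior
    # static "identity" of the agent: pick the move whose successor evaluates best
    return max(legal_moves, key=lambda m: evaluate(board.apply(m)))
```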

11.
Incident reporting systems enable end-users to report problems that they have experienced in their working activities to the authorities. Such applications are intended to sense the quality of the environment, enabling authorities to promote safety and well-being among citizens. Many governments now promote mobile applications that allow citizens to report incidents in their neighbourhood to the administration. Nonetheless, it is not clear which user experience dimensions affect the adoption of incident reporting systems, and to what extent anticipated use of the system (anticipated UX) is a determinant for predicting the user experience with the final application. In order to understand how citizens perceive incident reporting systems and which factors affect the user experience (UX), we have performed empirical studies including interviews in early phases of the development process and empirical user testing of advanced prototypes. In this paper, we present the results of a longitudinal study on the evolution of the perception of UX dimensions along the development process, from interviews to running prototypes. We describe the method used for coding the findings of these empirical studies according to six UX dimensions (visual and aesthetic experience, emotions, stimulation, identification, meaning & value, and social relatedness/co-experience). Moreover, we describe how the findings have been associated with users' tasks. The findings from interviews and user testing indicate that whilst the perceived importance of some UX dimensions (such as identification and meaning & value) remains similar over time, other dimensions such as stimulation and emotions do evolve. Beyond the practical implications of this study for the design of incident reporting systems, this work presents an approach that allows comparing the results of UX assessments in different phases of the process.

12.
We address the problem of how a set of agents can decide to share a resource, represented as a unit-sized pie. The pie can be generated by the entire set but also by some of its subsets. We investigate a finite-horizon non-cooperative bargaining game, in which the players take it in turns to make proposals on how the resource should be allocated, and the other players vote on whether or not to accept the allocation. Voting is modelled as a Bayesian weighted voting game with uncertainty about the players' weights. The agenda (i.e., the order in which the players are called to make offers) is defined exogenously. We focus on impatient players with heterogeneous discount factors. In the case of a conflict (i.e., no agreement by the deadline), no player receives anything. We provide a Bayesian subgame perfect equilibrium for the bargaining game and conduct an ex-ante analysis of the resulting outcome. We show that the equilibrium is unique, computable in polynomial time, and results in an instant Pareto-optimal outcome, and that, under certain conditions, it provides a foundation for the core and also the nucleolus of the Bayesian voting game. In addition, our analysis yields insights into how an individual's bargained share is influenced by its position on the agenda. Finally, we show that, if the conflict point of the bargaining game changes, the problem of determining the non-cooperative equilibrium becomes NP-hard even under the perfect-information assumption. Our research also reveals how this change in conflict point affects the above-mentioned results.

13.
In the author's previous publications, a recursive linear algebraic method was introduced for obtaining (without gravitational radiation) the full potential expansions of the gravitational metric field components and the Lagrangian for a general N-body system. Two apparent properties of gravity, Exterior Effacement and Interior Effacement, were defined and fully enforced to obtain the recursive algebra, especially for the motion-independent potential expansions of the general N-body situation. The linear algebraic equations of this method determine the potential coefficients at any order n of the expansions in terms of the lower-order coefficients. Then, by enforcing Exterior and Interior Effacement on a selected few potential series of the full motion-independent potential expansions, the complete exterior metric field for a single, spherically symmetric mass source was obtained, reproducing the Schwarzschild metric field of general relativity. In this fourth paper of the series, the complete spatial metric's motion-independent potentials for N bodies are obtained by enforcing Interior Effacement and using knowledge of the Schwarzschild potentials. From the full spatial metric, the complete set of temporal metric potentials and Lagrangian potentials in the motion-independent case can then be found by transfer equations among the coefficients, κ(n, α) → λ(n, ε) → ξ(n, α), where κ(n, α), λ(n, ε), and ξ(n, α) are the numerical coefficients in the spatial metric, the Lagrangian, and the temporal metric potential expansions, respectively.

14.
Considering the current price gap between hard disk and flash memory SSD storage, for applications dealing with large-scale data it is economically more sensible to use flash memory drives to supplement disk drives rather than to replace them. This paper presents FaCE, a new low-overhead caching strategy that uses flash memory as an extension to the RAM buffer of database systems. FaCE aims at improving transaction throughput as well as shortening the recovery time from a system failure. To achieve these goals, we propose two novel algorithms for flash cache management, namely multi-version FIFO replacement and group second chance. The throughput gains are made possible by flash write optimization as well as the reduction in disk accesses obtained by the FaCE caching methods. In addition, FaCE takes advantage of the non-volatility of flash memory to fully support database recovery by extending the scope of the persistent database to include the data pages stored in the flash cache. We have implemented FaCE in the PostgreSQL open-source database server and demonstrated its effectiveness on TPC-C benchmarks in comparison with existing caching methods such as Lazy Cleaning and Linux Bcache.
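The sketch below gives a simplified reading of a multi-version FIFO flash cache: new page versions are appended rather than updated in place, and stale versions are dropped lazily at eviction time. It is an illustrative interpretation of the description above, not FaCE source code, and the class and callback names are assumptions.

```python
# Simplified sketch of a multi-version FIFO (mvFIFO) flash cache, based on our reading of
# the description above; structure, names, and policy details are illustrative only.
from collections import deque

class MvFifoFlashCache:
    def __init__(self, capacity, write_to_disk):
        self.capacity = capacity                # number of page frames on flash
        self.write_to_disk = write_to_disk      # callback: (page_id, data) -> None
        self.fifo = deque()                     # entries (page_id, version, data); head = oldest
        self.latest = {}                        # page_id -> latest version number

    def cache_page(self, page_id, data):
        """Append a new version instead of updating the old copy in place, so flash writes
        stay sequential; obsolete versions are discarded lazily when they reach the head."""
        version = self.latest.get(page_id, 0) + 1
        self.latest[page_id] = version
        self.fifo.append((page_id, version, data))
        while len(self.fifo) > self.capacity:
            self._evict_oldest()

    def _evict_oldest(self):
        page_id, version, data = self.fifo.popleft()
        if self.latest.get(page_id) == version:
            # newest copy of the page: propagate it to disk before dropping it from flash
            self.write_to_disk(page_id, data)
            del self.latest[page_id]
        # otherwise this entry is an obsolete version and can simply be discarded
```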

15.
YouTube, the popular Internet service, has adopted HyperText Markup Language version 5 (HTML5) by default. With this adoption, YouTube has moved to Dynamic Adaptive Streaming over HTTP (DASH) as its Adaptive BitRate (ABR) video streaming technology. Furthermore, rate adaptation in DASH is solely receiver-driven, which motivates this work to carry out a deep analysis of YouTube's particular DASH implementation. Firstly, this article surveys the state of the art in DASH and adaptive streaming technology, as well as related work on YouTube traffic characterization. Secondly, it describes a new methodology and test-bed for traffic characterization and performance measurement of YouTube's DASH implementation. This methodology and test-bed do not make use of proxies and are able to cope with YouTube traffic redirections. Finally, a set of experimental results is provided, involving a dataset of 310 YouTube videos. The results characterize YouTube's traffic pattern and discuss the allowed download bandwidth, the consumed bitrate, and the video quality. Moreover, the obtained results are cross-validated with an analysis of the HTTP requests performed by YouTube's video player. The outcomes of this article are applicable in the field of Quality of Service (QoS) and Quality of Experience (QoE) management. This is valuable information for Internet Service Providers (ISPs), because QoS management based on assured download bandwidth can be used to provide a target end-user QoE when the YouTube service is being consumed.
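To illustrate the distinction between consumed bitrate and allowed download bandwidth mentioned above, the toy calculation below works from a hypothetical per-segment log; the numbers and log format are invented for illustration and are not taken from the paper's dataset or methodology.

```python
# Toy calculation on a hypothetical DASH segment log (invented data, not the paper's):
# consumed bitrate = bits per second of played-out media; download rate = bits per second
# actually achieved while fetching the segment.
segments = [
    # (bytes_downloaded, media_duration_s, download_time_s)
    (1_500_000, 5.0, 1.2),
    (2_000_000, 5.0, 1.8),
    (1_200_000, 5.0, 0.9),
]

for size, media_s, dl_s in segments:
    consumed_bitrate = 8 * size / media_s   # bit/s needed to play the segment in real time
    download_rate = 8 * size / dl_s         # bit/s achieved on the wire for this segment
    print(f"consumed ≈ {consumed_bitrate / 1e6:.2f} Mbit/s, "
          f"download ≈ {download_rate / 1e6:.2f} Mbit/s")
```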

16.
A (t, n) threshold quantum secret sharing (QSS) scheme is proposed based on a single d-level quantum system. It realizes the (t, n) threshold structure based on Shamir's secret sharing and requires only sequential communication in a d-level quantum system to recover the secret. Besides, the scheme provides a verification mechanism which employs an additional qudit to detect cheating and eavesdropping during secret reconstruction, and it allows a participant to reuse their share. Analyses show that the proposed scheme is resistant to typical attacks. Moreover, the scheme scales in the number of participants and is easier to realize than related schemes. More generally, our scheme also presents a generic method to construct new (t, n) threshold QSS schemes based on a d-level quantum system from other classical threshold secret sharing schemes.
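Since the threshold structure builds on classical Shamir secret sharing, a minimal classical (t, n) Shamir sketch over a prime field is shown below for orientation; the quantum (qudit) part of the protocol is not represented, and the field size is an arbitrary toy choice.

```python
# Minimal classical (t, n) Shamir secret sharing over a prime field, shown only to
# illustrate the threshold structure the quantum scheme builds on; the quantum part of
# the protocol is not represented here.
import random

PRIME = 2_147_483_647  # a Mersenne prime, large enough for toy secrets

def make_shares(secret, t, n, prime=PRIME):
    """Split `secret` into n shares so that any t of them suffice to reconstruct it."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 using any t distinct shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % prime
                den = (den * (xi - xj)) % prime
        # pow(den, prime - 2, prime) is the modular inverse of den (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, prime - 2, prime)) % prime
    return secret

shares = make_shares(123456, t=3, n=5)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares recover 123456
```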

17.
A control problem for a class of input-delayed linear systems is considered in this paper. Due to the delay τ in the input, any designed feedback controller can only be engaged after t ≥ τ. This can cause slow regulation, since no feedback information is available during the delay. Therefore, the initial function defined for -τ ≤ t ≤ 0 acts as an 'initial non-feedback input' for 0 ≤ t ≤ τ, and it governs the system behavior during this initial time period. There have been numerous research results on the control of input-delayed linear systems to date, yet there have been no results on the examination and design of this initial function. Utilizing a time-optimal control from the existing results, we show that if a suitable pre-feedback is engaged as the initial function, the response of the input-delayed linear system can be much improved, and that a bang-bang input function is a good candidate for such a pre-feedback, since it provides better starting state values for the state feedback controller and thus faster regulation. Two examples are given to illustrate our results.
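The toy simulation below is our own illustrative example, not one from the paper: a scalar input-delayed system under delayed state feedback, comparing a zero initial function with a saturated (bang-type) pre-feedback on [-τ, 0). The system parameters, gain, and metric are arbitrary assumptions chosen only to show the qualitative effect.

```python
# Toy simulation (illustrative assumptions only): dx/dt = a*x(t) + b*u(t - tau) with
# state feedback u = -k*x, comparing a zero initial function against a bang-type
# pre-feedback held at one extreme throughout the dead time [0, tau).
import numpy as np

a, b, k, tau, x0 = 0.2, 1.0, 1.0, 0.5, 2.0   # parameters chosen so the closed loop is stable
dt, T = 0.001, 15.0
steps, delay_steps = int(T / dt), int(tau / dt)

def simulate(initial_input):
    """initial_input: constant value of u(theta) for theta in [-tau, 0) (the pre-feedback)."""
    x = np.empty(steps + 1)
    u_hist = np.full(steps + delay_steps, initial_input)   # buffer of past control values
    x[0] = x0
    for i in range(steps):
        x[i + 1] = x[i] + dt * (a * x[i] + b * u_hist[i])  # u_hist[i] plays the role of u(t - tau)
        u_hist[i + delay_steps] = -k * x[i + 1]            # feedback value, effective tau later
    return x

def first_entry_time(x, tol=0.1):
    """First time |x(t)| drops below tol (a rough indicator of regulation speed)."""
    idx = np.flatnonzero(np.abs(x) < tol)
    return idx[0] * dt if idx.size else np.inf

x_zero = simulate(0.0)                   # no pre-feedback: the state drifts during the dead time
x_bang = simulate(-2.0 * np.sign(x0))    # bang-type pre-feedback pushing x toward the origin
print("regulation time, zero initial function:     ", first_entry_time(x_zero))
print("regulation time, bang-type initial function:", first_entry_time(x_bang))
```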

18.
19.
Tracking frequent items (also called heavy hitters) is one of the most fundamental queries over real-time data due to its wide applications, such as logistics monitoring and association-rule-based analysis. Recently, with the growing popularity of the Internet of Things (IoT) and pervasive computing, large amounts of real-time data are collected from multiple sources in a distributed environment. Unfortunately, the data collected from each source are often uncertain due to various factors: imprecise readings, data integration from multiple sources (or versions), transmission errors, etc. In addition, because of network delay and the limited economic budget associated with large-scale data communication over a distributed network, an essential problem is to track the global frequent items from all distributed uncertain data sites with the minimum communication cost. In this paper, we focus on the problem of tracking distributed probabilistic frequent items (TDPF). Specifically, given k distributed sites S = {S_1, ..., S_k}, each of which is associated with an uncertain database D_i of size n_i, a centralized server (or coordinator) H, a minimum support ratio r, and a probabilistic threshold t, we are required to find, with minimum communication cost, a set of items each of which satisfies Pr(sup(X) ≥ r × N) > t, where sup(X) is a random variable describing the support of item X and N = n_1 + ... + n_k. In order to reduce the communication cost, we propose a local-threshold-based deterministic algorithm and a sketch-based sampling approximate algorithm, respectively. The effectiveness and efficiency of the proposed algorithms are verified with extensive experiments on both real and synthetic uncertain datasets.
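To make the probabilistic support predicate concrete, the sketch below evaluates Pr(sup(X) ≥ minsup) > t for a single item whose occurrences have independent existence probabilities, using the standard Poisson-binomial dynamic program; this is a common building block in probabilistic frequent-item mining, not the paper's distributed, communication-efficient algorithms, and the example data are invented.

```python
# Toy illustration of the predicate Pr(sup(X) >= minsup) > t for one item, assuming each
# record containing X exists independently with the given probability (standard
# Poisson-binomial dynamic program; not the paper's distributed algorithms).
def prob_support_at_least(exist_probs, minsup):
    """exist_probs: existence probability of each record containing item X.
    Returns Pr(sup(X) >= minsup), where sup(X) counts how many of those records exist."""
    # dp[j] = probability that exactly j of the records processed so far exist
    dp = [1.0]
    for p in exist_probs:
        new_dp = [0.0] * (len(dp) + 1)
        for j, pj in enumerate(dp):
            new_dp[j] += pj * (1.0 - p)   # this record does not exist
            new_dp[j + 1] += pj * p       # this record exists
        dp = new_dp
    return sum(dp[minsup:])

# Item X appears in 4 uncertain records; is it frequent with minsup = 2 at threshold t = 0.7?
probs = [0.9, 0.6, 0.8, 0.3]
print(prob_support_at_least(probs, 2) > 0.7)   # True for these invented probabilities
```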

20.
Credit-assignment schemas are widely applied to evaluate the contributions of the coauthors of a scientific publication by providing fixed or flexible credit distribution formulas. In this paper, we propose an approach named First and Others (F&O) counting. By introducing a tuning parameter α and a weight β, two new properties are obtained: (1) flexible assignment of credit by modifying the formula (through α) and applying preference to individual authors by adjusting the weights (through β), and (2) calculation of the credit with the formula for the first author separated from that for the other authors. With this formula separation, the credit of the second author shows an inflection point as α changes. The developed theorems and proofs concerning the modification of α and β reveal new properties and complement the base theory of informetrics. The F&O schema is also adapted to account for the policy of 'first-corresponding-author emphasis'. Through a comparative analysis using empirical data from the fields of chemistry, medicine, and psychology, together with the Harvard survey data, the performance of the F&O approach is compared with that of other methods to demonstrate its benefits, using lack of fit and the coefficient of determination as criteria.
