20 similar documents were retrieved.
1.
Software systems can be represented as complex networks, and their artificial nature can be investigated with approaches developed in network analysis. Influence maximization has been successfully applied to software networks to identify the important nodes that have the maximum influence on the other parts. However, the effects of the network fabric on the influence behavior of the highly influential nodes remain an open question. In this paper, we construct class dependence graph (CDG) networks from eight practical Java software systems and apply the influence maximization procedure to study empirically the correlations between the characteristics of maximum influence and the degree distributions in the software networks. We demonstrate that the artificial nature of CDG networks is reflected partly in their scale-free behavior: the in-degree distribution follows a power law, and the out-degree distribution is lognormal. For the influence behavior, the expected influence spread of the maximum influence set identified by the greedy method correlates significantly with the degree distributions. In addition, the identified influence set contains influential classes that are complex in both the number of methods and the lines of code (LOC). For applications in software engineering, the results suggest new approaches to designing optimization procedures for software systems.
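The abstract gives no code; as a rough sketch of the greedy influence-maximization step it refers to, the example below selects seed classes by Monte Carlo estimation of independent-cascade spread over a toy dependency graph. The edge probability `p`, the graph `cdg`, and the simulation counts are illustrative assumptions, not values from the paper.

```python
import random

def simulate_ic(graph, seeds, p=0.1):
    """One Monte Carlo run of the independent cascade model.
    graph: dict node -> list of successor nodes (e.g. class dependencies)."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and random.random() < p:  # each edge fires at most once
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def expected_spread(graph, seeds, p=0.1, runs=1000):
    return sum(simulate_ic(graph, seeds, p) for _ in range(runs)) / runs

def greedy_influence_set(graph, k, p=0.1, runs=1000):
    """Greedy maximization: repeatedly add the node with the largest marginal gain."""
    seeds = []
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: expected_spread(graph, seeds + [n], p, runs))
        seeds.append(best)
    return seeds

# Toy class-dependence graph (hypothetical): A depends on B and C, etc.
cdg = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": [], "E": []}
print(greedy_influence_set(cdg, k=2, runs=200))
```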
2.
Rapid distribution of newly released confidential information is often impeded by network traffic jams, especially when the confidential information is either crucial or highly prized. This is the case for stock market values, blind auction bidding amounts, many large corporations' strategic business plans, certain news agencies' timed publications, and some licensed software updates. Hierarchical time-based information release (HTIR) schemes enable the gradual distribution of encrypted confidential information to large, distributed, (potentially) hierarchically structured user communities, and the subsequent publication of the corresponding short decryption keys at a predetermined time, so that users can rapidly access the confidential information. This paper presents and analyzes the efficiency of a novel HTIR scheme.
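As a very rough, hypothetical illustration of the time-based release idea (bulk ciphertext distributed ahead of time, a short decryption key published at the release instant), the sketch below uses off-the-shelf symmetric encryption from the `cryptography` package; it does not reproduce the hierarchical construction analyzed in the paper.

```python
# pip install cryptography
from cryptography.fernet import Fernet

# Before the release time: generate a short key, encrypt the bulky confidential
# document, and distribute the ciphertext widely while the key stays secret.
key = Fernet.generate_key()                       # the "short decryption key"
ciphertext = Fernet(key).encrypt(b"Q3 strategic business plan ...")

# At the predetermined release time: publish only the small key; every holder of
# the pre-distributed ciphertext can now decrypt immediately, avoiding a traffic jam.
published_key = key
print(Fernet(published_key).decrypt(ciphertext).decode())
```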
Deholo Nali holds an M.Sc. in mathematics and a Ph.D. in computer science from the University of Ottawa, Canada. In the past, he worked for two years as a software developer and pursued research in the design and analysis of identity-based cryptographic protocols. His research interests now include identity theft and graphical password authentication.
3.
Redistributing time-based rights between consumer devices for content sharing in DRM system
Device-based digital rights management (DRM) systems tightly bind rights for content to a device. However, this can reduce consumers' convenience because it prevents consumers from freely using already purchased content on their other devices. Previous research on this problem still carries burdens such as restricting the number of devices a consumer can use and requiring a special device that manages content sharing. In this paper, we propose a new rights sharing scheme that neither restricts the number of devices a consumer can use nor requires a specialized device. In our scheme, the right to use content is represented as the right to use the content for a certain amount of time. Consumers can use the content on any of their devices by redistributing the usage time between devices. The redistribution process requires only local synchronization among the participating devices. To prevent illegal content sharing and to detect illegally increased content usage time, the amount of time a consumer can hold is limited, and each unit of time carries a unique number to prevent illegal duplication. We present data structures and protocols, analyze the security properties of our scheme, compare our scheme with related work, and evaluate our scheme through implementation.
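No code accompanies the abstract; the fragment below is a hypothetical sketch of the core data structure it describes: usage time represented as uniquely numbered units that two devices of the same consumer can redistribute with only local synchronization. The class names, the 60-minute granularity, and the wallet representation are assumptions made for illustration.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class TimeUnit:
    serial: int        # globally unique number, used to detect duplicated units
    minutes: int = 60  # usage time carried by this unit (illustrative granularity)

@dataclass
class Device:
    name: str
    wallet: set = field(default_factory=set)  # TimeUnit tokens currently held

    def redistribute(self, other: "Device", units_to_move: set) -> None:
        """Move usage-time units to another device of the same consumer.
        Only the two participating devices need to synchronize (locally)."""
        if not units_to_move <= self.wallet:
            raise ValueError("cannot transfer units this device does not hold")
        self.wallet -= units_to_move
        other.wallet |= units_to_move  # serial numbers stay unique, so copies are detectable

phone, tablet = Device("phone", {TimeUnit(1), TimeUnit(2)}), Device("tablet")
phone.redistribute(tablet, {TimeUnit(2)})
print(len(phone.wallet), len(tablet.wallet))  # 1 1
```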
4.
Haritsa J. R., Ramamritham K., Gupta R. IEEE Transactions on Parallel and Distributed Systems, 2000, 11(2): 160-181
We investigate the performance implications of providing transaction atomicity for firm-deadline real-time applications, operating on distributed data. Using a detailed simulation model, the real-time performance of a representative set of classical transaction commit protocols is evaluated. The experimental results show that data distribution has a significant influence on real-time performance and that the choice of commit protocol clearly affects the magnitude of this influence. We also propose and evaluate a new commit protocol, PROMPT (Permits Reading Of Modified Prepared-data for Timeliness), that is specifically designed for the real-time domain. PROMPT allows transactions to "optimistically" borrow, in a controlled manner, the updated data of transactions currently in their commit phase. This controlled borrowing reduces the data inaccessibility and the priority inversion that is inherent in distributed real-time commit processing. A simulation-based evaluation shows PROMPT to be highly successful, as compared to the classical commit protocols, in minimizing the number of missed transaction deadlines. In fact, its performance is close to the best on-line performance that could be achieved using the optimistic lending approach. Further, it is easy to implement and incorporate in current database system software. Finally, PROMPT is compared against an alternative priority inheritance-based approach to addressing priority inversion during commit processing. The results indicate that priority inheritance does not provide tangible performance benefits.
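PROMPT's full state machine is not in the abstract; the sketch below is only a hypothetical illustration of the controlled "optimistic borrowing" idea it describes: a cohort that is already prepared may lend its dirty updates, and the borrower's outcome becomes tied to the lender's final decision. All names and structures here are invented for illustration.

```python
class Cohort:
    def __init__(self, tid):
        self.tid = tid
        self.state = "executing"   # executing -> prepared -> committed / aborted
        self.dirty = {}            # item -> updated value, not yet committed
        self.borrowers = set()

def read(item, db, lender: Cohort, borrower: Cohort):
    """PROMPT-style access: if the item is held dirty by a cohort that is already
    prepared (in its commit phase), lend the updated value instead of blocking."""
    if lender.state == "prepared" and item in lender.dirty:
        lender.borrowers.add(borrower)   # borrower now depends on the lender's outcome
        return lender.dirty[item]
    if item in lender.dirty:             # lender still executing: no lending, must wait
        raise RuntimeError("blocked: data held by an executing transaction")
    return db[item]

def resolve(lender: Cohort, decision: str, db):
    """On the lender's final decision, install its updates or abort its borrowers."""
    lender.state = decision
    if decision == "committed":
        db.update(lender.dirty)
    else:                                # lender aborted: borrowed values were invalid
        for b in lender.borrowers:
            b.state = "aborted"
```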
5.
The presumed-either two-phase commit protocol
This paper describes the presumed-either two-phase commit protocol. Presumed-either exploits log piggybacking to reduce the cost of committing transactions. If timely piggybacking occurs, presumed-either combines the performance advantages of presumed-abort and presumed-commit. Otherwise, presumed-either behaves much like the widely-used presumed-abort protocol.
6.
K. V. S. Ramarao. Acta Informatica, 1989, 26(6): 577-595
The implementation of atomic actions in a distributed system in the presence of fail-stop failures is investigated. Worst-case time and message complexities of the protocols realizing this are studied on complete graphs, rings, trees, and arbitrary graphs. Two modes of communication are considered: point-to-point and broadcast. Individual lower and upper bounds on time and messages are presented, and the simultaneous achievability of the optimum message and time bounds is shown to be impossible in all the interesting cases.
7.
8.
Control Engineering Practice, 1999, 7(3): 401-411
The authors of the paper have collaborated in a joint project involving four French control, mechanics and computer-science laboratories. In the paper, various mechanical architectures of biped robots are examined in detail, showing that their walking capabilities are closely linked to the kinematic characteristics of the mechanical structure. Then, it is shown that the geometrical and inertial parameters of the mechanical systems strongly affect the gait. In particular, the influence of the biped inertia on the lateral stability of the system, as well as the conditions of the existence of passive pendular gaits during the swing phase, are computationally analyzed. Extending the ideas previously developed, some characteristics of the mechanical architecture and design of the BIP project can be clearly justified. It turns out that a kinematic structure with 15 degrees of freedom is necessary in order for the biped robot to develop anthropomorphic gaits. Furthermore, as an anthropometric mass distribution can improve the walking abilities of the robot, special transmitters have been designed in order to help to fulfil this requirement.
9.
An important problem in the construction of fault-tolerant distributed database systems is the design of nonblocking transaction commit protocols. This problem has been extensively studied for synchronous systems (i.e., systems where no messages ever arrive late). In this paper, the synchrony assumption is relaxed. A new partially synchronous timing model is described. Developed for this model is a new nonblocking randomized transaction commit protocol, which incorporates an agreement protocol of Ben-Or. The new protocol works as long as fewer than half the processors fail. A matching lower bound is proved, showing that the number of processor faults tolerated is optimal. If half or more of the processors fail, the protocol degrades gracefully: it blocks, but no processor produces a wrong answer. A notion of asynchronous round is defined, and the protocol is shown to terminate in a small constant expected number of asynchronous rounds. In contrast, it is shown that no protocol in this model can guarantee that a processor terminates in a bounded expected number of its own steps, even if processors are synchronous.
Brian A. Coan received the B.S.E. degree in electrical engineering and computer science from Princeton University, Princeton, New Jersey, in 1977; the M.S. degree in computer engineering from Stanford University, Stanford, California, in 1979; and the Ph.D. degree in computer science from the Massachusetts Institute of Technology, Cambridge, Massachusetts, in 1987. He has worked for Amdahl Corporation and AT & T Bell Laboratories. Currently he is a member of the technical staff at Bellcore. His main research interest is fault tolerance in distributed systems.
Jennifer Lundelius Welch received her B.A. in 1979 from the University of Texas at Austin, and her S.M. and Ph.D. from the Massachusetts Institute of Technology in 1984 and 1988, respectively. She was a member of technical staff at GTE Laboratories Incorporated in Waltham, Massachusetts, from 1988 to 1989. She is currently an assistant professor at the University of North Carolina at Chapel Hill. Her research interests include algorithms and lower bounds for distributed computing. The authors were with the MIT Laboratory for Computer Science when the bulk of this work was done. This work was supported in part by the Advanced Research Projects Agency of the Department of Defense under Contract N00014-83-K-0125, the National Science Foundation under Grant DCR-83-02391, the Office of Army Research under Contract DAAG29-84-K-0058, and the Office of Naval Research under Contract N00014-85-K-0168. A preliminary version of this paper appears in the Proceedings of the Fifth Annual ACM Symposium on Principles of Distributed Computing [2].
10.
Estimating the position of mobile devices with high accuracy in indoor environments is of interest across a wide range of applications. Many methods and technologies have been proposed to solve the problem but, to date, there is no "silver bullet". This paper surveys research conducted on indoor positioning using time-based approaches in conjunction with the IEEE 802.11 wireless local area network standard (WiFi). Location solutions using this approach are particularly attractive due to the wide deployment of WiFi and because prior mapping is not needed. This paper provides an overview of the IEEE 802.11 standards and summarizes the key research challenges in 802.11 time-based positioning. The paper categorizes and describes the many proposals published to date, evaluating their implementation complexity and positioning accuracy. Finally, the paper summarizes the state-of-the-art and makes suggestions for future research directions.
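The survey's individual methods are not reproduced here; as a minimal sketch of the arithmetic common to two-way time-based ranging (the building block of most 802.11 time-of-flight positioning), the example below converts a measured round-trip time into a distance. The timestamp and turnaround values are purely illustrative.

```python
C = 299_792_458.0  # speed of light, m/s

def rtt_distance(t_departure, t_arrival, t_processing):
    """Two-way time-of-flight ranging: the responder's turnaround time is subtracted
    from the measured round trip, and the remainder is split over the two legs."""
    tof = (t_arrival - t_departure - t_processing) / 2.0
    return C * tof

# Illustrative numbers (seconds): a ~100 ns one-way flight is roughly 30 m.
print(round(rtt_distance(0.0, 10.2e-6, 10.0e-6), 1))  # -> about 30.0
```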
11.
12.
Although there are several factors contributing to the difficulty of meeting distributed real-time transaction deadlines, data conflicts among transactions, especially in the commitment phase, are the prime factor behind system performance degradation. The design of an efficient commit protocol is therefore of great significance for distributed real-time database systems (DRTDBS). Most existing commit protocols try to improve system performance by allowing a committing cohort to lend its data to an executing cohort, thus reducing data inaccessibility. These protocols block the borrower when it tries to send a WORKDONE/PREPARED message [1, 6, 8, 9], thereby increasing transaction commit time. This paper first analyzes all kinds of dependencies that may arise from data access conflicts between executing and committing transactions when a committing cohort is allowed to lend its data to an executing cohort. It then proposes SWIFT, a static two-phase locking and high-priority based, write-update type commit protocol designed for fast and timely commitment. In SWIFT, the execution phase of a cohort is divided into a locking phase and a processing phase, and a WORKSTARTED message is sent just before the start of the processing phase, in place of the WORKDONE message. Further, the borrower is allowed to send the WORKSTARTED message if it is only commit-dependent on other cohorts, instead of being blocked as in [1, 6, 8, 9]. This reduces the time needed for commit processing and is free from cascaded aborts. To ensure that the ACID properties are not violated, completion of processing and removal of the cohort's dependencies are checked before the YES-VOTE message is sent. Simulation results show that SWIFT improves system performance in comparison to earlier protocols. The performance of SWIFT is also analyzed with a partial read-only optimization, which minimizes intersite message traffic, execute-commit conflicts, and log writes, consequently resulting in a better response time. The impact of permitting the cohorts of the same transaction to communicate with each other [5] on SWIFT has also been analyzed.
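The SWIFT protocol itself is specified in the paper, not here; the sketch below only illustrates, with a hypothetical structuring of its state, the two rules stated in the abstract: WORKSTARTED may be sent once locks are held (even by a borrower that is merely commit-dependent), and YES-VOTE may be sent only after processing completes and all dependencies are removed.

```python
class Cohort:
    def __init__(self):
        self.locks_acquired = False
        self.processing_done = False
        self.commit_dependencies = set()   # lending cohorts this cohort borrowed from
        self.abort_dependencies = set()

    def may_send_workstarted(self):
        # SWIFT sends WORKSTARTED after the locking phase, before processing; a borrower
        # may still send it if it is *only* commit-dependent on other cohorts.
        return self.locks_acquired and not self.abort_dependencies

    def may_send_yes_vote(self):
        # Safety check from the abstract: processing must be complete and all
        # dependencies on lending cohorts removed before voting YES.
        return (self.processing_done
                and not self.commit_dependencies
                and not self.abort_dependencies)
```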
Recommended by: Ahmed Elmagarmid
13.
14.
Journal of Computer and System Sciences, 2006, 72(7): 1226-1237
Composite web services provide promising prospects for conducting cross-organizational business transactions. Such transactions are generally complex, require longer processing times, and manipulate financially critical data. It is therefore crucial to ensure stronger reliability, higher throughput and enhanced performance of transactions. In order to meet these requirements, this paper proposes a new commit protocol for managing transactions in composite web services. Specifically, it aims to improve performance by reducing network delays and the processing time of transactions. The proposed protocol is based on the concept of tentative commit, which allows transactions to tentatively commit on the shared data of web services. The tentative commit protocol avoids resource blocking, thus improving performance. The proposed protocol is tested through various simulation experiments, and the outcomes of these experiments show that it outperforms existing protocols in terms of transaction performance.
15.
This paper presents an overview of two maintenance techniques widely discussed in the literature: time-based maintenance (TBM) and condition-based maintenance (CBM). The paper discusses how the TBM and CBM techniques work toward maintenance decision making. Recent research articles covering the application of each technique are reviewed. The paper then compares the challenges of implementing each technique from a practical point of view, focusing on the issues of required data determination and collection, data analysis/modelling, and decision making. The paper concludes with significant considerations for future research. Each of the techniques was found to have unique concepts/principles, procedures, and challenges for real industrial practice. It can be concluded that the application of the CBM technique is more realistic, and thus more worthwhile to apply, than the TBM one. However, further research on CBM must be carried out in order to make it more realistic for making maintenance decisions. The paper provides useful information regarding the application of the TBM and CBM techniques in maintenance decision making and explores the challenges in implementing each technique from a practical perspective.
16.
Intrusion detection collects various kinds of network data and analyzes them to discover possible intrusion behavior. Clustering is an unsupervised classification method that can be applied effectively to intrusion detection. This paper proposes an anomaly intrusion detection method based on cluster analysis and a time series model; the method can detect many different types of intrusions without a manually labeled training data set. Experimental results show that the method achieves a high detection rate and a low false alarm rate for intrusion detection.
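The paper's exact algorithm is not reproduced here; as a rough sketch of unsupervised, clustering-based anomaly detection of the kind described, the example below clusters unlabeled traffic feature vectors with k-means and flags points whose distance to the nearest cluster centre is unusually large. The feature layout, cluster count, and threshold quantile are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_anomalies(X, n_clusters=3, quantile=0.95):
    """Cluster unlabeled traffic feature vectors and flag points far from every
    cluster centre (no manually labeled training data is needed)."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    dist_to_nearest = km.transform(X).min(axis=1)        # distance to closest centre
    threshold = np.quantile(dist_to_nearest, quantile)   # simple empirical cut-off
    return dist_to_nearest > threshold                   # True = suspected intrusion

rng = np.random.default_rng(0)
normal = rng.normal(0, 1, size=(300, 4))   # simulated "normal" connection features
attacks = rng.normal(6, 1, size=(5, 4))    # a few far-away outliers
flags = detect_anomalies(np.vstack([normal, attacks]))
print(int(flags.sum()), "connections flagged as anomalous")
```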
17.
Popular dynamic reanalysis methods, such as the combined approximation (CA) method, focus mainly on the frequency domain. In time domain analysis, by contrast, the main challenge for reanalysis methods is to calculate the responses in each iteration. Due to this difficulty, popular reanalysis methods are not available for time domain integrators such as the Newmark-$\beta$ and central difference methods. Therefore, a novel adaptive time-based global reanalysis (ATGR) algorithm for the Newmark-$\beta$ method is suggested. If basis vectors were generated to predict the response at every time step, the computational cost of reanalysis would increase significantly; to improve efficiency, an adaptive reanalysis algorithm is suggested. Moreover, in order to enhance the accuracy of the popular CA reanalysis, a global strategy is suggested for constructing the basis vectors. Numerical examples show that accurate approximations are achieved efficiently for time domain problems.
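The ATGR algorithm is not reproduced here; for context, the sketch below implements one step of the standard Newmark-$\beta$ integrator, whose per-step solve with the effective stiffness matrix is the cost that a time-domain reanalysis method seeks to approximate. The matrices M, C, K, the load vector, and the default $\beta$, $\gamma$ values are the usual textbook quantities, not data from the paper.

```python
import numpy as np

def newmark_beta_step(M, C, K, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
    """One step of the standard Newmark-beta integrator (average acceleration by
    default). Returns displacement, velocity, acceleration at time t + dt."""
    # Effective stiffness and effective load for a constant-coefficient linear system
    K_eff = K + (gamma / (beta * dt)) * C + (1.0 / (beta * dt**2)) * M
    rhs = (f_next
           + M @ ((1.0 / (beta * dt**2)) * u + (1.0 / (beta * dt)) * v
                  + (1.0 / (2.0 * beta) - 1.0) * a)
           + C @ ((gamma / (beta * dt)) * u + (gamma / beta - 1.0) * v
                  + dt * (gamma / (2.0 * beta) - 1.0) * a))
    u_next = np.linalg.solve(K_eff, rhs)   # the expensive solve that reanalysis targets
    a_next = ((u_next - u) / (beta * dt**2) - v / (beta * dt)
              - (1.0 / (2.0 * beta) - 1.0) * a)
    v_next = v + dt * ((1.0 - gamma) * a + gamma * a_next)
    return u_next, v_next, a_next

# Tiny 2-DOF usage example with made-up matrices
M = np.eye(2); C = 0.1 * np.eye(2); K = np.array([[4.0, -2.0], [-2.0, 4.0]])
u = np.zeros(2); v = np.zeros(2); a = np.zeros(2)
for step in range(5):
    u, v, a = newmark_beta_step(M, C, K, np.array([1.0, 0.0]), u, v, a, dt=0.01)
print(u)
```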
18.
Based on water reflectance spectra measured in situ at Songhua Lake in July 2008 and chlorophyll concentrations obtained from laboratory analysis, the relationship between the reflectance spectral features of Songhua Lake water and chlorophyll concentration is explored and analyzed. The results show that chlorophyll concentration correlates well with reflectance at every wavelength, and the reflectance at 700 nm was selected to build a single-band model. The ratio of the reflectances at 700 nm and 677 nm, the first derivative of the spectrum at 685 nm, and the geometric features of the reflectance peak at 700 nm also show good correlations. Estimation models for chlorophyll concentration in Songhua Lake water are given, providing a theoretical basis and reference for retrieval-based monitoring of chlorophyll concentration in Songhua Lake.
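The paper's regression coefficients are not reproduced above; as a purely illustrative sketch of the band-ratio type of model it describes (chlorophyll concentration regressed on the R(700 nm)/R(677 nm) reflectance ratio), the example below fits a least-squares line to made-up sample values.

```python
import numpy as np

# Hypothetical samples: reflectance ratio R(700 nm)/R(677 nm) vs. measured chl-a (ug/L)
ratio = np.array([0.92, 1.01, 1.10, 1.25, 1.40])
chl_a = np.array([8.3, 12.1, 16.8, 24.5, 31.2])

# Least-squares fit of chl-a = a * ratio + b, the simplest band-ratio estimation model
a, b = np.polyfit(ratio, chl_a, 1)
print(f"chl-a ~= {a:.1f} * R700/R677 + {b:.1f}")
print("estimate for ratio 1.2:", round(a * 1.2 + b, 1), "ug/L")
```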
19.
20.
To reduce the size of regression test suites and improve the efficiency of regression testing in a continuous integration (CI) environment, a regression test suite selection method suited to CI is proposed. First, commits are ranked using the historical failure rate and execution rate of each commit's test suites. Then, a machine learning method is used to predict the failure rate of the test suites involved in each commit, and the test suites with higher predicted failure rates are selected. The method combines commit prioritization and test suite selection, thereby improving the fault detection rate while reducing testing cost to some extent. Experimental results on an open-source dataset from Google show that, compared with a method using only commit prioritization and a method using only test suite selection, the proposed method improves the cost-aware average percentage of faults detected (APFDc) by 1% to 27%; under the same testing time cost, its test recall rises by 33.33 to 38.16 percentage points, its change recall rises by 15.67 to 24.52 percentage points, and its test suite selection rate falls by about 6 percentage points.
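The paper's actual features and learner are not given in the abstract; the sketch below illustrates the general mechanism (predict each test suite's failure probability from historical signals, then select only the suites above a threshold) with a hypothetical two-feature logistic model and made-up data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-suite features for past commits: [historical failure rate, execution rate]
X_train = np.array([[0.30, 0.9], [0.05, 0.8], [0.50, 0.4], [0.01, 0.95], [0.20, 0.6]])
y_train = np.array([1, 0, 1, 0, 0])          # 1 = the suite failed on that commit

model = LogisticRegression().fit(X_train, y_train)

def select_suites(suites, features, threshold=0.5):
    """Keep only the suites whose predicted failure probability exceeds the threshold."""
    prob_fail = model.predict_proba(features)[:, 1]
    return [s for s, p in zip(suites, prob_fail) if p > threshold]

# Features of the suites touched by a new commit (illustrative values)
new_commit_features = np.array([[0.40, 0.7], [0.02, 0.9]])
print(select_suites(["suite_api", "suite_ui"], new_commit_features))
```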