20 similar documents found; search took 0 ms.
1.
Weverton Luis da Costa Cordeiro, Flávio Roberto Santos, Gustavo Huff Mauch, Marinho Pilla Barcelos, Luciano Paschoal Gaspary 《Computer Networks》2012,56(11):2569-2589
The Sybil attack consists in the indiscriminate creation of counterfeit identities by a malicious user (attacker) in large-scale, dynamic distributed systems (for example, Peer-to-Peer). An effective approach to tackling this attack is to require computational puzzles to be solved before new identities are granted. Solutions based on this approach have the potential to slow down the assignment of identities to malicious users, but unfortunately may affect normal users as well. To address this problem, we propose the use of adaptive computational puzzles to limit the spread of Sybils. The key idea is to estimate a trust score for the source from which identity requests depart, calculated as the proportion of identities already granted to users associated with that source relative to the average number of identities granted to users associated with other sources. The more frequently the users associated with a source obtain identities, the lower the trust score of that source and, consequently, the higher the complexity of the puzzle to be solved. An in-depth analysis of both (i) the performance of our mechanism under various parameter and environment settings and (ii) the results of an experimental evaluation, considering real-life traces from a Peer-to-Peer file-sharing community, shows the effectiveness of the proposed mechanism in limiting the spread of Sybil identities. While comparatively more complex puzzles were assigned to potential attackers, legitimate users were minimally penalized with easier-to-solve puzzles.
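The adaptive-puzzle idea above can be sketched as follows. This is a minimal illustration, not the authors' mechanism: the exact scoring formula and the mapping from trust score to proof-of-work difficulty are assumptions.

```python
# Sketch of adaptive puzzle difficulty: sources that obtain identities
# more often than the network average get a lower trust score and,
# consequently, a harder puzzle. Formula and constants are assumptions.

def trust_score(granted_to_source, avg_granted_elsewhere):
    """Trust decreases as a source accumulates more identities than average."""
    if granted_to_source == 0:
        return 1.0
    return min(1.0, avg_granted_elsewhere / granted_to_source)

def puzzle_difficulty(score, base_bits=16, max_extra_bits=8):
    """Map trust score to a hypothetical proof-of-work difficulty
    (number of leading zero bits required in the puzzle solution)."""
    return base_bits + round((1.0 - score) * max_extra_bits)

# A source at the network average solves the easiest puzzle...
assert puzzle_difficulty(trust_score(10, 10)) == 16
# ...while a source with 5x the average identities gets a harder one.
assert puzzle_difficulty(trust_score(50, 10)) == 22
```

The point of the design is that legitimate users, whose request rate stays near the average, keep a trust score close to 1.0 and are only minimally penalized.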
2.
《Journal of Parallel and Distributed Computing》2005,65(2):154-168
Large-scale P2P systems typically have hundreds of thousands of peers and frequent dynamic activity. Current structured overlays do not identify the rhythm of this dynamic activity well, resulting in high maintenance overhead. In this paper, we present a new state cache system, called SCS, that solves the problem by exploiting the access patterns of dynamic activities in P2P systems. SCS partitions the whole P2P network into clusters and dynamically chooses a “super” node in each cluster to selectively record and maintain the routing information for departed nodes most likely to return in the near future. The cached routing information enables SCS to simplify self-organization, reduce system maintenance overhead and provide high-quality routing service. The experimental results show that SCS reduces the maintenance overhead by up to 66% while delivering much better routing performance, as compared to current structured P2P systems.
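The super-node caching idea can be sketched roughly as below. The class, method names, and TTL policy are illustrative assumptions, not the SCS protocol itself.

```python
import time

class StateCache:
    """Sketch of an SCS-style super-node cache: keep routing state for
    departed peers expected to return soon, so a returning peer can skip
    a full rejoin. TTL policy and structure names are assumptions."""

    def __init__(self, ttl=300.0):
        self.ttl = ttl
        self.entries = {}  # node_id -> (routing_info, departure_time)

    def node_departed(self, node_id, routing_info, now=None):
        self.entries[node_id] = (routing_info,
                                 now if now is not None else time.time())

    def node_returned(self, node_id, now=None):
        """Restore cached routing state if the peer is back within the TTL."""
        now = now if now is not None else time.time()
        entry = self.entries.pop(node_id, None)
        if entry and now - entry[1] <= self.ttl:
            return entry[0]  # reuse old routing state, avoid full rejoin
        return None          # state expired: peer must rejoin normally

cache = StateCache(ttl=300.0)
cache.node_departed("peer-42", {"successor": "peer-77"}, now=0.0)
assert cache.node_returned("peer-42", now=120.0) == {"successor": "peer-77"}
```

The maintenance saving comes from the hit case: a cache hit replaces the usual join/stabilization traffic with a single state restore at the cluster's super node.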
3.
A practical approach to testing GUI systems (Cited by 1: 0 self-citations, 1 other)
GUI systems are becoming increasingly popular thanks to their ease of use compared with traditional systems. However, GUI systems are often challenging to test due to their complexity and special features. Traditional testing methodologies are not designed to deal with the complexity of GUI systems; using these methodologies can result in increased time and expense. In our proposed strategy, a GUI system is divided into two abstract tiers: the component tier and the system tier. On the component tier, a flow graph is created for each GUI component; each flow graph represents a set of relationships between the pre-conditions, event sequences and post-conditions for the corresponding component. On the system tier, the components are integrated to build up a viewpoint of the entire system, and tests interrogate the interactions between the components. This method for GUI testing is simple and practical; we show the effectiveness of this approach through two empirical experiments and a description of the results found.
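The component tier described above can be sketched as data plus a small derivation step. The login-dialog component, its condition names, and the event vocabulary are illustrative assumptions, not taken from the paper.

```python
# Sketch of the component tier: each GUI component is modeled as a small
# flow graph relating pre-conditions, an event sequence, and post-conditions.
# The login-dialog example and all names are illustrative assumptions.

login_dialog = {
    "pre": {"dialog_open"},
    "flows": [
        (["type_user", "type_password", "click_ok"], {"logged_in"}),
        (["click_cancel"], {"dialog_closed"}),
    ],
}

def derive_tests(component):
    """Enumerate (pre-conditions, event sequence, expected post-conditions)
    triples, i.e. one abstract test case per path through the flow graph."""
    return [(component["pre"], events, post)
            for events, post in component["flows"]]

tests = derive_tests(login_dialog)
assert len(tests) == 2
assert tests[0][2] == {"logged_in"}
```

System-tier tests would then chain such triples: a flow's post-conditions must satisfy the pre-conditions of the next component exercised.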
Ping Li received her M.Sc. in Computer Engineering from the University of Alberta, Canada, in 2004. She is currently working for Waterloo Hydrogeologic Inc., a Schlumberger Company, as a Software Quality Analyst. Toan Huynh received a B.Sc. in Computer Engineering from the University of Alberta, Canada. He is currently a PhD candidate at the same institution. His research interests include: web systems, e-commerce, software testing, vulnerabilities and defect management, and software approaches to the production of secure systems. Marek Reformat received his M.Sc. degree from Technical University of Poznan, Poland, and his Ph.D. from University of Manitoba, Canada. His interests were related to simulation and modeling in time-domain, as well as evolutionary computing and its application to optimization problems. For three years he worked for the Manitoba HVDC Research Centre, Canada, where he was a member of a simulation software development team. Currently, Marek Reformat is with the Department of Electrical and Computer Engineering at University of Alberta. His research interests lay in the areas of application of Computational Intelligence techniques, such as neuro-fuzzy systems and evolutionary computing, as well as probabilistic and evidence theories to intelligent data analysis leading to translating data into knowledge. He applies these methods to conduct research in the areas of Software Engineering, Software Quality in particular, and Knowledge Engineering. Dr. Reformat has been a member of program committees of several conferences related to Computational Intelligence and evolutionary computing. He is a member of the IEEE Computer Society and ACM. James Miller received the B.Sc. and Ph.D. degrees in Computer Science from the University of Strathclyde, Scotland. During this period, he worked on the ESPRIT project GENEDIS on the production of a real-time stereovision system. 
Subsequently, he worked at the United Kingdom’s National Electronic Research Initiative on Pattern Recognition as a Principal Scientist, before returning to the University of Strathclyde to accept a lectureship, and subsequently a senior lectureship in Computer Science. Initially during this period his research interests were in Computer Vision, and he was a co-investigator on the ESPRIT 2 project VIDIMUS. Since 1993, his research interests have been in Software and Systems Engineering. In 2000, he joined the Department of Electrical and Computer Engineering at the University of Alberta as a full professor and in 2003 became an adjunct professor at the Department of Electrical and Computer Engineering at the University of Calgary. He is the principal investigator in a number of research projects that investigate software verification and validation issues across various domains, including embedded, web-based and ubiquitous environments. He has published over one hundred refereed journal and conference papers on Software and Systems Engineering (see www.steam.ualberta.ca for details on recent directions); and currently serves on the program committee for the IEEE International Symposium on Empirical Software Engineering and Measurement; and sits on the editorial board of the Journal of Empirical Software Engineering.
4.
A dynamic load-balancing algorithm for P2P systems (Cited by 1: 0 self-citations, 1 other)
In real P2P network environments, load imbalance is pronounced because nodes are heterogeneous in computing power, bandwidth and other respects. Based on a data replication/transfer strategy, this paper proposes a dynamic balancing algorithm. Using each node's capacity, its current load state, and an estimate of the load-transfer cost, the algorithm finds, across the whole system, a set of lightly loaded nodes with low transfer cost, and randomly selects a suitable node from this set for load transfer or data replication. Experimental results show that the algorithm effectively balances the load distribution and reduces the load migration rate.
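The candidate-selection step in such an algorithm can be sketched as a filter plus a random pick. The cost model, thresholds, and tuple layout are illustrative assumptions, not the paper's actual estimator.

```python
import random

# Sketch of the load-transfer step: among lightly loaded nodes, keep those
# with low estimated transfer cost and pick one at random. The thresholds
# and the (id, load, cost) tuple layout are illustrative assumptions.

def pick_target(nodes, load_threshold, cost_threshold):
    """nodes: list of (node_id, current_load, transfer_cost) tuples.
    Returns a randomly chosen suitable target, or None if none qualifies."""
    candidates = [n for n, load, cost in nodes
                  if load < load_threshold and cost < cost_threshold]
    return random.choice(candidates) if candidates else None

nodes = [("a", 0.9, 1), ("b", 0.2, 5), ("c", 0.3, 2), ("d", 0.1, 1)]
target = pick_target(nodes, load_threshold=0.5, cost_threshold=3)
assert target in {"c", "d"}  # "b" is light but too costly; "a" is overloaded
```

Randomizing within the qualifying set, rather than always picking the single lightest node, helps avoid herding every transfer onto the same target.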
5.
This paper is devoted to the generic observability analysis for structured bilinear systems using a graph-theoretic approach. On the basis of a digraph representation, we express in graphic terms the necessary and sufficient conditions for the generic observability of structured bilinear systems. These conditions have an intuitive interpretation and are easy to check by hand for small systems and by means of well-known combinatorial techniques for large-scale systems.
6.
Nadir Shah, Ayaz Ahmad, Babar Nazir, Depei Qian 《Peer-to-Peer Networking and Applications》2016,9(2):356-371
Due to the limited radio range and mobility of nodes in mobile ad hoc networks (MANETs), network partitioning and merging can occur frequently. When structured peer-to-peer (P2P) overlays run over MANETs, a network partition in the physical network can also cause a partition at the overlay layer. Existing approaches for structured P2P overlays over MANETs do not detect network partitions at the overlay layer. This paper proposes a cross-layer approach to detect network partitions at the overlay layer for structured P2P overlays over MANETs. Simulation results show that the proposed approach is highly effective and efficient in terms of routing overhead, success ratio and false-negative ratio.
7.
高迎 《计算机工程与设计》2013,34(4)
To distribute data evenly in a structured P2P system, this work borrows the basic idea of data partitioning from parallel databases: storage-balancing algorithms applied when nodes join and when data are inserted keep large data volumes evenly stored across the system, greatly reducing the system's storage-variation coefficient. A structured P2P model using data partitioning, Balance-Peer, is designed; it implements dynamic data partitioning without requiring global information. Experimental results show that the storage-balancing strategy is effective.
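One common form of join-time storage balancing can be sketched as below: a joining node splits the key range of the currently heaviest node. This is a generic sketch under assumed data structures, not Balance-Peer's actual protocol.

```python
# Sketch of join-time storage balancing: a new node splits the key range
# of the heaviest node, halving its stored-item count. The dict layout
# {node: (lo, hi, item_count)} is an illustrative assumption.

def join(ranges, new_node):
    """Split the key range of the heaviest node between it and new_node."""
    heavy = max(ranges, key=lambda n: ranges[n][2])
    lo, hi, count = ranges[heavy]
    mid = (lo + hi) // 2
    ranges[heavy] = (lo, mid, count // 2)           # heavy keeps lower half
    ranges[new_node] = (mid, hi, count - count // 2)  # new node takes the rest
    return ranges

r = {"n1": (0, 100, 80), "n2": (100, 200, 20)}
join(r, "n3")
assert r["n1"] == (0, 50, 40) and r["n3"] == (50, 100, 40)
```

After the join, the storage-variation coefficient (standard deviation of item counts over their mean) drops, which is the quantity the abstract reports as "greatly reduced".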
8.
9.
Software engineering should provide software engineers with methodologies and tools suitable for use in the small number of applications where efficiency is really important. To do so, the optimization process should be a clearly visible phase of the software lifecycle (regardless of the particular software development paradigm adopted), so that it can be regulated, securing the production of good-quality, efficient software. With this in mind, the author suggests an approach to program optimization based on a paradigm, a method, some principles and guidelines, and some well-known techniques.
10.
V. R. Basili 《Computer Languages, Systems and Structures》1975,1(3):255-273
This report is an attempt at systematizing a set of ground rules for high-level language design. It recommends the use of a hierarchical semantic model schema, HGL, in a step-by-step, top-down approach that imposes more and more structure on the language components as the design becomes solidified. The approach is demonstrated by showing the stepwise design of the high-level language GRAAL. The recommended method is divided into three major phases. The first is an informal one. The second encodes the language components into a very high-level model; this high-level design allows language components to be redesigned before they have been specified at too detailed a level. The third phase is to design the compiler in HGL using the final language design.
11.
This paper deals with the state and input observability analysis for structured linear systems with unknown inputs. The proposed method is based on a graph-theoretic approach and assumes only the knowledge of the system's structure. Using a particular decomposition of the system into two subsystems, we express, in simple graphic terms, necessary and sufficient conditions for generic state and input observability. These conditions are easy to check because they are based on comparing integers and on finding particular subgraphs in a digraph. Therefore, our approach is suited to studying large-scale systems.
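One graphic condition commonly used in this kind of analysis can be sketched directly: in the digraph of a structured system, every state vertex must reach some output vertex (output connectability), a necessary condition for generic observability. The example graph below is an illustrative assumption, not taken from the paper.

```python
# Sketch of an output-connectability check on a structured system's digraph:
# every state vertex must have a directed path to an output vertex.
# Vertex names (x* for states, y* for outputs) are illustrative assumptions.

def reaches_output(edges, states, outputs):
    """Return True iff each state vertex can reach some output vertex."""
    def reachable(src):
        seen, stack = set(), [src]
        while stack:
            v = stack.pop()
            if v in outputs:
                return True
            if v not in seen:
                seen.add(v)
                stack.extend(w for u, w in edges if u == v)
        return False
    return all(reachable(s) for s in states)

edges = [("x1", "x2"), ("x2", "y1")]  # x1 -> x2 -> y1
assert reaches_output(edges, {"x1", "x2"}, {"y1"})
# A state disconnected from every output fails the check:
assert not reaches_output(edges + [("x3", "x3")], {"x1", "x2", "x3"}, {"y1"})
```

The paper's full conditions additionally involve comparing integers (e.g. sizes of linkings/matchings in the digraph); the sketch above covers only the reachability part.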
12.
13.
This paper analyzes the inherent security problems of large-scale P2P systems based on distributed hash table (DHT) lookup, and proposes a DoS attack against DHT systems that exploits algorithmic complexity. It describes the conditions required to mount the attack and how such attacks are constructed, and finally, in light of P2P network trends, summarizes the limitations of this attack method and directions for further research.
14.
Seán I. O’Donoghue, Heiko Horn, Evangelos Pafilis, Sven Haag, Michael Kuhn, Venkata P. Satagopam, Reinhard Schneider, Lars J. Jensen 《Journal of Web Semantics》2010,8(2-3):182-189
To date, adding semantic capabilities to web content usually requires considerable server-side re-engineering; thus only a tiny fraction of all web content currently has semantic annotations. Recently, we announced Reflect (http://reflect.ws), a free service that takes a more practical approach: Reflect uses augmented browsing to allow end-users to add systematic semantic annotations to any web page in real time, typically within seconds. In this paper we describe the tagging process in detail and show how further entity types can be added to Reflect; we also describe how publishers and content providers can access Reflect programmatically using SOAP, REST (HTTP post), and JavaScript. Usage of Reflect has grown rapidly within the life sciences, and while currently only gene, protein and small-molecule names are tagged, we plan to soon expand the scope to include a much broader range of terms (e.g., Wikipedia entries). The popularity of Reflect demonstrates the use and feasibility of letting end-users decide how and when to add semantic annotations. Ultimately, ‘semantics is in the eye of the end-user’; hence we believe end-user approaches such as Reflect will become increasingly important in semantic web technologies.
15.
Tensegrity systems are lightweight structures composed of cables and struts. The nonlinear behavior of tensegrity systems is critical; therefore, the design of these types of structures is relatively complex. In the present study, a practical and efficient approach for the geometrically nonlinear analysis of tensegrity systems is proposed. The approach is based on the point iterative method. Static equilibrium equations are given at the nodes for subsystems, so the maximum number of unknown displacements in each step is three. Pre-stress forces in the system are taken into account in a tangent stiffness matrix, and similar calculations are carried out for each node in the system that has at least one degree of freedom. In each iteration step, the values found in previous steps are used. When the permissible calculation tolerance is reached, the final displacements and internal forces are obtained. The structural behavior of tensegrity systems was evaluated by the proposed method, and the results show that the method can be used effectively for tensegrity systems.
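The node-by-node iteration described above, where each step updates a few unknowns while reusing values from previous steps, is structurally a Gauss-Seidel sweep. Below is a toy analogue on a 2x2 system; the matrix is illustrative, not an actual tangent stiffness matrix of a tensegrity structure.

```python
# Toy analogue of the point iterative method: solve K d = f by Gauss-Seidel,
# updating one displacement at a time, reusing values from earlier steps,
# until the change falls below a tolerance. The 2x2 system is an assumption.

def gauss_seidel(K, f, tol=1e-10, max_iter=1000):
    n = len(f)
    d = [0.0] * n
    for _ in range(max_iter):
        change = 0.0
        for i in range(n):
            # Row i uses the most recent values of all other unknowns.
            s = sum(K[i][j] * d[j] for j in range(n) if j != i)
            new = (f[i] - s) / K[i][i]
            change = max(change, abs(new - d[i]))
            d[i] = new
        if change < tol:
            break
    return d

K = [[4.0, 1.0], [1.0, 3.0]]
f = [1.0, 2.0]
d = gauss_seidel(K, f)
# Residual K d - f should be ~0 after convergence.
assert all(abs(sum(K[i][j] * d[j] for j in range(2)) - f[i]) < 1e-8
           for i in range(2))
```

In the actual method, each "row" is a node with at most three displacement unknowns, and the stiffness matrix is re-tangentized to account for pre-stress and geometric nonlinearity.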
16.
17.
An approach to the correctness proof of static semantics with respect to the standard semantics of a programming language is presented, where correctness means that the properties of the language described by the static semantics, such as type checking, are consistent with the standard semantics. The standard and static semantics are given in a denotational style in terms of some basic domains and domain constructors, which, together with suitable operations, are used to describe fundamental semantic concepts. The domains have different meanings in the two semantics, and the static semantics correctness proof is carried out by devising a set of suitable functions between them. We show that the correctness proof can be greatly simplified by structuring the semantics definitions, and we illustrate this by applying the methodology to a simple imperative language. In the example, the derivation of a static checking algorithm from the static semantics is described.
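The static/standard-semantics relationship can be illustrated in miniature: a static semantics (type checker) and a standard semantics (evaluator) for the same tiny expression language, where consistency means a well-typed expression never fails at evaluation. The language and both functions are illustrative assumptions, far simpler than the paper's denotational setting.

```python
# A tiny two-semantics example: typecheck is the "static semantics",
# evaluate the "standard semantics". Consistency: if typecheck succeeds,
# evaluate cannot hit a type error. Expressions are literals or
# (op, lhs, rhs) tuples; the language is an illustrative assumption.

def typecheck(e):
    if isinstance(e, bool):
        return "bool"
    if isinstance(e, int):
        return "int"
    op, a, b = e
    ta, tb = typecheck(a), typecheck(b)
    if op == "+" and ta == tb == "int":
        return "int"
    if op == "and" and ta == tb == "bool":
        return "bool"
    raise TypeError(f"ill-typed: {e!r}")

def evaluate(e):
    if isinstance(e, (bool, int)):
        return e
    op, a, b = e
    if op == "+":
        return evaluate(a) + evaluate(b)
    return evaluate(a) and evaluate(b)

expr = ("+", 1, ("+", 2, 3))
assert typecheck(expr) == "int" and evaluate(expr) == 6
```

A correctness proof in the paper's sense would relate the two definitions formally, e.g. by functions mapping each static domain (types) into predicates over the corresponding dynamic domain (values).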
18.
Multimedia Tools and Applications - Cloud computing is an intelligent integration of distributed computing, hardware virtualization techniques, automated data center techniques and Internet...
19.
20.
Metrics programs that create meaningful change in software practice must start with business goals in mind. Software metrics are quantitative standards of measurement for various aspects of software projects. A well-designed metrics program will support decision making by management and enhance return on the IT investment. There are many aspects of software projects that can be measured, but not all aspects are worth measuring. Starting a new metrics program or improving a current program consists of five steps: identify business goals; select metrics; gather historical data; automate measurement procedures; and use metrics in decision making.
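The five steps above end in decision making, which can be sketched as comparing a current measurement against a historical baseline. The metric choice (defect density), the numbers, and the threshold policy are illustrative assumptions.

```python
# Sketch of the decision-making end of a metrics program: compare a
# project's defect density against the historical baseline. The metric,
# the numbers, and the flagging policy are illustrative assumptions.

def defect_density(defects, kloc):
    """Defects per thousand lines of code."""
    return defects / kloc

# Step 3: gather historical data (here, three past projects).
historical = [defect_density(d, k) for d, k in [(30, 12.0), (45, 20.0), (18, 9.0)]]
baseline = sum(historical) / len(historical)   # 2.25 defects/KLOC

# Step 5: use the metric in a decision.
current = defect_density(52, 16.0)             # 3.25 defects/KLOC
assert current > baseline                      # above baseline -> flag for review
```

The earlier steps (goal identification, metric selection, automation) determine whether this comparison is worth making at all, which is the abstract's central point.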