Found 20 similar documents; search took 203 ms.
1.
Jun Li 《Peer-to-Peer Networking and Applications》2011,4(4):325-345
Conventional client-server applications can be enhanced by enabling peer-to-peer data sharing between the clients, greatly reducing the scalability concern when a large number of clients access a single server. However, for these "hybrid peer-to-peer applications," obtaining data from peer clients may not be secure, and clients may lack incentives to provide data to, or receive data from, their peers. In this paper, we describe our mSSL framework, which encompasses key security and incentive functions that hybrid peer-to-peer applications can selectively invoke based on their needs. In contrast to the conventional SSL protocol, which only protects client-server connections, mSSL not only supports client authentication and data confidentiality, but also ensures data integrity through a novel use of Merkle hash trees, all under the assumption that data sharing may take place between untrustworthy clients. Moreover, with mSSL's incentive functions, any client that provides data to its peers can also securely and reliably obtain accurate proofs or digital money for its service. Our evaluation further shows that mSSL is fast and effective, with only reasonable overhead.
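The abstract does not give mSSL's exact construction, but the core idea of Merkle-tree integrity checking can be sketched as follows: a client obtains the root hash securely (e.g., from the server over SSL), then verifies each block received from an untrusted peer against that root using a short audit path. All function names and block contents here are illustrative, not mSSL's API.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(blocks):
    """Return a list of levels: level[0] = leaf hashes, level[-1] = [root]."""
    level = [h(b) for b in blocks]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                       # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def audit_path(levels, index):
    """Sibling hashes needed to recompute the root from leaf `index`."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        path.append((level[sibling], sibling < index))   # (hash, sibling-is-left)
        index //= 2
    return path

def verify(block, path, root):
    """Recompute the root from an untrusted block plus its audit path."""
    digest = h(block)
    for sibling, is_left in path:
        digest = h(sibling + digest) if is_left else h(digest + sibling)
    return digest == root

blocks = [b"block-%d" % i for i in range(5)]
levels = build_tree(blocks)
root = levels[-1][0]           # obtained securely from the server in mSSL's setting
```

A tampered block fails verification because its leaf hash no longer combines to the trusted root, so a client can reject bad data from a peer without contacting the server per block.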
2.
Much of the ongoing research in ubiquitous computing has concentrated on providing context information, e.g. location information,
to the level of services and applications. Typically, mobile clients obtain location information from their environment, which is used to provide "locally optimal" services. In contrast, it may be of interest to obtain information about the current
context a mobile user or device is in, from a client somewhere on the Web, i.e. to use the mobile device as an information
provider for Internet clients.
As an instance of such services, we propose the metaphor of a "location-aware" Web homepage of mobile users, providing information about, e.g., the current location of the mobile user. Requesting this homepage can be as easy as typing a URL containing the mobile user's phone number, such as http://mhp.net/+49123456789, in an off-the-shelf browser. The homepage is dynamically constructed as Web users access it, and it can be configured in various ways controlled by the mobile user. We present the architecture and implementation and discuss issues around this example of "inverse" ubiquitous computing.
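The dispatch idea, a URL path carrying the phone number and the page built at request time, can be sketched minimally as below. The registry contents and markup are invented for illustration; in the described system the mobile device itself supplies the location.

```python
from urllib.parse import urlparse

# Hypothetical location registry, in practice updated by the mobile device.
LOCATIONS = {"+49123456789": "Lecture hall 3, main campus"}

def render_homepage(url: str) -> str:
    """Build the 'location-aware homepage' for a URL such as
    http://mhp.net/+49123456789 at the moment it is requested."""
    phone = urlparse(url).path.lstrip("/")
    location = LOCATIONS.get(phone)
    if location is None:
        return "<html><body>Unknown subscriber.</body></html>"
    return f"<html><body>{phone} is currently at: {location}</body></html>"

page = render_homepage("http://mhp.net/+49123456789")
```

Because the page is generated per request, the mobile user can change what is exposed (or revoke it) at any time, which is the configurability the abstract refers to.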
3.
In recent years, on-demand transport systems (such as demand-bus systems) have attracted attention as a new transport service in Japan. An on-demand vehicle visits pick-up and delivery points door-to-door as requests occur. This service can be regarded as a cooperative (or competitive) profit problem among transport vehicles, so decision-making for this problem is an important factor in the profits of the vehicles (i.e., their drivers). However, it is difficult to find an optimal solution, because there are uncertain risks, e.g., the occurrence probability of requests and the selfishness of rival vehicles. This paper therefore proposes a transport policy that enables on-demand vehicles to manage these uncertain risks. First, we classify the profit of vehicles into "assured profit" and "potential profit". Second, we propose a "profit policy" and a "selection policy" based on this classification. The selection policy is further classified into "greedy", "mixed", "competitive", and "cooperative" variants. These selection policies are represented by selection probabilities over the next visit points, so as to cooperate or compete with other vehicles. Finally, we report simulation results and analyze the effectiveness of the proposed policies.
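One way to read the assured/potential split is as an expected-profit calculation, with the selection policy deciding how that expectation maps to a choice of next visit point. The sketch below shows a "greedy" argmax variant and a "mixed" probabilistic variant; the point data, weights, and policy names are illustrative assumptions, not the paper's exact formulation.

```python
import random

def expected_profit(assured, potential, p_request):
    """Assured profit is certain; potential profit materialises only if a
    request actually occurs at that point (probability p_request)."""
    return assured + p_request * potential

def choose_next(points, policy="greedy", rng=random):
    """points: {name: (assured, potential, p_request)}."""
    scores = {name: expected_profit(*vals) for name, vals in points.items()}
    if policy == "greedy":                 # always exploit the best-scoring point
        return max(scores, key=scores.get)
    # "mixed": sample a point with probability proportional to expected profit
    names = list(scores)
    return rng.choices(names, weights=[scores[n] for n in names])[0]

points = {"A": (5.0, 0.0, 0.0),    # an already-assigned, certain pick-up
          "B": (0.0, 20.0, 0.4)}   # a possible future request
```

Under these numbers the greedy policy prefers B (expected profit 8 versus 5), while the mixed policy still visits A sometimes, hedging against the request at B never occurring.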
4.
Alexander W. Dent 《International Journal of Information Security》2008,7(5):349-377
This paper surveys the literature on certificateless encryption schemes. In particular, we examine the large number of security
models that have been proposed to prove the security of certificateless encryption schemes and propose a new nomenclature
for these models. This allows us to “rank” the notions of security for a certificateless encryption scheme against an outside
attacker and a passive key generation centre, and we suggest which of these notions should be regarded as the “correct” model
for a secure certificateless encryption scheme. We also examine the security models that aim to provide security against an
actively malicious key generation centre and against an outside attacker who attempts to deceive a legitimate sender into
using an incorrect public key (with the intention of denying the legitimate receiver the ability to decrypt the ciphertext).
We note that the existing malicious key generation centre model fails to capture realistic attacks that a malicious key generation
centre might make and propose a new model. Lastly, we survey the existing certificateless encryption schemes and compare their
security proofs. We show that few schemes provide the “correct” notion of security without appealing to the random oracle
model. The few schemes that do provide sufficient security guarantees are comparatively inefficient. Hence, we conclude that
more research is needed before certificateless encryption schemes can be thought to be a practical technology.
5.
Prediction of compressive and tensile strength of Gaziantep basalts via neural networks and gene expression programming
In this paper, two soft computing approaches, artificial neural networks and Gene Expression Programming (GEP), are used to predict the strength of basalts collected from the Gaziantep region of Turkey. The basalt samples were tested in the geotechnical engineering laboratory of the University of Gaziantep. The experimentally determined parameters "ultrasound pulse velocity", "water absorption", "dry density", "saturated density", and "bulk density", obtained following the procedures given in ISRM (Rock characterisation testing and monitoring. Pergamon Press, Oxford, 1981), are used to predict the "uniaxial compressive strength" and "tensile strength" of Gaziantep basalts. It is found that neural networks are considerably more effective than GEP and classical regression analyses in predicting the strength of the basalts. The results obtained are also useful in characterizing Gaziantep basalts for practical applications.
6.
Allen Van Gelder Fumiaki Okushi 《Annals of Mathematics and Artificial Intelligence》1999,26(1-4):113-132
This paper describes new “lemma” and “cut” strategies that are efficient to apply in the setting of propositional Model Elimination.
Previous strategies for managing lemmas and C-literals in Model Elimination were oriented toward first-order theorem proving.
The original “cumulative” strategy remembers lemmas forever, and was found to be too inefficient. The previously reported
C-literal and unit-lemma strategies, such as “strong regularity”, forget them unnecessarily soon in the propositional domain.
An intermediate strategy, called “quasi-persistent” lemmas, is introduced. Supplementing this strategy, methods for “eager”
lemmas and two forms of controlled “cut” provide further efficiencies. The techniques have been incorporated into “Modoc”,
which is an implementation of Model Elimination, extended with a new pruning method that is designed to eliminate certain
refutation attempts that cannot succeed. Experimental data show that on random 3CNF formulas at the “hard” ratio of 4.27 clauses
per variable, Modoc is not as effective as recently reported model-searching methods. However, on more structured formulas
from applications, such as circuit-fault detection, it is superior.
This revised version was published online in June 2006 with corrections to the Cover Date.
7.
Service management and design has largely focused on the interactions between employees and customers. This perspective holds
that the quality of the “service experience” is primarily determined during this final “service encounter” that takes place
in the “front stage.” This emphasis discounts the contribution of the activities in the “back stage” of the service value
chain where materials or information needed by the front stage are processed. However, the vast increase in web-driven consumer
self-service applications and other automated services requires new thinking about service design and service quality. It
is essential to consider the entire network of services that comprise the back and front stages as complementary parts of
a “service system.” We need new concepts and methods in service design that recognize how back stage information and processes
can improve the front stage experience. This paper envisions a methodology for designing service systems that synthesizes
(front-stage-oriented) user-centered design techniques with (back stage) methods for designing information-intensive applications.
8.
Alexander Gendler Avi Mendelson Yitzhak Birk 《International journal of parallel programming》2006,34(2):171-188
Aggressive prefetching mechanisms improve performance of some important applications, but substantially increase bus traffic and "pressure" on cache tag arrays. They may even reduce performance of applications that are not memory bounded. We introduce a "feedback" mechanism, termed Prefetcher Assessment Buffer (PAB), which filters out requests that are unlikely to be useful. With this, applications that cannot benefit from aggressive prefetching will not suffer from their side-effects. The PAB is evaluated with different configurations, e.g., "all L1 accesses trigger prefetches" and "only misses to L1 trigger prefetches". When compared with the non-selective concurrent use of multiple prefetchers, the PAB's application to prefetching from main memory to the L2 cache can reduce the number of loads from main memory by up to 25% without losing performance. Application of more sophisticated techniques to prefetches between the L2- and L1-cache can increase IPC by 4% while reducing the traffic between the caches 8-fold.
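The feedback idea can be sketched in a few lines: issued prefetch addresses sit in an assessment buffer, a later demand access to a buffered address counts as a useful prefetch, and a prefetcher whose measured accuracy stays low is throttled. The capacity, warm-up count, and accuracy threshold below are invented parameters, not those of the paper's hardware design.

```python
from collections import deque

class PrefetchAssessmentBuffer:
    """Sketch of a PAB-style filter: track recent prefetches per prefetcher
    and only let 'accurate enough' prefetchers issue real requests."""
    def __init__(self, capacity=64, min_accuracy=0.25):
        self.pending = deque(maxlen=capacity)     # recent (prefetcher_id, address)
        self.issued = {}                          # prefetches issued per prefetcher
        self.useful = {}                          # prefetches later demanded
        self.min_accuracy = min_accuracy

    def on_prefetch(self, pid, addr):
        self.issued[pid] = self.issued.get(pid, 0) + 1
        self.pending.append((pid, addr))

    def on_demand_access(self, addr):
        """A demand access that hits a buffered prefetch proves it useful."""
        for pid, a in self.pending:
            if a == addr:
                self.useful[pid] = self.useful.get(pid, 0) + 1

    def allow(self, pid):
        """Filter decision: suppress prefetchers with poor measured accuracy."""
        issued = self.issued.get(pid, 0)
        if issued < 8:                            # warm-up: let it prove itself
            return True
        return self.useful.get(pid, 0) / issued >= self.min_accuracy

pab = PrefetchAssessmentBuffer()
for addr in range(20):
    pab.on_prefetch("stride", addr)               # always matches the demand stream
    pab.on_demand_access(addr)
    pab.on_prefetch("random", 1000 + addr)        # never matches the demand stream
```

After this warm-up, the accurate prefetcher keeps issuing requests while the inaccurate one is filtered out, so a workload that cannot benefit from it no longer pays its bus-traffic cost.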
9.
10.
Computer security
A strong factor in the early development of computers was security – the computations that motivated their development, such
as decrypting intercepted messages, generating gunnery tables, and developing weapons, had military applications. But the
computers themselves were so big and so few that they were relatively easy to protect simply by limiting physical access to
them to their programmers and operators. Today, computers have shrunk so that a web server can be hidden in a matchbox and
have become so common that few people can give an accurate count of the number they have in their homes and automobiles, much
less the number they use in the course of a day. Computers constantly communicate with one another; an isolated computer is
crippled. The meaning and implications of “computer security” have changed over the years as well. This paper reviews major
concepts and principles of computer security as it stands today. It strives not to delve deeply into specific technical areas
such as operating system security, access control, network security, intrusion detection, and so on, but to paint the topic
with a broad brush.
Published online: 27 July 2001
11.
Jose-Jesus Fernandez Jose-Roman Bilbao-Castro Roberto Marabini Jose-Maria Carazo Inmaculada Garcia 《New Generation Computing》2004,22(2):187-188
Conclusion. This short note describes a potential application of grid computing in the life sciences: high-resolution structure determination of biological specimens by electron microscope tomography. It is shown that there are excellent opportunities to benefit from grids: computationally intensive applications can exploit the "high-throughput" and "distributed supercomputing" capabilities of grids. Furthermore, grids may make it possible to solve problems not attempted so far, such as structure determination of large viruses at near-atomic resolution or reconstruction of whole cells at molecular resolution. Grid computing will make it possible to afford those "grand challenge" applications that are currently unapproachable.
12.
Dong-Xi Liu, Journal of Computer Science and Technology, 2007, 22(1): 44-53
The problem of regulating access to XML documents has attracted much attention from both the academic and industrial communities. In existing approaches, the XML elements specified by access policies are either accessible or inaccessible according to their sensitivity. However, in some cases the original XML elements are sensitive and inaccessible, but after being processed in appropriate ways the results become insensitive and thus accessible. This paper proposes a policy language to accommodate such cases, which can express downgrading operations on sensitive data in XML documents through explicit calculations on them. The proposed policy language is called calculation-embedded schema (CSchema); it extends ordinary schema languages with protection types for protecting sensitive data and specifying downgrading operations. CSchema has a type system that guarantees the type correctness of the embedded calculation expressions, and this type system also generates a security view after type checking a CSchema policy. Access policies specified by CSchema are enforced by a validation procedure, which produces released documents containing only the accessible data by validating the protected documents against CSchema policies. These released documents are then ready to be accessed by, for instance, XML query engines. By incorporating this validation procedure, other XML processing technologies can use CSchema as their access control module.
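The downgrading idea, releasing a computed, insensitive value in place of the sensitive elements themselves, can be sketched as below. The document structure and the "average instead of raw salaries" rule are invented examples, not CSchema syntax; CSchema would express such a calculation inside the schema and type-check it.

```python
import xml.etree.ElementTree as ET

def downgrade_salaries(doc: str) -> str:
    """Release an aggregate in place of sensitive values: individual <salary>
    elements are removed and replaced by one insensitive <avg_salary>."""
    root = ET.fromstring(doc)
    salaries = [float(s.text) for s in root.iter("salary")]
    for emp in root.iter("employee"):
        for s in list(emp.findall("salary")):
            emp.remove(s)                 # the raw value stays inaccessible
    avg = ET.SubElement(root, "avg_salary")
    avg.text = str(sum(salaries) / len(salaries))
    return ET.tostring(root, encoding="unicode")

doc = ("<staff>"
       "<employee><name>A</name><salary>1000</salary></employee>"
       "<employee><name>B</name><salary>3000</salary></employee>"
       "</staff>")
released = downgrade_salaries(doc)
```

The released document can then be handed to an XML query engine directly: every element it contains is accessible, so no further policy checks are needed downstream.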
13.
Andrew Fano 《Personal and Ubiquitous Computing》2001,5(1):12-15
The promise of mobile devices lies not in their capacity to duplicate the capabilities of desktop machines, but rather in their promise of enabling location-specific tasks. One of the challenges that must be addressed if they are to be used in this way is how to design intuitive interfaces for mobile devices that enable access to location-specific services and remain usable across locations. We are developing a prototype mobile valet application that presents location-specific services organised around the tasks associated with a location. The basic elements of the interface exploit commonalities in the way we address tasks at various locations, just as the familiar "file" and "edit" menus in various software applications exploit regularities in software tasks.
14.
Michiharu Kudo 《International Journal of Information Security》2002,1(2):116-130
Over the years a wide variety of access control models and policies have been proposed, and almost all the models have assumed
“grant the access request or deny it.” They do not provide any mechanism that enables us to bind authorization rules with
required operations such as logging and encryption. We propose the notion of a “provisional action” that tells the user that
his request will be authorized provided he (and/or the system) takes certain actions. The major advantage of our approach
is that arbitrary actions such as cryptographic operations can all coexist in the access control policy rules. We define a
fundamental authorization mechanism and then formalize a provision-based access control model. We also present algorithms
and describe their algorithmic complexity. Finally, we illustrate how provisional access control policy rules can be specified
effectively in practical usage scenarios.
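The shift from plain grant/deny to provision-based decisions can be sketched as a rule evaluation that returns required actions along with the decision. The rule set, field names, and provision names below are hypothetical illustrations, not the paper's formal model.

```python
def evaluate(policy, subject, obj, action):
    """Return (decision, provisions): access may be granted only on condition
    that the listed actions (e.g. logging, encryption) are actually taken."""
    for rule in policy:
        if (rule["subject"] == subject and rule["object"] == obj
                and rule["action"] == action):
            return rule["decision"], rule.get("provisions", [])
    return "deny", []                      # default-deny when no rule matches

policy = [
    {"subject": "doctor", "object": "record", "action": "read",
     "decision": "grant", "provisions": ["log_access", "encrypt_channel"]},
    {"subject": "clerk", "object": "record", "action": "read",
     "decision": "deny"},
]

decision, provisions = evaluate(policy, "doctor", "record", "read")
```

The caller (or the system) is then responsible for discharging the provisions before the access proceeds, which is how operations like logging and encryption come to coexist with the authorization rules.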
Published online: 22 January 2002
15.
Scott D. Stoller 《Distributed Computing》2000,13(2):85-98
Summary. This paper proposes a framework for detecting global state predicates in systems of processes with approximately synchronized real-time clocks. Timestamps from these clocks are used to define two orderings on events: "definitely occurred before" and "possibly occurred before". These orderings lead naturally to definitions of three distinct detection modalities, i.e., three meanings of "predicate held during a computation", namely: "possibly held", "definitely held", and "definitely held in a specific global state". This paper defines these modalities and gives efficient algorithms for detecting them. The algorithms are based on algorithms of Garg and Waldecker, Alagar and Venkatesan, Cooper and Marzullo, and Fromentin
and Raynal. Complexity analysis shows that under reasonable assumptions, these real-time-clock-based detection algorithms
are less expensive than detection algorithms based on Lamport's happened-before ordering. Sample applications are given to
illustrate the benefits of this approach.
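If clocks are synchronized to within some bound eps of real time, an event timestamped t occurred at a real time in [t - eps, t + eps], and the two orderings fall out of comparing these intervals. This is a minimal sketch under that assumption; the bound and timestamps are illustrative, and the paper's algorithms build predicate detection on top of such orderings.

```python
EPS = 0.005   # assumed clock-synchronization bound, in seconds

def definitely_before(t1, t2, eps=EPS):
    """True if e1's real-time interval [t1-eps, t1+eps] lies entirely
    before e2's interval: e1 must have occurred first."""
    return t1 + eps < t2 - eps

def possibly_before(t1, t2, eps=EPS):
    """True unless e2's interval lies entirely before e1's: the clocks
    cannot rule out that e1 occurred first."""
    return t1 - eps < t2 + eps
```

Events whose intervals overlap are "possibly before" each other in both directions, which is exactly the uncertainty that separates the "possibly held" and "definitely held" detection modalities.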
Received: January 1999 / Accepted: November 1999
16.
Tomás Sánchez López Damith Chinthana Ranasinghe Bela Patkai Duncan McFarlane 《Information Systems Frontiers》2011,13(2):281-300
Deployment of embedded technologies is increasingly being examined in industrial supply chains as a means for improving efficiency
through greater control over purchase orders, inventory and product related information. Central to this development has been
the advent of technologies such as bar codes, Radio Frequency Identification (RFID) systems, and wireless sensors, which, when attached to a product, form part of the product's embedded systems infrastructure. The increasing integration of these technologies
dramatically contributes to the evolving notion of a “smart product”, a product which is capable of incorporating itself into
both physical and information environments. The future of this revolution in objects equipped with smart embedded technologies
is one in which objects can not only identify themselves, but can also sense and store their condition, communicate with other
objects and distributed infrastructures, and take decisions related to managing their life cycle. The object can essentially
“plug” itself into a compatible systems infrastructure owned by different partners in a supply chain. However, as in any development
process that will involve more than one end user, the establishment of a common foundation and understanding is essential
for interoperability, efficient communication among involved parties and for developing novel applications. In this paper,
we contribute to creating that common ground by providing a characterization to aid the specification and construction of
“smart objects” and their underlying technologies. Furthermore, our work provides an extensive set of examples and potential
applications of different categories of smart objects.
17.
A. F. Newell P. Gregor M. Morgan G. Pullin C. Macaulay 《Universal Access in the Information Society》2011,10(3):235-243
Although “User-Centred”, “Participatory”, and other similar design approaches have proved to be very valuable for mainstream
design, their principles are more difficult to apply successfully when the user group contains, or is composed of, older and/or
disabled users. In the field of design for older and disabled people, the “Universal Design”, “Inclusive Design” and “Design
for All” movements have encouraged designers to extend their design briefs to include older and disabled people. The downside
of these approaches is that they can tend to encourage designers to follow a traditional design path to produce a prototype
design, and only then investigate how to modify their interfaces and systems to cope with older and/or disabled users. This
can lead to an inefficient design process and sometimes an inappropriate design, which may be “accessible” to people with
disabilities, but in practice unusable. This paper reviews the concept that the authors have called “User-Sensitive Inclusive
Design”, which suggests a different approach to designing for marginalised groups of people. Rather than suggesting that designers
rely on standards and guidelines, it is suggested that designers need to develop a real empathy with their user groups. A
number of ways to achieve this are recommended, including the use of ethnography and techniques derived from professional
theatre both for requirements gathering and for improving designers’ empathy for marginalised groups of users, such as older
and disabled people.
18.
"PostDock", a new visualization tool for the analysis and comparison of molecular docking results, is described. It processes
a docking results database and displays an interactive pseudo-3D snapshot of multiple ligand docking poses such that their
docking energies and docking poses are visually encoded for rapid assessment. The docking energies are represented by a transparency
scale whereas the docking poses are encoded by a color scale. The applications of PostDock for ligand–protein docking and
for a novel molecular design approach termed "reverse-docking" are presented.
19.
Erik Hollnagel’s body of work in the past three decades has molded much of the current research approach to system safety,
particularly notions of “error”. Hollnagel regards “error” as a dead-end and avoids using the term. This position is consistent
with Rasmussen’s claim that there is no scientifically stable category of human performance that can be described as “error”.
While this systems view is undoubtedly correct, “error” persists. Organizations, especially formal business, political, and
regulatory structures, use “error” as if it were a stable category of human performance. They apply the term to performances
associated with undesired outcomes, tabulate occurrences of “error”, and justify control and sanctions through “error”. Although
a compelling argument can be made for Hollnagel’s view, it is clear that notions of “error” are socially and organizationally
productive. The persistence of "error" in management and regulatory circles reflects its value as a means for social control.
20.
We address the problem of detecting irregularities in visual data, e.g., detecting suspicious behaviors in video sequences,
or identifying salient patterns in images. The term “irregular” depends on the context in which the “regular” or “valid” are
defined. Yet, it is not realistic to expect explicit definition of all possible valid configurations for a given context.
We pose the problem of determining the validity of visual data as a process of constructing a puzzle: We try to compose a
new observed image region or a new video segment (“the query”) using chunks of data (“pieces of puzzle”) extracted from previous
visual examples (“the database”). Regions in the observed data which can be composed using large contiguous chunks of data
from the database are considered very likely, whereas regions in the observed data which cannot be composed from the database
(or can be composed, but only using small fragmented pieces) are regarded as unlikely/suspicious. The problem is posed as
an inference process in a probabilistic graphical model. We show applications of this approach to identifying saliency in
images and video, for detecting suspicious behaviors and for automatic visual inspection for quality assurance.
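The composition principle can be illustrated on one-dimensional "signals": score each query window by whether it occurs as a contiguous chunk of the database, and flag windows with no large matching chunk as irregular. The string data, window size, and binary scoring are simplifying assumptions; the paper works on image regions and video segments with a probabilistic graphical model rather than exact matching.

```python
def chunk_score(query, database, window=4):
    """For each position, 1 if the surrounding window occurs contiguously
    somewhere in the database, else 0; low scores mark 'irregular' regions."""
    db = "".join(database)
    scores = []
    for i in range(len(query) - window + 1):
        scores.append(1 if query[i:i + window] in db else 0)
    return scores

database = ["abcabcabc"]        # previously seen "valid" examples
regular = "abcabca"             # composable from large database chunks
irregular = "abczbca"           # contains a pattern the database cannot explain
```

Regions composable from large contiguous database chunks score high, while regions that could only be assembled from small fragments (or not at all) score low and are reported as suspicious.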
Patent Pending